Updates from: 02/27/2024 02:08:28
Service Microsoft Docs article Related commit history on GitHub Change details
ai-services Legacy Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/legacy-models.md
Title: Azure OpenAI Service legacy models
+ Title: Azure OpenAI Service deprecated models
-description: Learn about the legacy models in Azure OpenAI.
+description: Learn about the deprecated models in Azure OpenAI.
Previously updated : 07/06/2023 Last updated : 02/26/2024 recommendations: false
-# Azure OpenAI Service legacy models
+# Azure OpenAI Service deprecated models
-Azure OpenAI Service offers a variety of models for different use cases. The following models are not available for new deployments beginning July 6, 2023. Deployments created prior to July 6, 2023 remain available to customers until July 5, 2024. We recommend customers migrate to the replacement models prior to the July 5, 2024 retirement.
+Azure OpenAI Service offers a variety of models for different use cases. The following models were deprecated on July 6, 2023 and will be retired on July 5, 2024. These models are no longer available for new deployments. Deployments created prior to July 6, 2023 remain available to customers until July 5, 2024. We recommend customers migrate their applications to deployments of replacement models prior to the July 5, 2024 retirement.
+
+At the time of retirement, deployments of these models will stop returning valid API responses.
## GPT-3.5
ai-services Model Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/model-versions.md
We want to make it easy for customers to stay up to date as models improve. Cus
When a customer deploys GPT-3.5-Turbo and GPT-4 on Azure OpenAI Service, the standard behavior is to deploy the current default version, for example, GPT-4 version 0314. When the default version changes to, say, GPT-4 version 0613, the deployment is automatically updated to version 0613 so that customer deployments feature the latest capabilities of the model.
-Customers can also deploy a specific version like GPT-4 0314 or GPT-4 0613 and choose an update policy, which can include the following options:
+Customers can also deploy a specific version like GPT-4 0613 and choose an update policy, which can include the following options:
* Deployments set to **Auto-update to default** automatically update to use the new default version.
* Deployments set to **Upgrade when expired** automatically update when their current version is retired.
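For illustration, pinning a specific version at deployment time can be sketched with the Azure CLI; the resource group, account, and deployment names below are placeholders, not values from this article:

```azurecli
# Deploy a pinned model version (all resource names are placeholders).
az cognitiveservices account deployment create \
  --resource-group myResourceGroup \
  --name myOpenAIResource \
  --deployment-name gpt-4-pinned \
  --model-format OpenAI \
  --model-name gpt-4 \
  --model-version "0613" \
  --sku-name Standard \
  --sku-capacity 1
```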
ai-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/models.md
Azure OpenAI Service is powered by a diverse set of models with different capabi
## GPT-4 and GPT-4 Turbo Preview
- GPT-4 can solve difficult problems with greater accuracy than any of OpenAI's previous models. Like GPT-3.5 Turbo, GPT-4 is optimized for chat and works well for traditional completions tasks. Use the Chat Completions API to use GPT-4. To learn more about how to interact with GPT-4 and the Chat Completions API check out our [in-depth how-to](../how-to/chatgpt.md).
+ GPT-4 is a large multimodal model (accepting text or image inputs and generating text) that can solve difficult problems with greater accuracy than any of OpenAI's previous models. Like GPT-3.5 Turbo, GPT-4 is optimized for chat and works well for traditional completions tasks. Use the Chat Completions API to use GPT-4. To learn more about how to interact with GPT-4 and the Chat Completions API check out our [in-depth how-to](../how-to/chatgpt.md).
+
+ GPT-4 Turbo with Vision is the version of GPT-4 that accepts image inputs. It is available as the `vision-preview` version of `gpt-4`.
- `gpt-4`
- `gpt-4-32k`
-- `gpt-4-vision`

You can see the token context length supported by each model in the [model summary table](#model-summary-table-and-region-availability).
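As a sketch of how an image input can be passed to a `vision-preview` deployment with the OpenAI Python v1.x client; the deployment name, API version, and image URL here are placeholder assumptions:

```python
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2023-12-01-preview",  # assumption: a vision-capable preview API version
)

# One user message mixing a text part and an image_url part.
response = client.chat.completions.create(
    model="gpt-4-vision",  # placeholder: your gpt-4 vision-preview deployment name
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {"type": "image_url", "image_url": {"url": "https://example.com/image.png"}},
        ],
    }],
    max_tokens=300,
)
print(response.choices[0].message.content)
```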
See [model versions](../concepts/model-versions.md) to learn about how Azure Ope
> [!NOTE]
> Version `0314` of `gpt-4` and `gpt-4-32k` will be retired no earlier than July 5, 2024. Version `0613` of `gpt-4` and `gpt-4-32k` will be retired no earlier than September 30, 2024. See [model updates](../how-to/working-with-models.md#model-updates) for model upgrade behavior.

- GPT-4 version 0125-preview is an updated version of the GPT-4 Turbo preview previously released as version 1106-preview. GPT-4 version 0125-preview completes tasks such as code generation more completely compared to gpt-4-1106-preview. Because of this, depending on the task, customers may find that GPT-4-0125-preview generates more output compared to gpt-4-1106-preview. We recommend customers compare the outputs of the new model. GPT-4-0125-preview also addresses bugs in gpt-4-1106-preview with UTF-8 handling for non-English languages.

> [!IMPORTANT]
The following Embeddings models are available with [Azure Government](/azure/azu
| Model ID | Feature Availability | Max Request (characters) |
|--|--|:--:|
| dalle2 | East US | 1,000 |
-| dalle3 | Sweden Central | 4,000 |
+| dall-e-3 | Sweden Central | 4,000 |
### Fine-tuning models
ai-services Use Your Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/use-your-data.md
You can send a streaming request using the `stream` parameter, allowing data to
{ "type": "AzureCognitiveSearch", "parameters": {
- "endpoint": "'$SearchEndpoint'",
- "key": "'$SearchKey'",
- "indexName": "'$SearchIndex'"
+ "endpoint": "'$AZURE_AI_SEARCH_ENDPOINT'",
+ "key": "'$AZURE_AI_SEARCH_API_KEY'",
+ "indexName": "'$AZURE_AI_SEARCH_INDEX'"
} } ],
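For orientation, a minimal streaming request with `curl` might look like the following sketch; the extensions route and API version are assumptions carried over from the migration examples later in this digest, and all variables are placeholders:

```bash
curl -s "$AZURE_OPENAI_ENDPOINT/openai/deployments/$DEPLOYMENT/extensions/chat/completions?api-version=2023-08-01-preview" \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -d '{
    "stream": true,
    "messages": [{"role": "user", "content": "What are my available health plans?"}],
    "dataSources": [{
      "type": "AzureCognitiveSearch",
      "parameters": {
        "endpoint": "'$AZURE_AI_SEARCH_ENDPOINT'",
        "key": "'$AZURE_AI_SEARCH_API_KEY'",
        "indexName": "'$AZURE_AI_SEARCH_INDEX'"
      }
    }]
  }'
```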
When you chat with a model, providing a history of the chat will help the model
{ "type": "AzureCognitiveSearch", "parameters": {
- "endpoint": "'$SearchEndpoint'",
- "key": "'$SearchKey'",
- "indexName": "'$SearchIndex'"
+ "endpoint": "'$AZURE_AI_SEARCH_ENDPOINT'",
+ "key": "'$AZURE_AI_SEARCH_API_KEY'",
+ "indexName": "'$AZURE_AI_SEARCH_INDEX'"
} } ],
ai-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/encrypt-data-at-rest.md
Previously updated : 11/14/2022 Last updated : 2/21/2024
Azure OpenAI is part of Azure AI services. Azure AI services data is encrypted a
By default, your subscription uses Microsoft-managed encryption keys. There's also the option to manage your subscription with your own keys called customer-managed keys (CMK). CMK offers greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data.
-## Customer-managed keys with Azure Key Vault
+## Use customer-managed keys with Azure Key Vault
Customer-managed keys (CMK), also known as Bring your own key (BYOK), offer greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data.

You must use Azure Key Vault to store your customer-managed keys. You can either create your own keys and store them in a key vault, or you can use the Azure Key Vault APIs to generate keys. The Azure AI services resource and the key vault must be in the same region and in the same Microsoft Entra tenant, but they can be in different subscriptions. For more information about Azure Key Vault, see [What is Azure Key Vault?](../../key-vault/general/overview.md).
-To enable customer-managed keys, you must also enable both the **Soft Delete** and **Do Not Purge** properties on the key vault.
+To enable customer-managed keys, the key vault containing your keys must meet these requirements:
-Only RSA keys of size 2048 are supported with Azure AI services encryption. For more information about keys, see **Key Vault keys** in [About Azure Key Vault keys, secrets and certificates](../../key-vault/general/about-keys-secrets-certificates.md).
+- You must enable both the **Soft Delete** and **Do Not Purge** properties on the key vault.
+- If you use the [Key Vault firewall](/azure/key-vault/general/access-behind-firewall), you must allow trusted Microsoft services to access the key vault.
+- The key vault must use [legacy access policies](/azure/key-vault/general/assign-access-policy).
+- You must grant the Azure OpenAI resource's system-assigned managed identity the following permissions on your key vault: *get key*, *wrap key*, *unwrap key*.
-## Enable customer-managed keys for your resource
+Only RSA and RSA-HSM keys of size 2048 are supported with Azure AI services encryption. For more information about keys, see **Key Vault keys** in [About Azure Key Vault keys, secrets and certificates](../../key-vault/general/about-keys-secrets-certificates.md).
+
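A sketch of creating a vault that meets these requirements with the Azure CLI; the names and region are placeholders, and soft delete is already enabled by default on new vaults:

```azurecli
# Create a key vault with purge protection (soft delete is on by default).
az keyvault create \
  --name myKeyVault \
  --resource-group myResourceGroup \
  --location eastus \
  --enable-purge-protection true

# Add an RSA 2048 key to use as the customer-managed key.
az keyvault key create --vault-name myKeyVault --name myCmkKey --kty RSA --size 2048
```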
+### Enable your Azure OpenAI resource's managed identity
+
+1. Go to your Azure AI services resource.
+1. On the left, under **Resource Management**, select **Identity**.
+1. Switch the system-assigned managed identity status to **On**.
+1. Save your changes, and confirm that you want to enable the system-assigned managed identity.
+
+### Configure your key vault's access permissions
+
+1. In the Azure portal, go to your key vault.
+1. On the left, select **Access policies**.
+
+ If you see a message advising you that access policies aren't available, [reconfigure your key vault to use legacy access policies](/azure/key-vault/general/assign-access-policy) before continuing.
+1. Select **Create**.
+1. Under **Key permissions**, select **Get**, **Wrap Key**, and **Unwrap Key**. Leave the remaining checkboxes unselected.
+
+ :::image type="content" source="../media/cognitive-services-encryption/key-vault-access-policy.png" alt-text="Screenshot of the Azure portal page for a key vault access policy. The permissions selected are Get Key, Wrap Key, and Unwrap Key.":::
+
+1. Select **Next**.
+1. Search for the name of your Azure OpenAI resource and select its managed identity.
+1. Select **Next**.
+1. Select **Next** to skip configuring any application settings.
+1. Select **Create**.
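The same policy assignment can be sketched with the Azure CLI; resource names are placeholders:

```azurecli
# Look up the system-assigned managed identity of the Azure OpenAI resource.
principalId=$(az cognitiveservices account show \
  --name myOpenAIResource --resource-group myResourceGroup \
  --query identity.principalId -o tsv)

# Grant the identity the get, wrap key, and unwrap key permissions.
az keyvault set-policy \
  --name myKeyVault \
  --object-id "$principalId" \
  --key-permissions get wrapKey unwrapKey
```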
+
+### Enable customer-managed keys on your Azure OpenAI resource
To enable customer-managed keys in the Azure portal, follow these steps:

1. Go to your Azure AI services resource.
-1. On the left, select **Encryption**.
+1. On the left, under **Resource Management**, select **Encryption**.
1. Under **Encryption type**, select **Customer Managed Keys**, as shown in the following screenshot.
-> [!div class="mx-imgBorder"]
-> ![Screenshot of create a resource user experience](./media/encryption/encryption.png)
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of create a resource user experience.](./media/encryption/encryption.png)
-## Specify a key
+### Specify a key
After you enable customer-managed keys, you can specify a key to associate with the Azure AI services resource.
-### Specify a key as a URI
+#### Specify a key as a URI
To specify a key as a URI, follow these steps:

1. In the Azure portal, go to your key vault.
-1. Under **Settings**, select **Keys**.
+1. Under **Objects**, select **Keys**.
1. Select the desired key, and then select the key to view its versions. Select a key version to view the settings for that version.
1. Copy the **Key Identifier** value, which provides the URI.
To specify a key as a URI, follow these steps:
1. Under **Subscription**, select the subscription that contains the key vault.
1. Save your changes.
-### Specify a key from a key vault
+#### Select a key from a key vault
-To specify a key from a key vault, first make sure that you have a key vault that contains a key. Then follow these steps:
+To select a key from a key vault, first make sure that you have a key vault that contains a key. Then follow these steps:
1. Go to your Azure AI services resource, and then select **Encryption**.
1. Under **Encryption key**, select **Select from Key Vault**.
ai-services Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/embeddings.md
Our embedding models may be unreliable or pose social risks in certain cases, an
* Store your embeddings and perform vector (similarity) search using your choice of Azure service:
  * [Azure AI Search](../../../search/vector-search-overview.md)
  * [Azure Cosmos DB for MongoDB vCore](../../../cosmos-db/mongodb/vcore/vector-search.md)
+ * [Azure SQL Database](/azure/azure-sql/database/ai-artificial-intelligence-intelligent-applications?view=azuresql&preserve-view=true#vector-search)
  * [Azure Cosmos DB for NoSQL](../../../cosmos-db/vector-search.md)
  * [Azure Cosmos DB for PostgreSQL](../../../cosmos-db/postgresql/howto-use-pgvector.md)
  * [Azure Database for PostgreSQL - Flexible Server](../../../postgresql/flexible-server/how-to-use-pgvector.md)
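Before any of these stores enter the picture, generating the embedding itself is a single call. A minimal sketch with the OpenAI Python v1.x client, assuming a `text-embedding-ada-002` deployment name:

```python
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2023-05-15",
)

# Embed two strings with an embeddings deployment (placeholder name).
result = client.embeddings.create(
    model="text-embedding-ada-002",
    input=["first document", "second document"],
)
a, b = (item.embedding for item in result.data)

# Cosine similarity between the two vectors.
dot = sum(x * y for x, y in zip(a, b))
norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
print(dot / norm)
```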
ai-services Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/migration.md
Title: How to migrate to OpenAI Python v1.x
-description: Learn about migrating to the latest release of the OpenAI Python library with Azure OpenAI
+description: Learn about migrating to the latest release of the OpenAI Python library with Azure OpenAI.
Previously updated : 11/15/2023 Last updated : 02/26/2024
OpenAI has just released a new version of the [OpenAI Python API library](https:
## Updates

-- This is a completely new version of the OpenAI Python API library.
+- This is a new version of the OpenAI Python API library.
- Starting on November 6, 2023 `pip install openai` and `pip install openai --upgrade` will install `version 1.x` of the OpenAI Python library.
- Upgrading from `version 0.28.1` to `version 1.x` is a breaking change; you'll need to test and update your code.
- Auto-retry with backoff if there's an error
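For orientation, a minimal v1.x call against Azure OpenAI looks roughly like the following sketch; the deployment name and API version are placeholder assumptions, and the sections below show fuller before/after examples:

```python
import os
from openai import AzureOpenAI  # v1.x replaces module-level configuration with a client object

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2023-12-01-preview",
)

completion = client.chat.completions.create(
    model="gpt-35-turbo",  # placeholder: your deployment name
    messages=[{"role": "user", "content": "Hello"}],
)
print(completion.choices[0].message.content)
```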
print(completion.model_dump_json(indent=2))
## Use your data
-For the full configuration steps that are required to make these code examples work, please consult the [use your data quickstart](../use-your-data-quickstart.md).
+For the full configuration steps that are required to make these code examples work, consult the [use your data quickstart](../use-your-data-quickstart.md).
# [OpenAI Python 0.28.1](#tab/python)

```python
import requests
dotenv.load_dotenv()
-openai.api_base = os.environ.get("AOAIEndpoint")
+openai.api_base = os.environ.get("AZURE_OPENAI_ENDPOINT")
openai.api_version = "2023-08-01-preview"
openai.api_type = 'azure'
-openai.api_key = os.environ.get("AOAIKey")
+openai.api_key = os.environ.get("AZURE_OPENAI_API_KEY")
def setup_byod(deployment_id: str) -> None:
    """Sets up the OpenAI Python SDK to use your own data for the chat endpoint.
def setup_byod(deployment_id: str) -> None:
openai.requestssession = session
-aoai_deployment_id = os.environ.get("AOAIDeploymentId")
+aoai_deployment_id = os.environ.get("AZURE_OPEN_AI_DEPLOYMENT_ID")
setup_byod(aoai_deployment_id)

completion = openai.ChatCompletion.create(
    messages=[{"role": "user", "content": "What are the differences between Azure Machine Learning and Azure AI services?"}],
- deployment_id=os.environ.get("AOAIDeploymentId"),
+ deployment_id=os.environ.get("AZURE_OPEN_AI_DEPLOYMENT_ID"),
    dataSources=[  # camelCase is intentional, as this is the format the API expects
        {
            "type": "AzureCognitiveSearch",
            "parameters": {
- "endpoint": os.environ.get("SearchEndpoint"),
- "key": os.environ.get("SearchKey"),
- "indexName": os.environ.get("SearchIndex"),
+ "endpoint": os.environ.get("AZURE_AI_SEARCH_ENDPOINT"),
+ "key": os.environ.get("AZURE_AI_SEARCH_API_KEY"),
+ "indexName": os.environ.get("AZURE_AI_SEARCH_INDEX"),
} } ]
import dotenv
dotenv.load_dotenv()
-endpoint = os.environ.get("AOAIEndpoint")
-api_key = os.environ.get("AOAIKey")
-deployment = os.environ.get("AOAIDeploymentId")
+endpoint = os.environ.get("AZURE_OPENAI_ENDPOINT")
+api_key = os.environ.get("AZURE_OPENAI_API_KEY")
+deployment = os.environ.get("AZURE_OPEN_AI_DEPLOYMENT_ID")
client = openai.AzureOpenAI(
    base_url=f"{endpoint}/openai/deployments/{deployment}/extensions",
completion = client.chat.completions.create(
{ "type": "AzureCognitiveSearch", "parameters": {
- "endpoint": os.environ["SearchEndpoint"],
- "key": os.environ["SearchKey"],
- "indexName": os.environ["SearchIndex"]
+ "endpoint": os.environ["AZURE_AI_SEARCH_ENDPOINT"],
+ "key": os.environ["AZURE_AI_SEARCH_API_KEY"],
+ "indexName": os.environ["AZURE_AI_SEARCH_INDEX"]
} } ]
ai-services Use Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/use-web-app.md
Previously updated : 02/09/2024 Last updated : 02/23/2024 recommendations: false
When customizing the app, we recommend:
- When you rotate API keys for your Azure OpenAI or Azure AI Search resource, be sure to update the app settings for each of your deployed apps to use the new keys.
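A sketch of that update with the Azure CLI; the app name and setting names here are assumptions to be matched against the settings your deployed app actually uses:

```azurecli
# Push rotated keys into a deployed web app's settings (names are placeholders).
az webapp config appsettings set \
  --resource-group myResourceGroup \
  --name myChatApp \
  --settings AZURE_OPENAI_KEY="<new-openai-key>" AZURE_SEARCH_KEY="<new-search-key>"
```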
+Sample source code for the Azure OpenAI On Your Data web app is available on [GitHub](https://github.com/microsoft/sample-app-aoai-chatGPT). Source code is provided "as is" and as a sample only. Customers are responsible for all customization and implementation of their web apps using Azure OpenAI On Your Data.
+
### Updating the web app

We recommend pulling changes from the `main` branch for the web app's source code frequently to ensure you have the latest bug fixes, API version, and improvements.
ai-services Use Your Data Securely https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/use-your-data-securely.md
When using the API, pass the `filter` parameter in each API request. For example
{ "type": "AzureCognitiveSearch", "parameters": {
- "endpoint": "'$SearchEndpoint'",
- "key": "'$SearchKey'",
- "indexName": "'$SearchIndex'",
+ "endpoint": "'$AZURE_AI_SEARCH_ENDPOINT'",
+ "key": "'$AZURE_AI_SEARCH_API_KEY'",
+ "indexName": "'$AZURE_AI_SEARCH_INDEX'",
"filter": "my_group_ids/any(g:search.in(g, 'group_id1, group_id2'))" } }
ai-services Working With Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/working-with-models.md
You can get a list of models that are available for both inference and fine-tuni
## Model updates
-Azure OpenAI now supports automatic updates for select model deployments. On models where automatic update support is available, a model version drop-down will be visible in Azure OpenAI Studio under **Create new deployment** and **Edit deployment**:
+Azure OpenAI now supports automatic updates for select model deployments. On models where automatic update support is available, a model version drop-down is visible in Azure OpenAI Studio under **Create new deployment** and **Edit deployment**:
:::image type="content" source="../media/models/auto-update.png" alt-text="Screenshot of the deploy model UI of Azure OpenAI Studio." lightbox="../media/models/auto-update.png":::
You can learn more about Azure OpenAI model versions and how they work in the [A
### Auto update to default
-When **Auto-update to default** is selected your model deployment will be automatically updated within two weeks of a change in the default version. For a preview version, it will update automatically when a new preview version is available starting two weeks after the new preview version is released.
+When you set your deployment to **Auto-update to default**, your model deployment is automatically updated within two weeks of a change in the default version. For a preview version, it updates automatically when a new preview version is available starting two weeks after the new preview version is released.
If you're still in the early testing phases for inference models, we recommend deploying models with **auto-update to default** set whenever it's available.

### Specific model version
-As your use of Azure OpenAI evolves, and you start to build and integrate with applications you might want to manually control model updates so that you can first test and validate that model performance is remaining consistent for your use case prior to upgrade.
+As your use of Azure OpenAI evolves, and you start to build and integrate with applications you might want to manually control model updates. You can first test and validate that your application behavior is consistent for your use case before upgrading.
-When you select a specific model version for a deployment this version will remain selected until you either choose to manually update yourself, or once you reach the retirement date for the model. When the retirement date is reached the model will automatically upgrade to the default version at the time of retirement.
+When you select a specific model version for a deployment, this version remains selected until you choose to manually update it, or until you reach the retirement date for the model. When the retirement date is reached, the model automatically upgrades to the default version at the time of retirement.
-## Viewing deprecation dates
+## Viewing retirement dates
For currently deployed models, from Azure OpenAI Studio select **Deployments**:

:::image type="content" source="../media/models/deployments.png" alt-text="Screenshot of the deployment UI of Azure OpenAI Studio." lightbox="../media/models/deployments.png":::
-To view deprecation/expiration dates for all available models in a given region from Azure OpenAI Studio select **Models** > **Column options** > Select **Deprecation fine tune** and **Deprecation inference**:
+To view retirement dates for all available models in a given region from Azure OpenAI Studio, select **Models** > **Column options** > Select **Deprecation fine tune** and **Deprecation inference**:
:::image type="content" source="../media/models/column-options.png" alt-text="Screenshot of the models UI of Azure OpenAI Studio." lightbox="../media/models/column-options.png":::
You can check what model upgrade options are set for previously deployed models
:::image type="content" source="../media/how-to/working-with-models/deployments.png" alt-text="Screenshot of the deployments pane with a deployment name highlighted." lightbox="../media/how-to/working-with-models/deployments.png":::
-This will open the **Properties** for the model deployment. You can view what upgrade options are set for your deployment under **Version update policy**:
+Selecting a deployment name opens the **Properties** for the model deployment. You can view what upgrade options are set for your deployment under **Version update policy**:
:::image type="content" source="../media/how-to/working-with-models/update-policy.png" alt-text="Screenshot of the model deployments property UI." lightbox="../media/how-to/working-with-models/update-policy.png":::
The corresponding property can also be accessed via [REST](../how-to/working-wit
|Option| Read | Update |
|--|--|--|
-| [REST](../how-to/working-with-models.md#model-deployment-upgrade-configuration) | Yes. If `versionUpgradeOption` is not returned it means it is `null` |Yes |
+| [REST](../how-to/working-with-models.md#model-deployment-upgrade-configuration) | Yes. If `versionUpgradeOption` is not returned, it means it is `null` |Yes |
| [Azure PowerShell](/powershell/module/az.cognitiveservices/get-azcognitiveservicesaccountdeployment) | Yes. `VersionUpgradeOption` can be checked for `$null`| Yes |
| [Azure CLI](/cli/azure/cognitiveservices/account/deployment#az-cognitiveservices-account-deployment-show) | Yes. It shows `null` if `versionUpgradeOption` is not set.| *No.* It is currently not possible to update the version upgrade option.|
There are three distinct model deployment upgrade options:
| Name | Description |
|--|--|
-| `OnceNewDefaultVersionAvailable` | Once a new version is designated as the default, the model deployment will automatically upgrade to the default version within two weeks of that designation change being made. |
-|`OnceCurrentVersionExpired` | Once the retirement date is reached the model deployment will automatically upgrade to the current default version. |
-|`NoAutoUpgrade` | The model deployment will never automatically upgrade. Once the retirement date is reached the model deployment will stop working. You will need to update your code referencing that deployment to point to a nonexpired model deployment. |
+| `OnceNewDefaultVersionAvailable` | Once a new version is designated as the default, the model deployment automatically upgrades to the default version within two weeks of that designation change being made. |
+|`OnceCurrentVersionExpired` | Once the retirement date is reached, the model deployment automatically upgrades to the current default version. |
+|`NoAutoUpgrade` | The model deployment never automatically upgrades. Once the retirement date is reached, the model deployment stops working. You need to update your code referencing that deployment to point to a nonexpired model deployment. |
> [!NOTE]
-> `null` is equivalent to `AutoUpgradeWhenExpired`. If the **Version update policy** option is not present in the properties for a model that supports model upgrades this indicates the value is currently `null`. Once you explicitly modify this value the property will be visible in the studio properties page as well as via the REST API.
+> `null` is equivalent to `AutoUpgradeWhenExpired`. If the **Version update policy** option is not present in the properties for a model that supports model upgrades, this indicates the value is currently `null`. Once you explicitly modify this value, the property is visible in the studio properties page as well as via the REST API.
### Examples
New-AzCognitiveServicesAccountDeployment -ResourceGroupName {ResourceGroupName}
# [REST](#tab/rest)
-To query the current model deployment settings including the deployment upgrade configuration for a given resource use [`Deployments List`](/rest/api/cognitiveservices/accountmanagement/deployments/list?tabs=HTTP#code-try-0). If the value is null you won't see a `versionUpgradeOption` property.
+To query the current model deployment settings including the deployment upgrade configuration for a given resource use [`Deployments List`](/rest/api/cognitiveservices/accountmanagement/deployments/list?tabs=HTTP#code-try-0). If the value is null, you won't see a `versionUpgradeOption` property.
```http
GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CognitiveServices/accounts/{accountName}/deployments?api-version=2023-05-01
```
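A trimmed sketch of what such a response can look like; the values are illustrative only, and `versionUpgradeOption` appears under `properties` once it has been explicitly set:

```json
{
  "value": [
    {
      "name": "gpt-4-pinned",
      "properties": {
        "model": { "format": "OpenAI", "name": "gpt-4", "version": "0613" },
        "versionUpgradeOption": "NoAutoUpgrade",
        "provisioningState": "Succeeded"
      }
    }
  ]
}
```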
ai-studio Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/architecture.md
The top level AI Studio resources (AI hub and AI projects) are based on Azure Ma
An AI hub can have multiple child AI projects. Each AI project can have its own set of project-scoped connections.
-### Tenant separation
+### Microsoft-hosted resources
-While most of the resources used by Azure AI Studio live in your Azure subscription, some resources exist in the Azure AI Studio tenant. The Azure AI Studio tenant is a separate Microsoft Entra ID tenant that provides some of the services used by Azure AI Studio. The following resources are in the Azure AI Studio tenant:
+While most of the resources used by Azure AI Studio live in your Azure subscription, some resources are in an Azure subscription managed by Microsoft. This subscription provides some of the services used by Azure AI Studio. The following resources are in the Microsoft-managed Azure subscription, and don't appear in your Azure subscription:
-- **Managed compute resources**: Provided by Azure Batch resources in the Azure AI Studio tenant.
-- **Managed virtual network**: Provided by Azure Virtual Network resources in the Azure AI Studio tenant. If FQDN rules are enabled, an Azure Firewall (standard) is added and charged to your subscription. For more information, see [Configure a managed virtual network for Azure AI Studio](../how-to/configure-managed-network.md).
-- **Metadata storage**: Provided by Azure Cosmos DB, Azure AI Search, and Azure Storage Account in the Azure AI Studio tenant. If you use customer-managed keys, these resources are created in your subscription. For more information, see [Customer-managed keys](../../ai-services/encryption/cognitive-services-encryption-keys-portal.md?context=/azure/ai-studio/context/context).
+- **Managed compute resources**: Provided by Azure Batch resources in the Microsoft subscription.
+- **Managed virtual network**: Provided by Azure Virtual Network resources in the Microsoft subscription. If FQDN rules are enabled, an Azure Firewall (standard) is added and charged to your subscription. For more information, see [Configure a managed virtual network for Azure AI Studio](../how-to/configure-managed-network.md).
+- **Metadata storage**: Provided by Azure Cosmos DB, Azure AI Search, and Azure Storage Account in the Microsoft subscription.
+
+ > [!NOTE]
+ > If you use customer-managed keys, the metadata storage resources are created in your subscription. For more information, see [Customer-managed keys](../../ai-services/encryption/cognitive-services-encryption-keys-portal.md?context=/azure/ai-studio/context/context).
+
+Managed compute resources and managed virtual networks exist in the Microsoft subscription, but are managed by you. For example, you control which VM sizes are used for compute resources, and which outbound rules are configured for the managed virtual network.
+
+Managed compute resources also require vulnerability management. This is a shared responsibility between you and Microsoft. For more information, see [vulnerability management](vulnerability-management.md).
## Azure resource providers
Create an AI hub using one of the following methods:
- [Azure AI Studio](../how-to/create-azure-ai-resource.md#create-an-azure-ai-hub-resource-in-ai-studio): Create an AI hub for getting started.
- [Azure portal](../how-to/create-azure-ai-resource.md#create-a-secure-azure-ai-hub-resource-in-the-azure-portal): Create an AI hub with your own networking, encryption, identity and access management, dependent resources, and resource tag settings.
-- [Bicep template](../how-to/create-azure-ai-hub-template.md).
+- [Bicep template](../how-to/create-azure-ai-hub-template.md).
ai-studio Deploy Models Mistral https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-mistral.md
+
+ Title: How to deploy Mistral family of models with Azure AI Studio
+
+description: Learn how to deploy Mistral Large with Azure AI Studio.
+Last updated : 02/23/2024
+reviewer: shubhirajMsft
+# How to deploy Mistral models with Azure AI Studio
+
+Mistral AI offers two categories of models in AI Studio:
+* Premium models: Mistral Large. These models are available with pay-as-you-go token-based billing with Models as a Service in the AI Studio model catalog.
+* Open models: Mixtral-8x7B-Instruct-v01, Mixtral-8x7B-v01, Mistral-7B-Instruct-v01, and Mistral-7B-v01. These models are also available in the AI Studio model catalog and can be deployed to dedicated VM instances in your own Azure subscription with Managed Online Endpoints.
+
+You can browse the Mistral family of models in the Model Catalog by filtering on the Mistral collection.
+
+## Mistral Large
+
+In this article, you learn how to use Azure AI Studio to deploy the Mistral Large model as a service with pay-as-you-go billing.
+
+Mistral Large is Mistral AI's most advanced Large Language Model (LLM). It can be used on any language-based task thanks to its state-of-the-art reasoning and knowledge capabilities.
+
+Additionally, mistral-large is:
+
+* Specialized in RAG. Crucial information isn't lost in the middle of long context windows (up to 32K tokens).
+* Strong in coding. Code generation, review, and comments. Supports all mainstream coding languages.
+* Multi-lingual by design. Best-in-class performance in French, German, Spanish, and Italian - in addition to English. Dozens of other languages are supported.
+* Responsible AI. Efficient guardrails baked into the model, with an additional safety layer via the `safe_mode` option.
++
+## Deploy Mistral Large with pay-as-you-go
+
+Certain models in the model catalog can be deployed as a service with pay-as-you-go, providing a way to consume them as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need. This deployment option doesn't require quota from your subscription.
+
+Mistral Large can be deployed as a service with pay-as-you-go, and is offered by Mistral AI through the Microsoft Azure Marketplace. Note that Mistral AI can change or update the terms of use and pricing of this model.
+
+### Prerequisites
+
+- An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.
+- An [Azure AI hub resource](../how-to/create-azure-ai-resource.md).
+
+ > [!IMPORTANT]
+ > Pay-as-you-go model deployment offering is only available in AI hubs created in **East US 2** and **France Central** regions.
+
+- An [Azure AI project](../how-to/create-projects.md) in Azure AI Studio.
+- Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Studio. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the Resource Group.
+
+ For more information on permissions, see [Role-based access control in Azure AI Studio](../concepts/rbac-ai-studio.md).
++
+### Create a new deployment
+
+To create a deployment:
+
+1. Sign in to [Azure AI Studio](https://ai.azure.com).
+1. Go to the Azure AI Studio [model catalog](https://ai.azure.com/explore/models) under the **Explore** tab and search for Mistral-large.
+
+ Alternatively, you can initiate a deployment by starting from your project in AI Studio. From the **Build** tab of your project, select the **Deployments** option, then select **+ Create**.
+
+1. In the model catalog, on the model's **Details** page, select **Deploy** and then **Pay-as-you-go**.
+
+ :::image type="content" source="../media/deploy-monitor/mistral/mistral-deploy-pay-as-you-go.png" alt-text="A screenshot showing how to deploy a model with the pay-as-you-go option." lightbox="../media/deploy-monitor/mistral/mistral-deploy-pay-as-you-go.png":::
+
+1. Select the project in which you want to deploy your model. To deploy the Mistral-large model, your project must belong to the **East US 2** or **France Central** regions.
+1. In the deployment wizard, select the link to **Azure Marketplace Terms** to learn more about the terms of use.
+1. You can also select the **Marketplace offer details** tab to learn about pricing for the selected model.
+1. If this is your first time deploying the model in the project, you have to subscribe your project for the particular offering. This step requires that your account has the **Azure AI Developer role** permissions on the Resource Group, as listed in the prerequisites. Each project has its own subscription to the particular Azure Marketplace offering of the model, which allows you to control and monitor spending. Select **Subscribe and Deploy**. Currently you can have only one deployment for each model within a project.
+
+ :::image type="content" source="../media/deploy-monitor/mistral/mistral-deploy-marketplace-terms.png" alt-text="A screenshot showing the terms and conditions of a given model." lightbox="../media/deploy-monitor/mistral/mistral-deploy-marketplace-terms.png":::
+
+1. Once you subscribe the project to the particular Azure Marketplace offering, subsequent deployments of the _same_ offering in the _same_ project don't require subscribing again. If this scenario applies to you, you'll see a **Continue to deploy** option to select. (Currently, you can have only one deployment for each model within a project.)
+
+ :::image type="content" source="../media/deploy-monitor/mistral/mistral-deploy-pay-as-you-go-project.png" alt-text="A screenshot showing a project that is already subscribed to the offering." lightbox="../media/deploy-monitor/mistral/mistral-deploy-pay-as-you-go-project.png":::
+
+1. Give the deployment a name. This name becomes part of the deployment API URL. This URL must be unique in each Azure region.
+
+ :::image type="content" source="../media/deploy-monitor/mistral/mistral-deployment-name.png" alt-text="A screenshot showing how to indicate the name of the deployment you want to create." lightbox="../media/deploy-monitor/mistral/mistral-deployment-name.png":::
+
+1. Select **Deploy**. Wait until the deployment is ready and you're redirected to the Deployments page.
+1. Select **Open in playground** to start interacting with the model.
+1. You can return to the Deployments page, select the deployment, and note the endpoint's **Target** URL and the Secret **Key**, which you can use to call the deployment for chat completions using the [`<target_url>/v1/chat/completions`](#chat-api) API.
+1. You can always find the endpoint's details, URL, and access keys by navigating to the **Build** tab and selecting **Deployments** from the Components section.
+
+To learn about billing for the Mistral AI model deployed with pay-as-you-go, see [Cost and quota considerations for Mistral Large deployed as a service](#cost-and-quota-considerations-for-mistral-large-deployed-as-a-service).
+
+### Consume the Mistral Large model as a service
+
+Mistral Large can be consumed using the chat API.
+
+1. On the **Build** page, select **Deployments**.
+
+1. Find and select the deployment you created.
+
+1. Copy the **Target** URL and the **Key** value.
+
+1. Make an API request to the chat completions endpoint, [`<target_url>/v1/chat/completions`](#chat-api).
+
+ For more information on using the APIs, see the [reference](#reference-for-mistral-large-deployed-as-a-service) section.
+
+### Reference for Mistral Large deployed as a service
+
+#### Chat API
+
+Use the method `POST` to send the request to the `/v1/chat/completions` route:
+
+__Request__
+
+```rest
+POST /v1/chat/completions HTTP/1.1
+Host: <DEPLOYMENT_URI>
+Authorization: Bearer <TOKEN>
+Content-type: application/json
+```
+
+#### Request schema
+
+The payload is a JSON-formatted string containing the following parameters:
+
+| Key | Type | Default | Description |
+|--|--|--|--|
+| `messages` | `string` | No default. This value must be specified. | The message or history of messages to use to prompt the model. |
+| `stream` | `boolean` | `False` | Streaming allows the generated tokens to be sent as data-only server-sent events whenever they become available. |
+| `max_tokens` | `integer` | `8192` | The maximum number of tokens to generate in the completion. The token count of your prompt plus `max_tokens` can't exceed the model's context length. |
+| `top_p` | `float` | `1` | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with `top_p` probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering `top_p` or `temperature`, but not both. |
+| `temperature` | `float` | `1` | The sampling temperature to use, between 0 and 2. Higher values mean the model samples more broadly from the distribution of tokens. Zero means greedy sampling. We recommend altering this or `top_p`, but not both. |
+| `ignore_eos` | `boolean` | `False` | Whether to ignore the EOS token and continue generating tokens after the EOS token is generated. |
+| `safe_prompt` | `boolean` | `False` | Whether to inject a safety prompt before all conversations. |
+
+The `messages` object has the following fields:
+
+| Key | Type | Value |
+|--|--|--|
+| `content` | `string` | The contents of the message. Content is required for all messages. |
+| `role` | `string` | The role of the message's author. One of `system`, `user`, or `assistant`. |
++
+#### Example
+
+__Body__
+
+```json
+{
+ "messages":
+ [
+ {
+ "role": "system",
+ "content": "You are a helpful assistant that translates English to Italian."
+ },
+ {
+ "role": "user",
+ "content": "Translate the following sentence from English to Italian: I love programming."
+ }
+ ],
+ "temperature": 0.8,
+    "max_tokens": 512
+}
+```
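Putting the route and body together, a request sketch with `curl`; `<DEPLOYMENT_URI>` and `<TOKEN>` stand in for the **Target** URL host and **Key** noted earlier:

```bash
curl -s "https://<DEPLOYMENT_URI>/v1/chat/completions" \
  -H "Authorization: Bearer <TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "system", "content": "You are a helpful assistant that translates English to Italian."},
      {"role": "user", "content": "Translate the following sentence from English to Italian: I love programming."}
    ],
    "temperature": 0.8,
    "max_tokens": 512
  }'
```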
+
+#### Response schema
+
+The response payload is a dictionary with the following fields.
+
+| Key | Type | Description |
+|--|--|-|
+| `id` | `string` | A unique identifier for the completion. |
+| `choices` | `array` | The list of completion choices the model generated for the input messages. |
+| `created` | `integer` | The Unix timestamp (in seconds) of when the completion was created. |
+| `model` | `string` | The model_id used for completion. |
+| `object` | `string` | The object type, which is always `chat.completion`. |
+| `usage` | `object` | Usage statistics for the completion request. |
+
+> [!TIP]
+> In streaming mode, for each chunk of the response, `finish_reason` is always `null`, except for the last one, which is terminated by a payload `[DONE]`. In each `choices` object, the key `messages` is replaced by `delta`.
++
+The `choices` object is a dictionary with the following fields.
+
+| Key | Type | Description |
+|--|--|--|
+| `index` | `integer` | Choice index. When `best_of` > 1, the index in this array might not be in order and might not be `0` to `n-1`. |
+| `messages` or `delta` | `string` | Chat completion result in `messages` object. When streaming mode is used, `delta` key is used. |
+| `finish_reason` | `string` | The reason the model stopped generating tokens: <br>- `stop`: model hit a natural stop point or a provided stop sequence. <br>- `length`: the maximum number of tokens has been reached. <br>- `content_filter`: When RAI moderates and CMP forces moderation. <br>- `content_filter_error`: an error occurred during moderation, and a decision on the response couldn't be made. <br>- `null`: API response still in progress or incomplete.|
+| `logprobs` | `object` | The log probabilities of the generated tokens in the output text. |
++
+The `usage` object is a dictionary with the following fields.
+
+| Key | Type | Value |
+|--|--|--|
+| `prompt_tokens` | `integer` | Number of tokens in the prompt. |
+| `completion_tokens` | `integer` | Number of tokens generated in the completion. |
+| `total_tokens` | `integer` | Total tokens. |
+
+The `logprobs` object is a dictionary with the following fields:
+
+| Key | Type | Value |
+|--|--|--|
+| `text_offsets` | `array` of `integers` | The position or index of each token in the completion output. |
+| `token_logprobs` | `array` of `float` | Selected `logprobs` from dictionary in `top_logprobs` array. |
+| `tokens` | `array` of `string` | Selected tokens. |
+| `top_logprobs` | `array` of `dictionary` | Array of dictionaries. In each dictionary, the key is the token and the value is the probability. |
+
+#### Example
+
+The following is an example response:
+
+```json
+{
+ "id": "12345678-1234-1234-1234-abcdefghijkl",
+ "object": "chat.completion",
+ "created": 2012359,
+ "model": "",
+ "choices": [
+ {
+ "index": 0,
+ "finish_reason": "stop",
+ "message": {
+ "role": "assistant",
+      "content": "Sure, I'd be happy to help! The translation of \"I love programming\" from English to Italian is:\n\n\"Amo la programmazione.\"\n\nHere's a breakdown of the translation:\n\n* \"I love\" in English becomes \"Amo\" in Italian.\n* \"programming\" in English becomes \"la programmazione\" in Italian.\n\nI hope that helps! Let me know if you have any other sentences you'd like me to translate."
+ }
+ }
+ ],
+ "usage": {
+ "prompt_tokens": 10,
+ "total_tokens": 40,
+ "completion_tokens": 30
+ }
+}
+```
+#### Additional inference examples
+
+| **Sample Type** | **Sample Notebook** |
+|-|-|
+| CLI using CURL and Python web requests | [webrequests.ipynb](https://aka.ms/mistral-large/webrequests-sample)|
+| OpenAI SDK (experimental) | [openaisdk.ipynb](https://aka.ms/mistral-large/openaisdk) |
+| LangChain | [langchain.ipynb](https://aka.ms/mistral-large/langchain-sample) |
+| Mistral AI | [mistralai.ipynb](https://aka.ms/mistral-large/mistralai-sample) |
+| LiteLLM | [litellm.ipynb](https://aka.ms/mistral-large/litellm-sample) |
+
+## Cost and quotas
+
+### Cost and quota considerations for Mistral Large deployed as a service
+
+Mistral models deployed as a service are offered by Mistral AI through the Azure Marketplace and integrated with Azure AI Studio for use. You can find the Azure Marketplace pricing when deploying the model.
+
+Each time a project subscribes to a given offer from the Azure Marketplace, a new resource is created to track the costs associated with its consumption. The same resource is used to track costs associated with inference; however, multiple meters are available to track each scenario independently.
+
+For more information on how to track costs, see [monitor costs for models offered throughout the Azure Marketplace](./costs-plan-manage.md#monitor-costs-for-models-offered-through-the-azure-marketplace).
+
+Quota is managed per deployment. Each deployment has a rate limit of 200,000 tokens per minute and 1,000 API requests per minute. However, we currently limit one deployment per model per project. Contact Microsoft Azure Support if the current rate limits aren't sufficient for your scenarios.
+
+## Content filtering
+
+Models deployed as a service with pay-as-you-go are protected by Azure AI Content Safety. With Azure AI content safety, both the prompt and completion pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions. Learn more about [Azure AI Content Safety](../concepts/content-filtering.md).
+
+## Next steps
+
+- [What is Azure AI Studio?](../what-is-ai-studio.md)
+- [Azure AI FAQ article](../faq.yml)
ai-studio Region Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/reference/region-support.md
description: This article lists Azure AI Studio feature availability across clou
Previously updated : 12/11/2023 Last updated : 02/26/2024
Azure AI Studio is currently available in preview in the following Azure regions
- UK South
- West Europe
- West US
+- West US 3
### Azure Government regions
aks Azure Cni Overlay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md
az aks create -n $clusterName -g $resourceGroup \
## Add a new nodepool to a dedicated subnet
-After your have created a cluster with Azure CNI Overlay, you can create another nodepool and assign the nodes to a new subnet of the same VNet.
-This approach can be usefull if you want to control the ingress or egress IPs of the host from/ towards targets in the same VNET or peered VNets.
+After you have created a cluster with Azure CNI Overlay, you can create another nodepool and assign the nodes to a new subnet of the same VNet.
+This approach can be useful if you want to control the ingress or egress IPs of the host from or toward targets in the same VNet or peered VNets.
```azurecli-interactive
clusterName="myOverlayCluster"
Once the cluster has been created, you can deploy your workloads. This article w
### Deploy an NGINX web server
-# [kubectl](#tab/kubectl)
-
-1. Create an NGINX web server using the `kubectl create deployment nginx` command.
-
- ```bash-interactive
- kubectl create deployment nginx --image=nginx:latest --replicas=3
- ```
-
-2. View the pod resources using the `kubectl get pods` command.
-
- ```bash-interactive
- kubectl get pods -o custom-columns="NAME:.metadata.name,IPs:.status.podIPs[*].ip,NODE:.spec.nodeName,READY:.status.conditions[?(@.type=='Ready')].status"
- ```
-
- The output shows the pods have both IPv4 and IPv6 addresses. The pods don't show IP addresses until they're ready.
-
- ```output
- NAME IPs NODE READY
- nginx-55649fd747-9cr7h 10.244.2.2,fd12:3456:789a:0:2::2 aks-nodepool1-14508455-vmss000002 True
- nginx-55649fd747-p5lr9 10.244.0.7,fd12:3456:789a::7 aks-nodepool1-14508455-vmss000000 True
- nginx-55649fd747-r2rqh 10.244.1.2,fd12:3456:789a:0:1::2 aks-nodepool1-14508455-vmss000001 True
- ```
-
-# [YAML](#tab/yaml)
-
-1. Create an NGINX web server using the following YAML manifest.
-
- ```yml
- apiVersion: apps/v1
- kind: Deployment
- metadata:
- labels:
- app: nginx
- name: nginx
- spec:
- replicas: 3
- selector:
- matchLabels:
- app: nginx
- template:
- metadata:
- labels:
- app: nginx
- spec:
- containers:
- - image: nginx:latest
- name: nginx
- ```
-
-2. View the pod resources using the `kubectl get pods` command.
-
- ```bash-interactive
- kubectl get pods -o custom-columns="NAME:.metadata.name,IPs:.status.podIPs[*].ip,NODE:.spec.nodeName,READY:.status.conditions[?(@.type=='Ready')].status"
- ```
-
- The output shows the pods have both IPv4 and IPv6 addresses. The pods don't show IP addresses until they're ready.
-
- ```output
- NAME IPs NODE READY
- nginx-55649fd747-9cr7h 10.244.2.2,fd12:3456:789a:0:2::2 aks-nodepool1-14508455-vmss000002 True
- nginx-55649fd747-p5lr9 10.244.0.7,fd12:3456:789a::7 aks-nodepool1-14508455-vmss000000 True
- nginx-55649fd747-r2rqh 10.244.1.2,fd12:3456:789a:0:1::2 aks-nodepool1-14508455-vmss000001 True
- ```
--
+The application routing addon is the recommended way for ingress in an AKS cluster. For more information about the application routing addon and an example of how to deploy an application with the addon, see [Managed NGINX ingress with the application routing add-on](app-routing.md).
## Expose the workload via a `LoadBalancer` type service

> [!IMPORTANT]
> There are currently **two limitations** pertaining to IPv6 services in AKS.
>
-> 1. Azure Load Balancer sends health probes to IPv6 destinations from a link-local address. In Azure Linux node pools, this traffic can't be routed to a pod, so traffic flowing to IPv6 services deployed with `externalTrafficPolicy: Cluster` fail. IPv6 services must be deployed with `externalTrafficPolicy: Local`, which causes `kube-proxy` to respond to the probe on the node.
-> 2. Prior to Kubernetes version 1.27, only the first IP address for a service will be provisioned to the load balancer, so a dual-stack service only receives a public IP for its first-listed IP family. To provide a dual-stack service for a single deployment, please create two services targeting the same selector, one for IPv4 and one for IPv6. This is no longer a limitation in kubernetes 1.27 or later.
+> - Azure Load Balancer sends health probes to IPv6 destinations from a link-local address. In Azure Linux node pools, this traffic can't be routed to a pod, so traffic flowing to IPv6 services deployed with `externalTrafficPolicy: Cluster` fail. IPv6 services must be deployed with `externalTrafficPolicy: Local`, which causes `kube-proxy` to respond to the probe on the node.
+> - Prior to Kubernetes version 1.27, only the first IP address for a service will be provisioned to the load balancer, so a dual-stack service only receives a public IP for its first-listed IP family. To provide a dual-stack service for a single deployment, please create two services targeting the same selector, one for IPv4 and one for IPv6. This is no longer a limitation in kubernetes 1.27 or later.
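To illustrate the pre-1.27 workaround, a sketch of two `LoadBalancer` services sharing one selector; the `app: nginx` label is a placeholder:

```yml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ipv4
spec:
  type: LoadBalancer
  ipFamilies: [IPv4]
  selector:
    app: nginx
  ports:
  - port: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-ipv6
spec:
  type: LoadBalancer
  ipFamilies: [IPv6]
  selector:
    app: nginx
  ports:
  - port: 80
```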
# [kubectl](#tab/kubectl)
aks Azure Netapp Files Dual Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-netapp-files-dual-protocol.md
Title: Provision Azure NetApp Files dual-protocol volumes for Azure Kubernetes S
description: Describes how to statically provision Azure NetApp Files dual-protocol volumes for Azure Kubernetes Service.
Previously updated : 05/08/2023 Last updated : 02/26/2024

# Provision Azure NetApp Files dual-protocol volumes for Azure Kubernetes Service
-After you [configure Azure NetApp Files for Azure Kubernetes Service](azure-netapp-files.md), you can provision Azure NetApp Files volumes for Azure Kubernetes Service.
+After you [configure Azure NetApp Files for Azure Kubernetes Service][azure-netapp-files], you can provision Azure NetApp Files volumes for Azure Kubernetes Service.
-Azure NetApp Files supports volumes using [NFS](azure-netapp-files-nfs.md) (NFSv3 or NFSv4.1), [SMB](azure-netapp-files-smb.md), and dual-protocol (NFSv3 and SMB, or NFSv4.1 and SMB).
-* This article describes details for statically provisioning volumes for dual-protocol access.
-* For information about provisioning SMB volumes statically or dynamically, see [Provision Azure NetApp Files SMB volumes for Azure Kubernetes Service](azure-netapp-files-smb.md).
-* For information about provisioning NFS volumes statically or dynamically, see [Provision Azure NetApp Files NFS volumes for Azure Kubernetes Service](azure-netapp-files-nfs.md).
+Azure NetApp Files supports volumes using [NFS][azure-netapp-nfs] (NFSv3 or NFSv4.1), [SMB][azure-netapp-smb], and dual-protocol (NFSv3 and SMB, or NFSv4.1 and SMB).
+
+This article shows you how to statically provision volumes for dual-protocol access using NFS or SMB.
## Before you begin
-* You must have already created a dual-protocol volume. See [create a dual-protocol volume for Azure NetApp Files](../azure-netapp-files/create-volumes-dual-protocol.md).
+* Make sure you have already created a dual-protocol volume. See [create a dual-protocol volume for Azure NetApp Files][azure-netapp-files-volume-dual-protocol].
## Provision a dual-protocol volume in Azure Kubernetes Service
This section describes how to expose an Azure NetApp Files dual-protocol volume
ANF_ACCOUNT_NAME="myaccountname"
POOL_NAME="mypool1"
VOLUME_NAME="myvolname"
- ```
-
-2. List the details of your volume using [`az netappfiles volume show`](/cli/azure/netappfiles/volume#az-netappfiles-volume-show) command.
+ ```
+
+2. List the details of your volume using the [`az netappfiles volume show`][az-netappfiles-volume-show] command.
```azurecli-interactive az netappfiles volume show \
This section describes how to expose an Azure NetApp Files dual-protocol volume
--volume-name $VOLUME_NAME -o JSON
```
- The following output is an example of the above command executed with real values.
+ The following output is an example of the above command executed with real values.
```output
{
This section describes how to expose an Azure NetApp Files dual-protocol volume
path: /myfilepath2
```
-4. Create the persistent volume using the [`kubectl apply`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply) command:
+4. Create the persistent volume using the [`kubectl apply`][kubectl-apply] command:
```bash
kubectl apply -f pv-nfs.yaml
```
-5. Verify the status of the persistent volume is *Available* by using the [`kubectl describe`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe) command:
+5. Verify the status of the persistent volume is *Available* by using the [`kubectl describe`][kubectl-describe] command:
```bash
kubectl describe pv pv-nfs
This section describes how to expose an Azure NetApp Files dual-protocol volume
storage: 100Gi
```
-2. Create the persistent volume claim using the [`kubectl apply`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply) command:
+2. Create the persistent volume claim using the [`kubectl apply`][kubectl-apply] command:
```bash
kubectl apply -f pvc-nfs.yaml
```
-3. Verify the *Status* of the persistent volume claim is *Bound* by using the [`kubectl describe`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe) command:
+3. Verify the *Status* of the persistent volume claim is *Bound* by using the [`kubectl describe`][kubectl-describe] command:
```bash
kubectl describe pvc pvc-nfs
This section describes how to expose an Azure NetApp Files dual-protocol volume
claimName: pvc-nfs
```
-2. Create the pod using the [`kubectl apply`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply) command:
+2. Create the pod using the [`kubectl apply`][kubectl-apply] command:
```bash
kubectl apply -f nginx-nfs.yaml
```
-3. Verify the pod is *Running* by using the [`kubectl apply`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply) command:
+3. Verify the pod is *Running* by using the [`kubectl describe`][kubectl-describe] command:
```bash
kubectl describe pod nginx-nfs
```
-4. Verify your volume has been mounted on the pod by using [`kubectl exec`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#exec) to connect to the pod, and then use `df -h` to check if the volume is mounted.
+4. Verify your volume has been mounted on the pod by using [`kubectl exec`][kubectl-exec] to connect to the pod, and then use `df -h` to check if the volume is mounted.
```bash
kubectl exec -it nginx-nfs -- sh
This section describes how to expose an Azure NetApp Files dual-protocol volume
### Create a secret with the domain credentials
-1. Create a secret on your AKS cluster to access the AD server using the [`kubectl create secret`](https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/) command. This secret will be used by the Kubernetes persistent volume to access the Azure NetApp Files SMB volume. Use the following command to create the secret, replacing `USERNAME` with your username, `PASSWORD` with your password, and `DOMAIN_NAME` with your domain name for your Active Directory.
+1. Create a secret on your AKS cluster to access the AD server using the [`kubectl create secret`][kubectl-create-secret] command. This secret will be used by the Kubernetes persistent volume to access the Azure NetApp Files SMB volume. Use the following command to create the secret, replacing `USERNAME` with your username, `PASSWORD` with your password, and `DOMAIN_NAME` with your Active Directory domain name.
```bash kubectl create secret generic smbcreds --from-literal=username=USERNAME --from-literal=password="PASSWORD" --from-literal=domain='DOMAIN_NAME' ```
-2. Check the secret has been created.
+2. To verify the secret has been created, run the [`kubectl get`][kubectl-get] command.
```bash kubectl get secret
+ ```
+
+ ```output
NAME TYPE DATA AGE smbcreds Opaque 2 20h ``` ### Install an SMB CSI driver
-You must install a Container Storage Interface (CSI) driver to create a Kubernetes SMB `PersistentVolume`.
+You must install a Container Storage Interface (CSI) driver to create a Kubernetes SMB `PersistentVolume`.
-1. Install the SMB CSI driver on your cluster using helm. Be sure to set the `windows.enabled` option to `true`:
+1. Install the SMB CSI driver on your cluster using helm. Be sure to set the `windows.enabled` option to `true`:
    ```bash helm repo add csi-driver-smb https://raw.githubusercontent.com/kubernetes-csi/csi-driver-smb/master/charts helm install csi-driver-smb csi-driver-smb/csi-driver-smb --namespace kube-system --version v1.10.0 --set windows.enabled=true ```
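    As a quick sanity check before the next step, you can confirm the release deployed to the namespace passed in `--namespace`:

```bash
# Confirm the csi-driver-smb release is deployed in kube-system.
helm list --namespace kube-system --filter csi-driver-smb
```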
- For other methods of installing the SMB CSI Driver, see [Install SMB CSI driver master version on a Kubernetes cluster](https://github.com/kubernetes-csi/csi-driver-smb/blob/master/docs/install-csi-driver-master.md).
+ For other methods of installing the SMB CSI Driver, see [Install SMB CSI driver master version on a Kubernetes cluster][install-smb-csi-driver].
-2. Verify that the `csi-smb` controller pod is running and each worker node has a pod running using the [`kubectl get pods`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get) command:
+2. Verify the `csi-smb` controller pod is running and each worker node has a pod running using the [`kubectl get pods`][kubectl-get-pods] command:
```bash kubectl get pods -n kube-system | grep csi-smb
-
+ ```
+
+ ```output
csi-smb-controller-68df7b4758-xf2m9 3/3 Running 0 3m46s csi-smb-node-s6clj 3/3 Running 0 3m47s csi-smb-node-win-tfxvk 3/3 Running 0 3m47s
You must install a Container Storage Interface (CSI) driver to create a Kubernet
ANF_ACCOUNT_NAME="myaccountname" POOL_NAME="mypool1" VOLUME_NAME="myvolname"
- ```
-
-2. List the details of your volume using [`az netappfiles volume show`](/cli/azure/netappfiles/volume#az-netappfiles-volume-show).
+ ```
+
+2. List the details of your volume using the [`az netappfiles volume show`][az-netappfiles-volume-show] command.
```azurecli-interactive az netappfiles volume show \
You must install a Container Storage Interface (CSI) driver to create a Kubernet
--volume-name "$VOLUME_NAME -o JSON ```
- The following output is an example of the above command executed with real values.
+ The following output is an example of the above command executed with real values.
```output {
You must install a Container Storage Interface (CSI) driver to create a Kubernet
namespace: default ```
-4. Create the persistent volume using the [`kubectl apply`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply) command:
+4. Create the persistent volume using the [`kubectl apply`][kubectl-apply] command:
```bash kubectl apply -f pv-smb.yaml ```
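    A minimal `pv-smb.yaml` could look like the following sketch; the SMB source path is a placeholder built from the server FQDN and share name in the volume details, and the secret name matches the `smbcreds` secret created earlier.

```bash
# Sketch only: replace 'source' with your volume's SMB server FQDN and share name.
cat <<EOF > pv-smb.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: anf-pv-smb
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  csi:
    driver: smb.csi.k8s.io
    volumeHandle: anf-pv-smb                           # unique ID for this volume
    volumeAttributes:
      source: //myaccount-abcd.contoso.com/myvolname   # placeholder SMB path
    nodeStageSecretRef:
      name: smbcreds
      namespace: default
  mountOptions:
    - dir_mode=0777
    - file_mode=0777
EOF
```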
-5. Verify the status of the persistent volume is *Available* using the [`kubectl describe`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe) command:
+5. Verify the status of the persistent volume is *Available* using the [`kubectl describe`][kubectl-describe] command:
```bash kubectl describe pv anf-pv-smb
You must install a Container Storage Interface (CSI) driver to create a Kubernet
### Create a persistent volume claim for SMB
-1. Create a file name `pvc-smb.yaml` and copy in the following YAML.
+1. Create a file named `pvc-smb.yaml` and copy in the following YAML.
```yaml apiVersion: v1
You must install a Container Storage Interface (CSI) driver to create a Kubernet
storage: 100Gi ```
-2. Create the persistent volume claim using the [`kubectl apply`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply) command:
+2. Create the persistent volume claim using the [`kubectl apply`][kubectl-apply] command:
```bash kubectl apply -f pvc-smb.yaml ```
- Verify the status of the persistent volume claim is *Bound* by using the [`kubectl describe`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe) command:
+ Verify the status of the persistent volume claim is *Bound* by using the [`kubectl describe`][kubectl-describe] command:
```bash kubectl describe pvc anf-pvc-smb ```
-### Mount within a pod using SMB
+### Mount within a pod using SMB
1. Create a file named `iis-smb.yaml` and copy in the following YAML. This file will be used to create an Internet Information Services pod to mount the volume to path `/inetpub/wwwroot`.
You must install a Container Storage Interface (CSI) driver to create a Kubernet
readOnly: false ```
-2. Create the pod using the [kubectl apply](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply) command:
+2. Create the pod using the [`kubectl apply`][kubectl-apply] command:
```bash kubectl apply -f iis-smb.yaml ```
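    For reference, a minimal version of that manifest could look like the following sketch; the image tag and node selector are assumptions for a Windows node pool, and the claim name matches the PVC created earlier.

```bash
# Sketch only: the IIS image tag should match your Windows node pool's OS version.
cat <<EOF > iis-smb.yaml
apiVersion: v1
kind: Pod
metadata:
  name: iis-pod
spec:
  nodeSelector:
    kubernetes.io/os: windows
  containers:
    - name: web
      image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2022  # placeholder tag
      ports:
        - containerPort: 80
      volumeMounts:
        - name: smb
          mountPath: /inetpub/wwwroot
          readOnly: false
  volumes:
    - name: smb
      persistentVolumeClaim:
        claimName: anf-pvc-smb
EOF
```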
-3. Verify the pod is *Running* and `/inetpub/wwwroot` is mounted from SMB by using the [`kubectl describe`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe) command:
+3. Verify the pod is *Running* and `/inetpub/wwwroot` is mounted from SMB by using the [`kubectl describe`][kubectl-describe] command:
```bash kubectl describe pod iis-pod ```
- The output of the command resembles the following example:
+ The output of the command resembles the following example:
```output Name: iis-pod
You must install a Container Storage Interface (CSI) driver to create a Kubernet
... ```
-4. Verify your volume has been mounted on the pod by using the [kubectl exec](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#exec) command to connect to the pod, and then use `dir` command in the correct directory to check if the volume is mounted and the size matches the size of the volume you provisioned.
+4. Verify your volume has been mounted on the pod by using the [`kubectl exec`][kubectl-exec] command to connect to the pod. Then use the `dir` command in the correct directory to check if the volume is mounted and the size matches the size of the volume you provisioned.
    ```bash kubectl exec -it iis-pod -- cmd.exe ```
- The output of the command resembles the following example:
+
+ The output of the command resembles the following example:
```output Microsoft Windows [Version 10.0.20348.1668]
Astra Trident supports many features with Azure NetApp Files. For more informati
* [Importing volumes][importing-trident-volumes] <!-- EXTERNAL LINKS -->
-[astra-trident]: https://docs.netapp.com/us-en/trident/https://docsupdatetracker.net/index.html
-[kubectl-create]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#create
[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply [kubectl-describe]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe [kubectl-exec]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#exec
-[astra-control-service]: https://cloud.netapp.com/astra-control
-[kubernetes-csi-driver]: https://kubernetes-csi.github.io/docs/
-[trident-install-guide]: https://docs.netapp.com/us-en/trident/trident-get-started/kubernetes-deploy.html
-[trident-helm-chart]: https://docs.netapp.com/us-en/trident/trident-get-started/kubernetes-deploy-operator.html
-[tridentctl]: https://docs.netapp.com/us-en/trident/trident-get-started/kubernetes-deploy-tridentctl.html
-[trident-backend-install-guide]: https://docs.netapp.com/us-en/trident/trident-use/backends.html
+[kubectl-create-secret]: https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/
+[kubectl-get-pods]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get [expand-trident-volumes]: https://docs.netapp.com/us-en/trident/trident-use/vol-expansion.html
+[install-smb-csi-driver]: https://github.com/kubernetes-csi/csi-driver-smb/blob/master/docs/install-csi-driver-master.md
[on-demand-trident-volume-snapshots]: https://docs.netapp.com/us-en/trident/trident-use/vol-snapshots.html [importing-trident-volumes]: https://docs.netapp.com/us-en/trident/trident-use/vol-import.html
-[backend-anf.yaml]: https://raw.githubusercontent.com/NetApp/trident/v23.01.1/trident-installer/sample-input/backends-samples/azure-netapp-files/backend-anf.yaml
<!-- INTERNAL LINKS -->
-[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
-[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
-[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
-[anf]: ../azure-netapp-files/azure-netapp-files-introduction.md
-[anf-delegate-subnet]: ../azure-netapp-files/azure-netapp-files-delegate-subnet.md
-[anf-regions]: https://azure.microsoft.com/global-infrastructure/services/?products=netapp&regions=all
-[az-aks-show]: /cli/azure/aks#az_aks_show
-[az-netappfiles-account-create]: /cli/azure/netappfiles/account#az_netappfiles_account_create
-[az-netapp-files-dynamic]: azure-netapp-files-dynamic.md
-[az-netappfiles-pool-create]: /cli/azure/netappfiles/pool#az_netappfiles_pool_create
-[az-netappfiles-volume-create]: /cli/azure/netappfiles/volume#az_netappfiles_volume_create
[az-netappfiles-volume-show]: /cli/azure/netappfiles/volume#az_netappfiles_volume_show
-[az-network-vnet-subnet-create]: /cli/azure/network/vnet/subnet#az_network_vnet_subnet_create
-[install-azure-cli]: /cli/azure/install-azure-cli
-[use-tags]: use-tags.md
-[azure-ad-app-registration]: ../active-directory/develop/howto-create-service-principal-portal.md
+[azure-netapp-nfs]: azure-netapp-files-nfs.md
+[azure-netapp-smb]: azure-netapp-files-smb.md
+[azure-netapp-files]: azure-netapp-files.md
+[azure-netapp-files-volume-dual-protocol]: ../azure-netapp-files/create-volumes-dual-protocol.md
aks Concepts Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-network.md
The *LoadBalancer* only works at layer 4. At layer 4, the Service is unaware of
### Create an Ingress resource
-In AKS, you can create an [Ingress resource using NGINX][nginx-ingress], a similar tool, or the AKS HTTP application routing feature. When you enable HTTP application routing for an AKS cluster, the Azure platform creates the ingress controller and an *External-DNS* controller. As new Ingress resources are created in Kubernetes, the required DNS `A` records are created in a cluster-specific DNS zone.
+The application routing add-on is the recommended way to configure an Ingress controller in AKS. It's a fully managed ingress controller for Azure Kubernetes Service (AKS) that provides the following features:
-For more information, see [Deploy HTTP application routing][aks-http-routing].
+* Easy configuration of managed NGINX Ingress controllers based on the Kubernetes NGINX Ingress controller.
-### Application Gateway Ingress Controller (AGIC)
+* Integration with Azure DNS for public and private zone management.
-With the Application Gateway Ingress Controller (AGIC) add-on, you can use Azure's native Application Gateway level 7 load-balancer to expose cloud software to the Internet. AGIC runs as a pod within the AKS cluster. It consumes [Kubernetes Ingress Resources][k8s-ingress] and converts them to an Application Gateway configuration, which allows the gateway to load-balance traffic to the Kubernetes pods.
+* SSL termination with certificates stored in Azure Key Vault.
-To learn more about the AGIC add-on for AKS, see [What is Application Gateway Ingress Controller?][agic-overview].
-
-### SSL/TLS termination
-
-SSL/TLS termination is another common feature of Ingress. On large web applications accessed via HTTPS, the Ingress resource handles the TLS termination rather than within the application itself. To provide automatic TLS certification generation and configuration, you can configure the Ingress resource to use providers such as "Let's Encrypt."
-
-For more information on configuring an NGINX ingress controller with Let's Encrypt, see [Ingress and TLS][aks-ingress-tls].
+For more information about the application routing add-on, see [Managed NGINX ingress with the application routing add-on](app-routing.md).
### Client source IP preservation
For more information on core Kubernetes and AKS concepts, see the following arti
[service-types]: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types <!-- LINKS - Internal -->
-[aks-http-routing]: http-application-routing.md
-[aks-ingress-tls]: ./ingress-tls.md
[aks-configure-kubenet-networking]: configure-kubenet.md [aks-configure-advanced-networking]: configure-azure-cni.md [aks-concepts-clusters-workloads]: concepts-clusters-workloads.md
For more information on core Kubernetes and AKS concepts, see the following arti
[support-policies]: support-policies.md [limit-egress]: limit-egress-traffic.md [k8s-ingress]: https://kubernetes.io/docs/concepts/services-networking/ingress/
-[nginx-ingress]: ingress-basic.md
[ip-preservation]: https://techcommunity.microsoft.com/t5/fasttrack-for-azure/how-client-source-ip-preservation-works-for-loadbalancer/ba-p/3033722#:~:text=Enable%20Client%20source%20IP%20preservation%201%20Edit%20loadbalancer,is%20the%20same%20as%20the%20source%20IP%20%28srjumpbox%29. [nsg-traffic]: ../virtual-network/network-security-group-how-it-works.md [azure-cni-aks]: configure-azure-cni.md
aks Gpu Multi Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/gpu-multi-instance.md
Before you install the Nvidia plugins, you need to specify which multi-instance
```azurecli-interactive helm install \
- --version=0.7.0 \
+ --version=0.14.0 \
--generate-name \ --set migStrategy=${MIG_STRATEGY} \ nvdp/nvidia-device-plugin
aks Http Application Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/http-application-routing.md
Last updated 04/05/2023 + # HTTP application routing add-on for Azure Kubernetes Service (AKS) (retired)
aks Manage Ssh Node Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/manage-ssh-node-access.md
AKS supports the following configuration options to manage SSH keys on cluster n
### Register the `DisableSSHPreview` feature flag
+To use the **Disable** SSH feature, perform the following steps to register and enable it in your subscription.
+ 1. Register the `DisableSSHPreview` feature flag using the [`az feature register`][az-feature-register] command. ```azurecli-interactive
aks Node Autoprovision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-autoprovision.md
NAP is based on the Open Source [Karpenter](https://karpenter.sh) project, and t
- Windows and Azure Linux node pools aren't supported yet - Kubelet configuration through Node pool configuration is not supported - NAP can only be enabled on new clusters currently
+- It's not currently possible to stop node pools or clusters that use the NAP feature
## Enable node autoprovisioning
aks Start Stop Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/start-stop-cluster.md
This article assumes you have an existing AKS cluster. If you need an AKS cluste
When using the cluster stop/start feature, the following conditions apply: - This feature is only supported for Virtual Machine Scale Set backed clusters.
+- You can't stop clusters that use the [Node Autoprovisioning (NAP)](node-autoprovision.md) feature.
- The cluster state of a stopped AKS cluster is preserved for up to 12 months. If your cluster is stopped for more than 12 months, you can't recover the state. For more information, see the [AKS support policies](support-policies.md). - You can only perform start or delete operations on a stopped AKS cluster. To perform other operations, like scaling or upgrading, you need to start your cluster first. - If you provisioned PrivateEndpoints linked to private clusters, they need to be deleted and recreated again when starting a stopped AKS cluster.
aks Start Stop Nodepools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/start-stop-nodepools.md
You might not need to continuously run your AKS workloads. For example, you migh
* Spot node pools are supported. * Stopped node pools can be upgraded. * The cluster and node pool must be running.
+* You can't stop node pools in clusters that use the [Node Autoprovisioning (NAP)](node-autoprovision.md) feature.
## Before you begin
application-gateway Migrate V1 V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/migrate-v1-v2.md
Title: Migrate from V1 to V2 - Azure Application Gateway
-description: This article shows you how to migrate Azure Application Gateway and Web Application Firewall from V1 to V2
+description: This article shows you how to migrate Azure Application Gateway and Web Application Firewall from V1 to V2.
Previously updated : 08/01/2023 Last updated : 02/26/2024
This article primarily helps with the configuration migration. Client traffic mi
* An existing Application Gateway V1 Standard. * Make sure you have the latest PowerShell modules, or you can use Azure Cloud Shell in the portal. * If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
-* Ensure that there is no existing Application gateway with the provided Appgw V2 Name and Resource group name in V1 subscription. This will rewrite the existing resources.
-* If Public IP is provided ensure that its in succeeded state.If not provided and AppGwResourceGroupName is provided ensure that public IP resource with name AppGwV2Name-IP doesnΓÇÖt exist in a resourcegroup with the name AppGwResourceGroupName in the V1 subscription.
-* Ensure that no other operation is planned on the V1 gateway or any of its associated resources during migration.
+* Ensure that there's no existing application gateway with the provided AppGW V2 name and resource group name in the V1 subscription; otherwise, the existing resources are overwritten.
+* If a public IP address is provided, ensure that it's in a succeeded state. If it isn't provided and AppGWResourceGroupName is provided, ensure that a public IP resource named AppGWV2Name-IP doesn't exist in a resource group with the name AppGWResourceGroupName in the V1 subscription.
+* Ensure that no other operation is planned on the V1 gateway or any associated resources during migration.
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)] [!INCLUDE [updated-for-az](../../includes/updated-for-az.md)] > [!IMPORTANT]
->Run the `Set-AzContext -Subscription <V1 application gateway SubscriptionId>` cmdlet every time before running the migration script. This is necessary to set the active Azure context to the correct subscription, because the migration script might clean up the existing resource group if it doesn't exist in current subscription context.This is not a mandatory step for version 1.0.11 & above of the migration script.
+>Run the `Set-AzContext -Subscription <V1 application gateway SubscriptionId>` cmdlet every time before running the migration script. This is necessary to set the active Azure context to the correct subscription, because the migration script might clean up the existing resource group if it doesn't exist in the current subscription context. This isn't a mandatory step for version 1.0.11 and later of the migration script.
> [!IMPORTANT]
->A new stable version of the migration script , version 1.0.11 is available now , which contains important bug fixes and updates.Use this version to avoid potential issues.
+>A new stable version of the migration script, version 1.0.11, is now available. It contains important bug fixes and updates. Use this version to avoid potential issues.
## Configuration migration
An Azure PowerShell script is provided in this document. It performs the followi
## Downloading the script
-You can download the migration script from the [PowerShell Gallery](https://www.powershellgallery.com/packages/AzureAppGWMigration).A new stable release (Version 1.0.11) of the migration script is available ,which includes major updates and bug fixes .It is recommended to use this stable version.
+You can download the migration script from the [PowerShell Gallery](https://www.powershellgallery.com/packages/AzureAppGWMigration). A new stable release (version 1.0.11) of the migration script is available, which includes major updates and bug fixes. We recommend using this stable version.
## Using the script
There are two options for you depending on your local PowerShell environment set
To determine if you have the Azure Az modules installed, run `Get-InstalledModule -Name az`. If you don't see any installed Az modules, then you can use the `Install-Script` method.
-#### Install using the Install-Script method
-
+#### Install using the Install-Script method (recommended)
To use this option, you must not have the Azure Az modules installed on your computer. If they're installed, the following command displays an error. You can either uninstall the Azure Az modules, or use the other option to download the script manually and run it. Run the script with the following command to get the latest version:
This command also installs the required Az modules.
#### Install using the script directly If you have some Azure Az modules installed and can't uninstall them (or don't want to uninstall them), you can manually download the script using the **Manual Download** tab in the script download link. The script is downloaded as a raw nupkg file. To install the script from this nupkg file, see [Manual Package Download](/powershell/gallery/how-to/working-with-packages/manual-download).
-Version 1.0.11 is the new version of the migration script which includes major bug fixes.It is recommended to use this stable version.
+Version 1.0.11 is the new version of the migration script, which includes major bug fixes. We recommend using this stable version.
#### How to check the version of the downloaded script To check the version of the downloaded script the steps are as follows: * Extract the contents of the NuGet package.
-* Open the .PS1 file in the folder and check the .VERSION on top to confirm the version of the downloaded script
+* Open the `.PS1` file in the folder and check the `.VERSION` at the top to confirm the version of the downloaded script.
``` <#PSScriptInfo .VERSION 1.0.10
To run the script:
2. Use `Import-Module Az` to import the Az modules.
-3. Run the `Set-AzContext` cmdlet ,to set the active Azure context to the correct subscription.This is an important step because the migration script might clean up the existing resource group if it doesn't exist in current subscription context.
+3. Run the `Set-AzContext` cmdlet to set the active Azure context to the correct subscription. This is an important step because the migration script might clean up the existing resource group if it doesn't exist in the current subscription context.
``` Set-AzContext -Subscription '<V1 application gateway SubscriptionId>' ``` 4. Run `Get-Help AzureAppGWMigration.ps1` to examine the required parameters: ```
- AzureAppGwMigration.ps1
+ AzureAppGWMigration.ps1
-resourceId <V1 application gateway Resource ID> -subnetAddressRange <subnet space you want to use> -appgwName <string to use to append>
- -AppGwResourceGroupName <resource group name you want to use>
+ -AppGWResourceGroupName <resource group name you want to use>
-sslCertificates <comma-separated SSLCert objects as above> -trustedRootCertificates <comma-separated Trusted Root Cert objects as above> -privateIpAddress <private IP string>
To run the script:
-validateMigration -enableAutoScale ``` > [!NOTE]
-> During migration don't attempt any other operation on the V1 gateway or any of its associated resources.
+> During migration don't attempt any other operation on the V1 gateway or any associated resources.
Parameters for the script: * **resourceId: [String]: Required**: This parameter is the Azure Resource ID for your existing Standard V1 or WAF V1 gateway. To find this string value, navigate to the Azure portal, select your application gateway or WAF resource, and click the **Properties** link for the gateway. The Resource ID is located on that page.
To run the script:
* **subnetAddressRange: [String]: Required**: This parameter is the IP address space that you've allocated (or want to allocate) for a new subnet that contains your new V2 gateway. The address space must be specified in the CIDR notation. For example: 10.0.0.0/24. You don't need to create this subnet in advance but the CIDR needs to be part of the VNET address space. The script creates it for you if it doesn't exist and if it exists, it uses the existing one (make sure the subnet is either empty, contains only V2 Gateway if any, and has enough available IPs). * **appgwName: [String]: Optional**. This is a string you specify to use as the name for the new Standard_V2 or WAF_V2 gateway. If this parameter isn't supplied, the name of your existing V1 gateway is used with the suffix *_V2* appended.
- * **AppGwResourceGroupName: [String]: Optional**. Name of resource group where you want V2 Application Gateway resources to be created (default value is `<V1-app-gw-rgname>`)
+ * **AppGWResourceGroupName: [String]: Optional**. Name of resource group where you want V2 Application Gateway resources to be created (default value is `<V1-app-gw-rgname>`)
> [!NOTE]
-> Ensure that there is no existing Application gateway with the provided Appgw V2 Name and Resource group name in V1 subscription. This will rewrite the existing resources.
+> Ensure that there's no existing application gateway with the provided AppGW V2 name and resource group name in the V1 subscription; otherwise, the existing resources are overwritten.
* **sslCertificates: [PSApplicationGatewaySslCertificate]: Optional**. A comma-separated list of PSApplicationGatewaySslCertificate objects that you create to represent the TLS/SSL certs from your V1 gateway must be uploaded to the new V2 gateway. For each of your TLS/SSL certs configured for your Standard V1 or WAF V1 gateway, you can create a new PSApplicationGatewaySslCertificate object via the `New-AzApplicationGatewaySslCertificate` command shown here. You need the path to your TLS/SSL Cert file and the password. This parameter is only optional if you don't have HTTPS listeners configured for your V1 gateway or WAF. If you have at least one HTTPS listener setup, you must specify this parameter.
To run the script:
To create a list of PSApplicationGatewayTrustedRootCertificate objects, see [New-AzApplicationGatewayTrustedRootCertificate](/powershell/module/Az.Network/New-AzApplicationGatewayTrustedRootCertificate). * **privateIpAddress: [String]: Optional**. A specific private IP address that you want to associate to your new V2 gateway. This must be from the same VNet that you allocate for your new V2 gateway. If this isn't specified, the script allocates a private IP address for your V2 gateway.
- * **publicIpResourceId: [String]: Optional**. The resourceId of existing public IP address (standard SKU) resource in your subscription that you want to allocate to the new V2 gateway.If public Ip resource name is provided, ensure that it exists in succeeded state.
- If this isn't specified, the script allocates a new public IP in the same resource group. The name is the V2 gateway's name with *-IP* appended.If AppGwResourceGroupName is provided and public IP is not provided ensure that public IP resource with name AppGwV2Name-IP doesnΓÇÖt exist in a resourcegroup with the name AppGwResourceGroupName in the V1 subscription
+ * **publicIpResourceId: [String]: Optional**. The resourceId of an existing public IP address (standard SKU) resource in your subscription that you want to allocate to the new V2 gateway. If a public IP resource name is provided, ensure that it exists in a succeeded state.
+ If this isn't specified, the script allocates a new public IP address in the same resource group. The name is the V2 gateway's name with *-IP* appended. If AppGWResourceGroupName is provided and a public IP address isn't provided, ensure that a public IP resource named AppGWV2Name-IP doesn't exist in a resource group with the name AppGWResourceGroupName in the V1 subscription.
- * **validateMigration: [switch]: Optional**. Use this parameter if you want the script to do some basic configuration comparison validations after the V2 gateway creation and the configuration copy. By default, no validation is done.
- * **enableAutoScale: [switch]: Optional**. Use this parameter if you want the script to enable autoscaling on the new V2 gateway after it's created. By default, autoscaling is disabled. You can always manually enable it later on the newly created V2 gateway.
+ * **validateMigration: [switch]: Optional**. Use this parameter to enable the script to do some basic configuration comparison validations after the V2 gateway creation and the configuration copy. By default, no validation is done.
+ * **enableAutoScale: [switch]: Optional**. Use this parameter to enable the script to enable autoscaling on the new V2 gateway after it's created. By default, autoscaling is disabled. You can always manually enable it later on the newly created V2 gateway.
5. Run the script using the appropriate parameters. It may take five to seven minutes to finish.
To run the script:
-resourceId /subscriptions/8b1d0fea-8d57-4975-adfb-308f1f4d12aa/resourceGroups/MyResourceGroup/providers/Microsoft.Network/applicationGateways/myv1appgateway ` -subnetAddressRange 10.0.0.0/24 ` -appgwname "MynewV2gw" `
- -AppGwResourceGroupName "MyResourceGroup" `
+ -AppGWResourceGroupName "MyResourceGroup" `
-sslCertificates $mySslCert1,$mySslCert2 ` -trustedRootCertificates $trustedCert ` -privateIpAddress "10.0.0.1" `
To run the script:
* If you have FIPS mode enabled for your V1 gateway, it isn't migrated to your new V2 gateway. FIPS mode isn't supported in V2. * If you have a Private IP only V1 gateway, the script generates a private and public IP address for the new V2 gateway. The Private IP only V2 gateway is currently in public preview. Once it becomes generally available, customers can utilize the script to transfer their private IP only V1 gateway to a private IP only V2 gateway. * NTLM and Kerberos authentication isn't supported by Application Gateway V2. The script is unable to detect if the gateway is serving this type of traffic and may pose as a breaking change from V1 to V2 gateways if run.
+* WAFv2 is created in old WAF config mode; migration to WAF policy is required.
## Traffic migration
automation Extension Based Hybrid Runbook Worker Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/extension-based-hybrid-runbook-worker-install.md
description: This article provides information about deploying the extension-bas
Previously updated : 04/10/2023 Last updated : 02/26/2024 #Customer intent: As a developer, I want to learn about extension so that I can efficiently deploy Hybrid Runbook Workers.
The extension-based onboarding is only for **User** Hybrid Runbook Workers. This
For **System** Hybrid Runbook Worker onboarding, see [Deploy an agent-based Windows Hybrid Runbook Worker in Automation](./automation-windows-hrw-install.md) or [Deploy an agent-based Linux Hybrid Runbook Worker in Automation](./automation-linux-hrw-install.md).
-You can use the user Hybrid Runbook Worker feature of Azure Automation to run runbooks directly on an Azure or non-Azure machine, including [Azure Arc-enabled servers](../azure-arc/servers/overview.md) and [Arc-enabled VMware vSphere (preview)](../azure-arc/vmware-vsphere/overview.md). From the machine or server that's hosting the role, you can run runbooks directly against it and against resources in the environment to manage those local resources.
+You can use the user Hybrid Runbook Worker feature of Azure Automation to run runbooks directly on an Azure or non-Azure machine, including [Azure Arc-enabled servers](../azure-arc/servers/overview.md), [Arc-enabled VMware vSphere](../azure-arc/vmware-vsphere/overview.md), and [Arc-enabled SCVMM](../azure-arc/system-center-virtual-machine-manager/overview.md). From the machine or server that's hosting the role, you can run runbooks directly against it and against resources in the environment to manage those local resources.
Azure Automation stores and manages runbooks and then delivers them to one or more chosen machines. After you successfully deploy a runbook worker, review [Run runbooks on a Hybrid Runbook Worker](automation-hrw-run-runbooks.md) to learn how to configure your runbooks to automate processes in your on-premises datacenter or other cloud environment. > [!NOTE]
-> A hybrid worker can co-exist with both platforms: **Agent based (V1)** and **Extension based (V2)**. If you install Extension based (V2)on a hybrid worker already running Agent based (V1), then you would see two entries of the Hybrid Runbook Worker in the group. One with Platform Extension based (V2) and the other Agent based (V1). [**Learn more**](#migrate-an-existing-agent-based-to-extension-based-hybrid-workers).
+> A hybrid worker can co-exist with both platforms: **Agent based (V1)** and **Extension based (V2)**. If you install Extension based (V2) on a hybrid worker already running Agent based (V1), then you would see two entries of the Hybrid Runbook Worker in the group. One with Platform Extension based (V2) and the other Agent based (V1). [**Learn more**](#migrate-an-existing-agent-based-to-extension-based-hybrid-workers).
## Prerequisites
Azure Automation stores and manages runbooks and then delivers them to one or mo
- Two cores - 4 GB of RAM-- **Non-Azure machines** must have the [Azure Connected Machine agent](../azure-arc/servers/agent-overview.md) installed. To install the `AzureConnectedMachineAgent`, see [Connect hybrid machines to Azure from the Azure portal](../azure-arc/servers/onboard-portal.md) for Arc-enabled servers or see [Manage VMware virtual machines Azure Arc](../azure-arc/vmware-vsphere/manage-vmware-vms-in-azure.md#enable-guest-management) to enable guest management for Arc-enabled VMware vSphere VMs.-- The system-assigned managed identity must be enabled on the Azure virtual machine, Arc-enabled server or Arc-enabled VMware vSphere VM. If the system-assigned managed identity isn't enabled, it will be enabled as part of the adding process.
+- **Non-Azure machines** must have the [Azure Connected Machine agent](../azure-arc/servers/agent-overview.md) installed. To install the `AzureConnectedMachineAgent`, see [Connect hybrid machines to Azure from the Azure portal](../azure-arc/servers/onboard-portal.md) for Arc-enabled servers. See [Install Arc agent for Arc-enabled VMware VMs](../azure-arc/vmware-vsphere/enable-guest-management-at-scale.md) to enable guest management for Arc-enabled VMware vSphere VMs and install [Arc agent for Arc-enabled SCVMM](../azure-arc/system-center-virtual-machine-manager/enable-guest-management-at-scale.md) to enable guest management for Arc-enabled SCVMM VMs.
+- The system-assigned managed identity must be enabled on the Azure virtual machine, Arc-enabled server, Arc-enabled VMware vSphere VM, or Arc-enabled SCVMM VM. If the system-assigned managed identity isn't enabled, it will be enabled as part of the adding process.
### Supported operating systems
To create a hybrid worker group in the Azure portal, follow these steps:
- If you select **Default**, the hybrid extension will be installed using the local system account. - If you select **Custom**, then from the drop-down list, select the credential asset.
-1. Select **Next** to advance to the **Hybrid workers** tab. You can select Azure virtual machines, Azure Arc-enabled servers or Azure Arc-enabled VMware vSphere (preview) to be added to this Hybrid worker group. If you don't select any machines, an empty Hybrid worker group will be created. You can still add machines later.
+1. Select **Next** to advance to the **Hybrid workers** tab. You can select Azure virtual machines, Azure Arc-enabled servers, Azure Arc-enabled VMware vSphere, and Arc-enabled SCVMM machines to be added to this Hybrid worker group. If you don't select any machines, an empty Hybrid worker group will be created. You can still add machines later.
:::image type="content" source="./media/extension-based-hybrid-runbook-worker-install/basics-tab-portal.png" alt-text="Screenshot showing to enter name and credentials in basics tab.":::
You can also add machines to an existing hybrid worker group.
1. Select the checkbox next to the machine(s) you want to add to the hybrid worker group.
- If you don't see your non-Azure machine listed, ensure Azure Arc Connected Machine agent is installed on the machine. To install the `AzureConnectedMachineAgent` see [Connect hybrid machines to Azure from the Azure portal](../azure-arc/servers/onboard-portal.md) for Arc-enabled servers or see [Manage VMware virtual machines Azure Arc](../azure-arc/vmware-vsphere/manage-vmware-vms-in-azure.md#enable-guest-management) to enable guest management for Arc-enabled VMware vSphere VMs.
+ If you don't see your non-Azure machine listed, ensure Azure Arc Connected Machine agent is installed on the machine. To install the `AzureConnectedMachineAgent` see [Connect hybrid machines to Azure from the Azure portal](../azure-arc/servers/onboard-portal.md) for Arc-enabled servers. See [Install Arc agent for Arc-enabled VMs](../azure-arc/vmware-vsphere/enable-guest-management-at-scale.md) to enable guest management for Arc-enabled VMware vSphere and [Install Arc agent for Arc-enabled SCVMM](../azure-arc/system-center-virtual-machine-manager/enable-guest-management-at-scale.md) to enable guest management for Arc-enabled SCVMM VMs.
1. Select **Add** to add the machine to the group.
- After adding, you can see the machine type as Azure virtual machine, Server-Azure Arc or VMware virtual machine-Azure Arc. The **Platform** field shows the worker as **Agent based (V1)** or **Extension based (V2)**.
+ After adding, you can see the machine type as Azure virtual machine, Machine - Azure Arc, Machine - Azure Arc (VMware), or Machine - Azure Arc SCVMM. The **Platform** field shows the worker as **Agent based (V1)** or **Extension based (V2)**.
:::image type="content" source="./media/extension-based-hybrid-runbook-worker-install/hybrid-worker-group-platform-inline.png" alt-text="Screenshot of platform field showing agent or extension based." lightbox="./media/extension-based-hybrid-runbook-worker-install/hybrid-worker-group-platform-expanded.png":::
Using [VM insights](../azure-monitor/vm/vminsights-overview.md), you can monitor
- To learn about Azure VM extensions, see [Azure VM extensions and features for Windows](../virtual-machines/extensions/features-windows.md) and [Azure VM extensions and features for Linux](../virtual-machines/extensions/features-linux.md). - To learn about VM extensions for Arc-enabled servers, see [VM extension management with Azure Arc-enabled servers](../azure-arc/servers/manage-vm-extensions.md).-- To learn about VM extensions for Arc-enabled VMware vSphere VMs, see [Manage VMware VMs in Azure through Arc-enabled VMware vSphere (preview)](../azure-arc/vmware-vsphere/manage-vmware-vms-in-azure.md).+
+- To learn about Azure management services for Arc-enabled VMware VMs, see [Install Arc agents at scale for your VMware VMs](../azure-arc/vmware-vsphere/enable-guest-management-at-scale.md).
+
+- To learn about Azure management services for Arc-enabled SCVMM VMs, see [Install Arc agents at scale for Arc-enabled SCVMM VMs](../azure-arc/system-center-virtual-machine-manager/enable-guest-management-at-scale.md).
azure-arc Plan At Scale Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/plan-at-scale-deployment.md
Title: Plan and deploy Azure Arc-enabled servers description: Learn how to enable a large number of machines to Azure Arc-enabled servers to simplify configuration of essential security, management, and monitoring capabilities in Azure. Previously updated : 05/04/2023 Last updated : 02/26/2024
Phase 3 is when administrators or system engineers can enable automation of manu
|Create a Resource Health alert |If a server stops sending heartbeats to Azure for longer than 15 minutes, it can mean that it is offline, the network connection has been blocked, or the agent is not running. Develop a plan for how youΓÇÖll respond and investigate these incidents and use [Resource Health alerts](../..//service-health/resource-health-alert-monitor-guide.md) to get notified when they start.<br><br> Specify the following when configuring the alert:<br> **Resource type** = **Azure Arc-enabled servers**<br> **Current resource status** = **Unavailable**<br> **Previous resource status** = **Available** | One hour | |Create an Azure Advisor alert | For the best experience and most recent security and bug fixes, we recommend keeping the Azure Connected Machine agent up to date. Out-of-date agents will be identified with an [Azure Advisor alert](../../advisor/advisor-alerts-portal.md).<br><br> Specify the following when configuring the alert:<br> **Recommendation type** = **Upgrade to the latest version of the Azure Connected Machine agent** | One hour | |[Assign Azure policies](../../governance/policy/assign-policy-portal.md) to your subscription or resource group scope |Assign the **Enable Azure Monitor for VMs** [policy](../../azure-monitor/vm/vminsights-enable-policy.md) (and others that meet your needs) to the subscription or resource group scope. Azure Policy allows you to assign policy definitions that install the required agents for VM insights across your environment.| Varies |
-|[Enable Update Management for your Azure Arc-enabled servers](../../automation/update-management/enable-from-automation-account.md) |Configure Update Management in Azure Automation to manage operating system updates for your Windows and Linux virtual machines registered with Azure Arc-enabled servers. | 15 minutes |
+|Enable [Azure Update Manager](/azure/update-manager/) for your Azure Arc-enabled servers. |Configure Azure Update Manager on your Arc-enabled servers to manage system updates for your Windows and Linux virtual machines. You can choose to [deploy updates on-demand](/azure/update-manager/deploy-updates?tabs=install-single-overview%2Cinstall-scale-overview) or [apply updates using custom schedule](/azure/update-manager/scheduled-patching?tabs=schedule-updates-single-machine%2Cschedule-updates-scale-overview%2Cwindows-maintenance). | 5 minutes |
## Next steps
azure-arc Private Link Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/private-link-security.md
For more information, see [Key Benefits of Private Link](../../private-link/pri
## How it works
-Azure Arc Private Link Scope connects private endpoints (and the virtual networks they're contained in) to an Azure resource, in this case Azure Arc-enabled servers. When you enable any one of the Azure Arc-enabled servers supported VM extensions, such as Azure Automation Update Management or Azure Monitor, those resources connect other Azure resources. Such as:
+Azure Arc Private Link Scope connects private endpoints (and the virtual networks they're contained in) to an Azure resource, in this case Azure Arc-enabled servers. When you enable any one of the Azure Arc-enabled servers supported VM extensions, such as Azure Monitor, those resources connect other Azure resources. Such as:
-- Log Analytics workspace, required for Azure Automation Update Management, Azure Automation Change Tracking and Inventory, Azure Monitor VM insights, and Azure Monitor log collection with Log Analytics agent.
+- Log Analytics workspace, required for Azure Automation Change Tracking and Inventory, Azure Monitor VM insights, and Azure Monitor log collection with Log Analytics agent.
- Azure Automation account, required for Update Management and Change Tracking and Inventory. - Azure Key Vault - Azure Blob storage, required for Custom Script Extension.
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/overview.md
Title: Overview of the Azure Connected System Center Virtual Machine Manager description: This article provides a detailed overview of the Azure Arc-enabled System Center Virtual Machine Manager. Previously updated : 02/23/2024 Last updated : 02/26/2024 ms.
In addition, SCVMM requires the following exception:
| **Service** | **Port** | **URL** | **Direction** | **Notes**| | | | | | |
-| SCVMM management Server | 443 | URL of the SCVMM management server | Appliance VM IP and control plane endpoint need outbound connection. | Used by the SCVMM server to communicate with the Appliance VM and the control plane. |
+| SCVMM Management Server | 443 | URL of the SCVMM management server. | Appliance VM IP and control plane endpoint need outbound connection. | Used by the SCVMM server to communicate with the Appliance VM and the control plane. |
+| WinRM | WinRM Port numbers (Default: 5985 and 5986). | URL of the WinRM service. | IPs in the IP Pool used by the Appliance VM and control plane need connection with the VMM server. | Used by the SCVMM server to communicate with the Appliance VM. |
[!INCLUDE [network-requirement-principles](../includes/network-requirement-principles.md)]
azure-arc Quickstart Connect System Center Virtual Machine Manager To Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/quickstart-connect-system-center-virtual-machine-manager-to-arc.md
ms. Previously updated : 2/23/2024 Last updated : 2/26/2024 # Customer intent: As a VI admin, I want to connect my VMM management server to Azure Arc.
This Quickstart shows you how to connect your SCVMM management server to Azure A
| **Requirement** | **Details** | | | | | **Azure** | An Azure subscription <br/><br/> A resource group in the above subscription where you have the *Owner/Contributor* role. |
-| **SCVMM** | You need an SCVMM management server running version 2019 or later.<br/><br/> A private cloud or a host group with a minimum free capacity of 32 GB of RAM, 4 vCPUs with 100 GB of free disk space. <br/><br/> A VM network with internet access, directly or through proxy. Appliance VM will be deployed using this VM network.<br/><br/> Only Static IP allocation is supported and VMM Static IP Pool is required. Follow [these steps](/system-center/vmm/network-pool?view=sc-vmm-2022&preserve-view=true) to create a VMM Static IP Pool and ensure that the Static IP Pool has at least four IP addresses. Dynamic IP allocation using DHCP isn't supported. <br/><br/> A library share with write permission for the SCVMM admin account through which Resource Bridge deployment is going to be performed. |
+| **SCVMM** | You need an SCVMM management server running version 2019 or later.<br/><br/> A private cloud or a host group with a minimum free capacity of 32 GB of RAM, 4 vCPUs with 100 GB of free disk space. <br/><br/> A VM network with internet access, directly or through proxy. Appliance VM will be deployed using this VM network.<br/><br/> Only Static IP allocation is supported and VMM Static IP Pool is required. Follow [these steps](/system-center/vmm/network-pool?view=sc-vmm-2022&preserve-view=true) to create a VMM Static IP Pool and ensure that the Static IP Pool has at least four IP addresses. If your SCVMM server is behind a firewall, all IPs in this IP Pool and the Control Plane IP should be allowed to communicate through WinRM ports. The default WinRM ports are 5985 and 5986. <br/><br/> Dynamic IP allocation using DHCP isn't supported. <br/><br/> A library share with write permission for the SCVMM admin account through which Resource Bridge deployment is going to be performed. |
| **SCVMM accounts** | An SCVMM admin account that can perform all administrative actions on all objects that VMM manages. <br/><br/> The user should be part of local administrator account in the SCVMM server. If the SCVMM server is installed in a High Availability configuration, the user should be a part of the local administrator accounts in all the SCVMM cluster nodes. <br/><br/>This will be used for the ongoing operation of Azure Arc-enabled SCVMM and the deployment of the Arc Resource bridge VM. | | **Workstation** | The workstation will be used to run the helper script.<br/><br/> A Windows/Linux machine that can access both your SCVMM management server and internet, directly or through proxy.<br/><br/> The helper script can be run directly from the VMM server machine as well.<br/><br/> To avoid network latency issues, we recommend executing the helper script directly in the VMM server machine.<br/><br/> Note that when you execute the script from a Linux machine, the deployment takes a bit longer and you might experience performance issues. |
azure-cache-for-redis Cache Best Practices Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-best-practices-kubernetes.md
If your Azure Cache for Redis client application runs on a Linux-based container
## Potential connection collision with _Istio/Envoy_
-Currently, Azure Cache for Redis uses ports 15000-15019 for clustered caches to expose cluster nodes to client applications. As documented [here](https://istio.io/latest/docs/ops/deployment/requirements/#ports-used-by-istio), the same ports are also used by _Istio.io_ sidecar proxy called _Envoy_ and could interfere with creating connections, especially on port 15006.
+Currently, Azure Cache for Redis uses ports 15xxx for clustered caches to expose cluster nodes to client applications. As documented [here](https://istio.io/latest/docs/ops/deployment/requirements/#ports-used-by-istio), the same ports are also used by the _Istio.io_ sidecar proxy called _Envoy_ and could interfere with creating connections, especially on ports 15001 and 15006.
When using _Istio_ with an Azure Cache for Redis cluster, consider excluding the potential collision ports with an [istio annotation](https://istio.io/latest/docs/reference/config/annotations/).
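For example, applying the annotation on the pod template keeps Envoy from intercepting outbound traffic to the cache's cluster ports. The port list and image in this sketch are placeholders; include the ports your cache actually uses.

```bash
# Sketch: exclude the cache's cluster ports from Envoy's outbound interception.
# The annotation goes on the pod template (interception is set up at injection time).
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis-client
  template:
    metadata:
      labels:
        app: redis-client
      annotations:
        # Placeholder ports; list every 15xxx port your clustered cache uses.
        traffic.sidecar.istio.io/excludeOutboundPorts: "15001,15006"
    spec:
      containers:
        - name: app
          image: myregistry.example.com/redis-client:latest  # placeholder image
EOF
```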
azure-cache-for-redis Cache How To Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-monitor.md
Under normal conditions, **Average** and **Max** are similar because only one no
Generally, **Average** shows you a smooth chart of your desired metric and reacts well to changes in time granularity. **Max** and **Min** can hide large changes in the metric if the time granularity is large but can be used with a small time granularity to help pinpoint exact times when large changes occur in the metric.
-The types **Count** and **ΓÇ£Sum** can be misleading for certain metrics (connected clients included). Instead, we suggest you look at the **Average** metrics and not the **Sum** metrics.
+The types **Count** and **Sum** can be misleading for certain metrics (connected clients included). Instead, we suggest you look at the **Average** metrics and not the **Sum** metrics.
> [!NOTE] > Even when the cache is idle with no connected active client applications, you might see some cache activity, such as connected clients, memory usage, and operations being performed. The activity is normal in the operation of cache.
In contrast, for clustered caches, we recommend using the metrics with the suffi
- **Export** ΓÇô when there's an issue related to Export RDB - **AADAuthenticationFailure** (preview) - when there's an authentication failure using Microsoft Entra access token - **AADTokenExpired** (preview) - when a Microsoft Entra access token used for authentication isn't renewed and it expires.
+> [!NOTE]
+> Metrics for errors aren't available when using the Enterprise Tiers.
+ - Evicted Keys - The number of items evicted from the cache during the specified reporting interval because of the `maxmemory` limit. - This number maps to `evicted_keys` from the Redis INFO command.
azure-cache-for-redis Cache Tutorial Semantic Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-tutorial-semantic-cache.md
Last updated 01/08/2024
# Tutorial: Use Azure Cache for Redis as a semantic cache
-In this tutorial, you use Azure Cache for Redis as a semantic cache with an AI-based large language model (LLM). You use Azure Open AI Service to generate LLM responses to queries and cache those responses using Azure Cache for Redis, delivering faster responses and lowering costs.
+In this tutorial, you use Azure Cache for Redis as a semantic cache with an AI-based large language model (LLM). You use Azure OpenAI Service to generate LLM responses to queries and cache those responses using Azure Cache for Redis, delivering faster responses and lowering costs.
Because Azure Cache for Redis offers built-in vector search capability, you can also perform _semantic caching_. You can return cached responses for identical queries and also for queries that are similar in meaning, even if the text isn't the same.
See [Deploy a model](/azure/ai-services/openai/how-to/create-resource?pivots=web
To successfully make a call against Azure OpenAI, you need an **endpoint** and a **key**. You also need an **endpoint** and a **key** to connect to Azure Cache for Redis.
-1. Go to your Azure Open AI resource in the Azure portal.
+1. Go to your Azure OpenAI resource in the Azure portal.
1. Locate **Endpoint and Keys** in the **Resource Management** section of your Azure OpenAI resource. Copy your endpoint and access key because you need both for authenticating your API calls. An example endpoint is: `https://docs-test-001.openai.azure.com`. You can use either `KEY1` or `KEY2`.
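   If you prefer the CLI, you can read the same values programmatically; the resource and group names here are placeholders for your own.

```azurecli-interactive
# Placeholders: substitute your Azure OpenAI resource name and resource group.
az cognitiveservices account show \
    --name docs-test-001 --resource-group my-openai-rg \
    --query properties.endpoint --output tsv

az cognitiveservices account keys list \
    --name docs-test-001 --resource-group my-openai-rg \
    --query key1 --output tsv
```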
azure-functions Create First Function Cli Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-node.md
Title: Create a JavaScript function from the command line - Azure Functions description: Learn how to create a JavaScript function from the command line, then publish the local Node.js project to serverless hosting in Azure Functions. Previously updated : 12/15/2023 Last updated : 02/26/2024 ms.devlang: javascript
In Azure Functions, a function project is a container for one or more individual
::: zone pivot="nodejs-model-v3" 1. In a suitable folder, run the [`func init`](functions-core-tools-reference.md#func-init) command, as follows, to create a JavaScript Node.js v3 project in the current folder:-
+
```console func init --javascript --model V3 ```
+
This folder now contains various files for the project, including configurations files named [local.settings.json](functions-develop-local.md#local-settings-file) and [host.json](functions-host-json.md). Because *local.settings.json* can contain secrets downloaded from Azure, the file is excluded from source control by default in the *.gitignore* file. 1. Add a function to your project by using the following command, where the `--name` argument is the unique name of your function (HttpExample) and the `--template` argument specifies the function's trigger (HTTP).-
+
```console func new --name HttpExample --template "HTTP trigger" --authlevel "anonymous" ```-
- [`func new`](functions-core-tools-reference.md#func-new) creates a subfolder matching the function name that contains a code file appropriate to the project's chosen language and a configuration file named *function.json*.
+
+ The [`func new`](functions-core-tools-reference.md#func-new) command creates a subfolder matching the function name that contains a code file appropriate to the project's chosen language and a configuration file named *function.json*.
You may find the [Azure Functions Core Tools reference](functions-core-tools-reference.md) helpful.
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
The Azure Monitoring Agent for Linux now officially supports various hardening s
Currently supported hardening standards: - SELinux - CIS Lvl 1 and 2<sup>1</sup>-
-On the roadmap
- STIG - FIPS
+- FedRAMP
| Operating system | Azure Monitor agent <sup>1</sup> | Log Analytics agent (legacy) <sup>1</sup> | Diagnostics extension <sup>2</sup>| |:|::|::|::|
azure-monitor Data Collection Transformations Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-transformations-structure.md
ms.reviewer: nikeist
# Structure of transformation in Azure Monitor
-[Transformations in Azure Monitor](./data-collection-transformations.md) allow you to filter or modify incoming data before it's stored in a Log Analytics workspace. They are implemented as a Kusto Query Language (KQL) statement in a [data collection rule (DCR)](data-collection-rule-overview.md). This article provides details on how this query is structured and limitations on the KQL language allowed.
+[Transformations in Azure Monitor](./data-collection-transformations.md) allow you to filter or modify incoming data before it's stored in a Log Analytics workspace. They're implemented as a Kusto Query Language (KQL) statement in a [data collection rule (DCR)](data-collection-rule-overview.md). This article provides details on how this query is structured and limitations on the KQL language allowed.
## Transformation structure
-The KQL statement is applied individually to each entry in the data source. It must understand the format of the incoming data and create output in the structure of the target table. The input stream is represented by a virtual table named `source` with columns matching the input data stream definition. Following is a typical example of a transformation. This example includes the following functionality:
+The KQL statement is applied individually to each entry in the data source. It must understand the format of the incoming data and create output in the structure of the target table. A virtual table named `source` represents the input stream, and its columns match the input data stream definition. Following is a typical example of a transformation. This example includes the following functionality:
- Filters the incoming data with a [where](/azure/data-explorer/kusto/query/whereoperator) statement - Adds a new column using the [extend](/azure/data-explorer/kusto/query/extendoperator) operator
The following [Bitwise operators](/azure/data-explorer/kusto/query/binoperators)
##### parse_cef_dictionary
-Given a string containing a CEF message, `parse_cef_dictionary` parses the Extension property of the message into a dynamic key/value object. Semicolon is a reserved character that should be replaced prior to passing the raw message into the method, as shown in the example below.
+Given a string containing a CEF message, `parse_cef_dictionary` parses the Extension property of the message into a dynamic key/value object. The semicolon is a reserved character that should be replaced before passing the raw message into the method, as shown in the example.
```kusto
| extend cefMessage=iff(cefMessage contains_cs ";", replace(";", " ", cefMessage), cefMessage)
```
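For intuition about the output, here's a rough Python equivalent of the key/value split the function performs on a CEF Extension string. This is a simplified sketch of standard CEF extension parsing, not the service's implementation, and it ignores CEF escape sequences.

```python
import re

def parse_cef_extension(extension: str) -> dict:
    # CEF extension values may themselves contain spaces, so split on
    # "key=" boundaries: capture lazily up to the next "word=" token or
    # the end of the string.
    return dict(re.findall(r"(\w+)=(.*?)(?=\s+\w+=|$)", extension))

print(parse_cef_extension("src=10.0.0.1 dst=10.0.0.2 msg=failed login attempt"))
# {'src': '10.0.0.1', 'dst': '10.0.0.2', 'msg': 'failed login attempt'}
```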
Given a string containing a CEF message, `parse_cef_dictionary` parses the Exten
:::image type="content" source="media/data-collection-transformations-structure/parse_cef_dictionary.png" alt-text="Sample output of parse_cef_dictionary function." lightbox="media/data-collection-transformations-structure/parse_cef_dictionary.png":::
+##### geo_location
+
+Given a string containing an IP address (IPv4 and IPv6 are supported), the `geo_location` function returns the approximate geographical location, including the following attributes:
+* Country
+* Region
+* State
+* City
+* Latitude
+* Longitude
+
+```kusto
+| extend GeoLocation = geo_location("1.0.0.5")
+```
++
+> [!IMPORTANT]
+> Due to the nature of the IP geolocation service utilized by this function, excessive use may introduce data ingestion latency. Exercise caution when using this function more than several times per transformation.
### Identifier quoting Use [Identifier quoting](/azure/data-explorer/kusto/query/schema-entities/entity-names?q=identifier#identifier-quoting) as required.
azure-monitor Prometheus Metrics Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-overview.md
Previously updated : 05/10/2023 Last updated : 01/25/2024 # Azure Monitor managed service for Prometheus
See [Azure Monitor service limits](../service-limits.md#prometheus-metrics) for
## Limitations/Known issues - Azure Monitor managed service for Prometheus - Scraping and storing metrics at frequencies less than 1 second isn't supported.-- Metrics with the same label names but different cases are rejected during ingestion (for example, `diskSize(cluster="eastus", node="node1", filesystem="usr_mnt", FileSystem="usr_opt")` is invalid due to the `filesystem` and `FileSystem` labels, and is rejected). - Microsoft Azure operated by 21Vianet cloud and Air gapped clouds aren't supported for Azure Monitor managed service for Prometheus. - To monitor Windows nodes & pods in your cluster(s), follow the steps outlined [here](../containers/kubernetes-monitoring-enable.md#enable-windows-metrics-collection-preview). - Azure Managed Grafana isn't currently available in the Azure US Government cloud. - Usage metrics (metrics under `Metrics` menu for the Azure Monitor workspace) - Ingestion quota limits and current usage for any Azure Monitor workspace aren't available yet in the US Government cloud. - During node updates, you might experience gaps lasting 1 to 2 minutes in some metric collections from our cluster-level collector. This gap is due to a regular action from Azure Kubernetes Service to update the nodes in your cluster. This behavior is expected and occurs due to the node it runs on being updated. None of our recommended alert rules are affected by this behavior. + ## Prometheus references Following are links to Prometheus documentation.
If you use the Azure portal to enable Prometheus metrics collection and install
- [Enable Azure Monitor managed service for Prometheus on your Kubernetes clusters](../containers/kubernetes-monitoring-enable.md). - [Configure Prometheus alerting and recording rules groups](prometheus-rule-groups.md).-- [Customize scraping of Prometheus metrics](prometheus-metrics-scrape-configuration.md).
+- [Customize scraping of Prometheus metrics](prometheus-metrics-scrape-configuration.md).
azure-monitor Troubleshoot Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/troubleshoot-workbooks.md
Title: Troubleshooting Azure Monitor workbook-based insights description: Provides troubleshooting guidance for Azure Monitor workbook-based insights for services like Azure Key Vault, Azure Cosmos DB, Azure Storage, and Azure Cache for Redis.+ + Last updated 06/17/2020
azure-monitor Logs Ingestion Api Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-ingestion-api-overview.md
Data sent to the ingestion API can be sent to the following tables:
| Tables | Description | |:|:| | Custom tables | Any custom table that you create in your Log Analytics workspace. The target table must exist before you can send data to it. Custom tables must have the `_CL` suffix. |
-| Azure tables | The following Azure tables are currently supported. Other tables may be added to this list as support for them is implemented.<br><br>- [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog)<br>- [SecurityEvents](/azure/azure-monitor/reference/tables/securityevent)<br>- [Syslog](/azure/azure-monitor/reference/tables/syslog)<br>- [WindowsEvents](/azure/azure-monitor/reference/tables/windowsevent)
+| Azure tables | The following Azure tables are currently supported. Other tables may be added to this list as support for them is implemented.<br><br>
+- [ADAssessmentRecommendation](/azure/azure-monitor/reference/tables/adassessmentrecommendation)<br>
+- [ADSecurityAssessmentRecommendation](/azure/azure-monitor/reference/tables/adsecurityassessmentrecommendation)<br>
+- [ASimAuditEventLogs](/azure/azure-monitor/reference/tables/asimauditeventlogs)<br>
+- [ASimAuthenticationEventLogs](/azure/azure-monitor/reference/tables/asimauthenticationeventlogs)<br>
+- [ASimDhcpEventLogs](/azure/azure-monitor/reference/tables/asimdhcpeventlogs)<br>
+- [ASimDnsActivityLogs](/azure/azure-monitor/reference/tables/asimdnsactivitylogs)<br>
+- ASimDnsAuditLogs<br>
+- [ASimFileEventLogs](/azure/azure-monitor/reference/tables/asimfileeventlogs)<br>
+- [ASimNetworkSessionLogs](/azure/azure-monitor/reference/tables/asimnetworksessionlogs)<br>
+- [ASimProcessEventLogs](/azure/azure-monitor/reference/tables/asimprocesseventlogs)<br>
+- [ASimRegistryEventLogs](/azure/azure-monitor/reference/tables/asimregistryeventlogs)<br>
+- [ASimUserManagementActivityLogs](/azure/azure-monitor/reference/tables/asimusermanagementactivitylogs)<br>
+- [ASimWebSessionLogs](/azure/azure-monitor/reference/tables/asimwebsessionlogs)<br>
+- [AWSCloudTrail](/azure/azure-monitor/reference/tables/awscloudtrail)<br>
+- [AWSCloudWatch](/azure/azure-monitor/reference/tables/awscloudwatch)<br>
+- [AWSGuardDuty](/azure/azure-monitor/reference/tables/awsguardduty)<br>
+- [AWSVPCFlow](/azure/azure-monitor/reference/tables/awsvpcflow)<br>
+- [AzureAssessmentRecommendation](/azure/azure-monitor/reference/tables/azureassessmentrecommendation)<br>
+- [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog)<br>
+- [DeviceTvmSecureConfigurationAssessmentKB](/azure/azure-monitor/reference/tables/devicetvmsecureconfigurationassessmentkb)<br>
+- [DeviceTvmSoftwareVulnerabilitiesKB](/azure/azure-monitor/reference/tables/devicetvmsoftwarevulnerabilitieskb)<br>
+- [ExchangeAssessmentRecommendation](/azure/azure-monitor/reference/tables/exchangeassessmentrecommendation)<br>
+- [ExchangeOnlineAssessmentRecommendation](/azure/azure-monitor/reference/tables/exchangeonlineassessmentrecommendation)<br>
+- [GCPAuditLogs](/azure/azure-monitor/reference/tables/gcpauditlogs)<br>
+- [GoogleCloudSCC](/azure/azure-monitor/reference/tables/googlecloudscc)<br>
+- [SCCMAssessmentRecommendation](/azure/azure-monitor/reference/tables/sccmassessmentrecommendation)<br>
+- [SCOMAssessmentRecommendation](/azure/azure-monitor/reference/tables/scomassessmentrecommendation)<br>
+- [SecurityEvent](/azure/azure-monitor/reference/tables/securityevent)<br>
+- [SfBAssessmentRecommendation](/azure/azure-monitor/reference/tables/sfbassessmentrecommendation)<br>
+- [SfBOnlineAssessmentRecommendation](/azure/azure-monitor/reference/tables/sfbonlineassessmentrecommendation)<br>
+- [SharePointOnlineAssessmentRecommendation](/azure/azure-monitor/reference/tables/sharepointonlineassessmentrecommendation)<br>
+- [SPAssessmentRecommendation](/azure/azure-monitor/reference/tables/spassessmentrecommendation)<br>
+- [SQLAssessmentRecommendation](/azure/azure-monitor/reference/tables/sqlassessmentrecommendation)<br>
+- StorageInsightsAccountPropertiesDaily<br>
+- StorageInsightsDailyMetrics<br>
+- StorageInsightsHourlyMetrics<br>
+- StorageInsightsMonthlyMetrics<br>
+- StorageInsightsWeeklyMetrics<br>
+- [Syslog](/azure/azure-monitor/reference/tables/syslog)<br>
+- [UCClient](/azure/azure-monitor/reference/tables/ucclient)<br>
+- [UCClientReadinessStatus](/azure/azure-monitor/reference/tables/ucclientreadinessstatus)<br>
+- [UCClientUpdateStatus](/azure/azure-monitor/reference/tables/ucclientupdatestatus)<br>
+- [UCDeviceAlert](/azure/azure-monitor/reference/tables/ucdevicealert)<br>
+- [UCDOAggregatedStatus](/azure/azure-monitor/reference/tables/ucdoaggregatedstatus)<br>
+- [UCDOStatus](/azure/azure-monitor/reference/tables/ucdostatus)<br>
+- [UCServiceUpdateStatus](/azure/azure-monitor/reference/tables/ucserviceupdatestatus)<br>
+- [UCUpdateAlert](/azure/azure-monitor/reference/tables/ucupdatealert)<br>
+- [WindowsClientAssessmentRecommendation](/azure/azure-monitor/reference/tables/windowsclientassessmentrecommendation)<br>
+- [WindowsEvent](/azure/azure-monitor/reference/tables/windowsevent)<br>
+- [WindowsServerAssessmentRecommendation](/azure/azure-monitor/reference/tables/windowsserverassessmentrecommendation)<br>
+ > [!NOTE] > Column names must start with a letter and can consist of up to 45 alphanumeric characters and underscores (`_`). `_ResourceId`, `id`, `_SubscriptionId`, `TenantId`, `Type`, `UniqueId`, and `Title` are reserved column names. Custom columns you add to an Azure table must have the suffix `_CF`.
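As a minimal sketch of sending records to one of these tables with the `azure-monitor-ingestion` Python package, the following assumes an existing data collection endpoint and rule; the endpoint URI, DCR immutable ID, stream name, and column names are placeholders.

```python
# Sketch of an ingestion API call; endpoint, rule ID, and stream name are
# placeholders for values from your own data collection rule (DCR).
from azure.identity import DefaultAzureCredential
from azure.monitor.ingestion import LogsIngestionClient

client = LogsIngestionClient(endpoint="<data-collection-endpoint-uri>",
                             credential=DefaultAzureCredential())

# Column names must follow the rules in the note above; custom columns
# added to an Azure table additionally need the _CF suffix.
logs = [{"TimeGenerated": "2024-02-26T00:00:00Z",
         "Computer": "web-01",
         "RawData": "heartbeat"}]

client.upload(rule_id="<dcr-immutable-id>",
              stream_name="Custom-MyTable_CL",
              logs=logs)
```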
azure-monitor Tables Feature Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tables-feature-support.md
The following list identifies the tables in a [Log Analytics workspace](log-anal
| [AADRiskyUsers](/azure/azure-monitor/reference/tables/aadriskyusers) | | | [AADServicePrincipalSignInLogs](/azure/azure-monitor/reference/tables/aadserviceprincipalsigninlogs) | | | [AADUserRiskEvents](/azure/azure-monitor/reference/tables/aaduserriskevents) | |
+| ABAPAuditLog | |
| [ABSBotRequests](/azure/azure-monitor/reference/tables/absbotrequests) | |
-| [ACRConnectedClientList](/azure/azure-monitor/reference/tables/acrconnectedclientlist) | |
| [ACSAuthIncomingOperations](/azure/azure-monitor/reference/tables/acsauthincomingoperations) | | | [ACSBillingUsage](/azure/azure-monitor/reference/tables/acsbillingusage) | |
-| [ACSCallDiagnostics](/azure/azure-monitor/reference/tables/acscalldiagnostics) | |
-| [ACSCallSummary](/azure/azure-monitor/reference/tables/acscallsummary) | |
| [ACSChatIncomingOperations](/azure/azure-monitor/reference/tables/acschatincomingoperations) | | | [ACSSMSIncomingOperations](/azure/azure-monitor/reference/tables/acssmsincomingoperations) | | | [ADAssessmentRecommendation](/azure/azure-monitor/reference/tables/adassessmentrecommendation) | |
-| [AddonAzureBackupAlerts](/azure/azure-monitor/reference/tables/AddonAzureBackupAlerts) | |
-| [AddonAzureBackupJobs](/azure/azure-monitor/reference/tables/AddonAzureBackupJobs) | |
-| [AddonAzureBackupPolicy](/azure/azure-monitor/reference/tables/AddonAzureBackupPolicy) | |
-| [AddonAzureBackupProtectedInstance](/azure/azure-monitor/reference/tables/AddonAzureBackupProtectedInstance) | |
-| [AddonAzureBackupStorage](/azure/azure-monitor/reference/tables/AddonAzureBackupStorage) | |
+| [AddonAzureBackupAlerts](/azure/azure-monitor/reference/tables/addonazurebackupalerts) | |
+| [AddonAzureBackupJobs](/azure/azure-monitor/reference/tables/addonazurebackupjobs) | |
+| [AddonAzureBackupPolicy](/azure/azure-monitor/reference/tables/addonazurebackuppolicy) | |
+| [AddonAzureBackupProtectedInstance](/azure/azure-monitor/reference/tables/addonazurebackupprotectedinstance) | |
+| [AddonAzureBackupStorage](/azure/azure-monitor/reference/tables/addonazurebackupstorage) | |
| [ADFActivityRun](/azure/azure-monitor/reference/tables/adfactivityrun) | |
-| [ADFAirflowSchedulerLogs](/azure/azure-monitor/reference/tables/ADFAirflowSchedulerLogs) | |
-| [ADFAirflowTaskLogs](/azure/azure-monitor/reference/tables/ADFAirflowTaskLogs) | |
-| [ADFAirflowWebLogs](/azure/azure-monitor/reference/tables/ADFAirflowWebLogs) | |
-| [ADFAirflowWorkerLogs](/azure/azure-monitor/reference/tables/ADFAirflowWorkerLogs) | |
+| [ADFAirflowSchedulerLogs](/azure/azure-monitor/reference/tables/adfairflowschedulerlogs) | |
+| [ADFAirflowTaskLogs](/azure/azure-monitor/reference/tables/adfairflowtasklogs) | |
+| [ADFAirflowWebLogs](/azure/azure-monitor/reference/tables/adfairflowweblogs) | |
+| [ADFAirflowWorkerLogs](/azure/azure-monitor/reference/tables/adfairflowworkerlogs) | |
| [ADFPipelineRun](/azure/azure-monitor/reference/tables/adfpipelinerun) | |
-| [ADFSandboxActivityRun](/azure/azure-monitor/reference/tables/ADFSandboxActivityRun) | |
-| [ADFSandboxPipelineRun](/azure/azure-monitor/reference/tables/ADFSandboxPipelineRun) | |
+| [ADFSandboxActivityRun](/azure/azure-monitor/reference/tables/adfsandboxactivityrun) | |
+| [ADFSandboxPipelineRun](/azure/azure-monitor/reference/tables/adfsandboxpipelinerun) | |
| [ADFSSignInLogs](/azure/azure-monitor/reference/tables/adfssigninlogs) | |
-| [ADFSSISIntegrationRuntimeLogs](/azure/azure-monitor/reference/tables/ADFSSISIntegrationRuntimeLogs) | |
-| [ADFSSISPackageEventMessageContext](/azure/azure-monitor/reference/tables/ADFSSISPackageEventMessageContext) | |
-| [ADFSSISPackageEventMessages](/azure/azure-monitor/reference/tables/ADFSSISPackageEventMessages) | |
-| [ADFSSISPackageExecutableStatistics](/azure/azure-monitor/reference/tables/ADFSSISPackageExecutableStatistics) | |
-| [ADFSSISPackageExecutionComponentPhases](/azure/azure-monitor/reference/tables/ADFSSISPackageExecutionComponentPhases) | |
-| [ADFSSISPackageExecutionDataStatistics](/azure/azure-monitor/reference/tables/ADFSSISPackageExecutionDataStatistics) | |
+| [ADFSSISIntegrationRuntimeLogs](/azure/azure-monitor/reference/tables/adfssisintegrationruntimelogs) | |
+| [ADFSSISPackageEventMessageContext](/azure/azure-monitor/reference/tables/adfssispackageeventmessagecontext) | |
+| [ADFSSISPackageEventMessages](/azure/azure-monitor/reference/tables/adfssispackageeventmessages) | |
+| [ADFSSISPackageExecutableStatistics](/azure/azure-monitor/reference/tables/adfssispackageexecutablestatistics) | |
+| [ADFSSISPackageExecutionComponentPhases](/azure/azure-monitor/reference/tables/adfssispackageexecutioncomponentphases) | |
+| [ADFSSISPackageExecutionDataStatistics](/azure/azure-monitor/reference/tables/adfssispackageexecutiondatastatistics) | |
| [ADFTriggerRun](/azure/azure-monitor/reference/tables/adftriggerrun) | | | [ADPAudit](/azure/azure-monitor/reference/tables/adpaudit) | | | [ADPDiagnostics](/azure/azure-monitor/reference/tables/adpdiagnostics) | |
The following list identifies the tables in a [Log Analytics workspace](log-anal
| [ADReplicationResult](/azure/azure-monitor/reference/tables/adreplicationresult) | | | [ADSecurityAssessmentRecommendation](/azure/azure-monitor/reference/tables/adsecurityassessmentrecommendation) | | | [ADTDigitalTwinsOperation](/azure/azure-monitor/reference/tables/adtdigitaltwinsoperation) | |
-| [ADTEventRoutesOperation](/azure/azure-monitor/reference/tables/adteventroutesoperation) | |
| [ADTModelsOperation](/azure/azure-monitor/reference/tables/adtmodelsoperation) | | | [ADTQueryOperation](/azure/azure-monitor/reference/tables/adtqueryoperation) | | | [ADXCommand](/azure/azure-monitor/reference/tables/adxcommand) | |
-| [ADXJournal](/azure/azure-monitor/reference/tables/ADXJournal) | |
+| [ADXJournal](/azure/azure-monitor/reference/tables/adxjournal) | |
| [ADXQuery](/azure/azure-monitor/reference/tables/adxquery) | |
-| [ADXTableDetails](/azure/azure-monitor/reference/tables/ADXTableDetails) | |
-| [ADXTableUsageStatistics](/azure/azure-monitor/reference/tables/ADXTableUsageStatistics) | |
+| [ADXTableDetails](/azure/azure-monitor/reference/tables/adxtabledetails) | |
+| [ADXTableUsageStatistics](/azure/azure-monitor/reference/tables/adxtableusagestatistics) | |
| [AegDeliveryFailureLogs](/azure/azure-monitor/reference/tables/aegdeliveryfailurelogs) | | | [AegPublishFailureLogs](/azure/azure-monitor/reference/tables/aegpublishfailurelogs) | |
-| [AEWAuditLogs](/azure/azure-monitor/reference/tables/aewauditlogs) | |
-| [AgriFoodApplicationAuditLogs](/azure/azure-monitor/reference/tables/agrifoodapplicationauditlogs) | |
-| [AgriFoodFarmManagementLogs](/azure/azure-monitor/reference/tables/agrifoodfarmmanagementlogs) | |
-| [AgriFoodFarmOperationLogs](/azure/azure-monitor/reference/tables/agrifoodfarmoperationlogs) | |
-| [AgriFoodInsightLogs](/azure/azure-monitor/reference/tables/agrifoodinsightlogs) | |
-| [AgriFoodJobProcessedLogs](/azure/azure-monitor/reference/tables/agrifoodjobprocessedlogs) | |
-| [AgriFoodModelInferenceLogs](/azure/azure-monitor/reference/tables/agrifoodmodelinferencelogs) | |
-| [AgriFoodProviderAuthLogs](/azure/azure-monitor/reference/tables/agrifoodproviderauthlogs) | |
-| [AgriFoodSatelliteLogs](/azure/azure-monitor/reference/tables/agrifoodsatellitelogs) | |
-| [AgriFoodWeatherLogs](/azure/azure-monitor/reference/tables/agrifoodweatherlogs) | |
-| [AirflowDagProcessingLogs](/azure/azure-monitor/reference/tables/AirflowDagProcessingLogs) | |
+| [AirflowDagProcessingLogs](/azure/azure-monitor/reference/tables/airflowdagprocessinglogs) | |
| [Alert](/azure/azure-monitor/reference/tables/alert) | | | [AlertEvidence](/azure/azure-monitor/reference/tables/alertevidence) | |
-| [AlertInfo](/azure/azure-monitor/reference/tables/AlertInfo) | |
-| [AmlComputeClusterEvent](/azure/azure-monitor/reference/tables/AmlComputeClusterEvent) | |
-| [AmlComputeCpuGpuUtilization](/azure/azure-monitor/reference/tables/AmlComputeCpuGpuUtilization) | |
-| [AmlComputeInstanceEvent](/azure/azure-monitor/reference/tables/AmlComputeInstanceEvent) | |
-| [AmlComputeJobEvent](/azure/azure-monitor/reference/tables/AmlComputeJobEvent) | |
-| [AmlDataLabelEvent](/azure/azure-monitor/reference/tables/AmlDataLabelEvent) | |
-| [AmlDataSetEvent](/azure/azure-monitor/reference/tables/AmlDataSetEvent) | |
-| [AmlDataStoreEvent](/azure/azure-monitor/reference/tables/AmlDataStoreEvent) | |
-| [AmlDeploymentEvent](/azure/azure-monitor/reference/tables/AmlDeploymentEvent) | |
-| [AmlEnvironmentEvent](/azure/azure-monitor/reference/tables/AmlEnvironmentEvent) | |
-| [AmlInferencingEvent](/azure/azure-monitor/reference/tables/AmlInferencingEvent) | |
-| [AmlModelsEvent](/azure/azure-monitor/reference/tables/AmlModelsEvent) | |
+| [AlertInfo](/azure/azure-monitor/reference/tables/alertinfo) | |
+| [AmlComputeClusterEvent](/azure/azure-monitor/reference/tables/amlcomputeclusterevent) | |
+| [AmlComputeCpuGpuUtilization](/azure/azure-monitor/reference/tables/amlcomputecpugpuutilization) | |
+| [AmlComputeInstanceEvent](/azure/azure-monitor/reference/tables/amlcomputeinstanceevent) | |
+| [AmlComputeJobEvent](/azure/azure-monitor/reference/tables/amlcomputejobevent) | |
+| [AmlDataLabelEvent](/azure/azure-monitor/reference/tables/amldatalabelevent) | |
+| [AmlDataSetEvent](/azure/azure-monitor/reference/tables/amldatasetevent) | |
+| [AmlDataStoreEvent](/azure/azure-monitor/reference/tables/amldatastoreevent) | |
+| [AmlDeploymentEvent](/azure/azure-monitor/reference/tables/amldeploymentevent) | |
+| [AmlEnvironmentEvent](/azure/azure-monitor/reference/tables/amlenvironmentevent) | |
+| [AmlInferencingEvent](/azure/azure-monitor/reference/tables/amlinferencingevent) | |
+| [AmlModelsEvent](/azure/azure-monitor/reference/tables/amlmodelsevent) | |
| [AmlOnlineEndpointConsoleLog](/azure/azure-monitor/reference/tables/amlonlineendpointconsolelog) | |
-| [AmlPipelineEvent](/azure/azure-monitor/reference/tables/AmlPipelineEvent) | |
-| [AmlRunEvent](/azure/azure-monitor/reference/tables/AmlRunEvent) | |
-| [AmlRunStatusChangedEvent](/azure/azure-monitor/reference/tables/AmlRunStatusChangedEvent) | |
-| [Anomalies](/azure/azure-monitor/reference/tables/Anomalies) | |
+| [AmlPipelineEvent](/azure/azure-monitor/reference/tables/amlpipelineevent) | |
+| [AmlRunEvent](/azure/azure-monitor/reference/tables/amlrunevent) | |
+| [AmlRunStatusChangedEvent](/azure/azure-monitor/reference/tables/amlrunstatuschangedevent) | |
+| [Anomalies](/azure/azure-monitor/reference/tables/anomalies) | |
| [ApiManagementGatewayLogs](/azure/azure-monitor/reference/tables/apimanagementgatewaylogs) | | | [AppAvailabilityResults](/azure/azure-monitor/reference/tables/appavailabilityresults) | | | [AppBrowserTimings](/azure/azure-monitor/reference/tables/appbrowsertimings) | |
-| [AppBrowserTimings](/azure/azure-monitor/reference/tables/AppBrowserTimings) | |
| [AppCenterError](/azure/azure-monitor/reference/tables/appcentererror) | | | [AppDependencies](/azure/azure-monitor/reference/tables/appdependencies) | | | [AppEvents](/azure/azure-monitor/reference/tables/appevents) | |
The following list identifies the tables in a [Log Analytics workspace](log-anal
| [AppMetrics](/azure/azure-monitor/reference/tables/appmetrics) | | | [AppPageViews](/azure/azure-monitor/reference/tables/apppageviews) | | | [AppPerformanceCounters](/azure/azure-monitor/reference/tables/appperformancecounters) | |
-| [AppPlatformIngressLogs](/azure/azure-monitor/reference/tables/AppPlatformIngressLogs) | |
-| [AppPlatformLogsforSpring](/azure/azure-monitor/reference/tables/AppPlatformLogsforSpring) | |
+| [AppPlatformIngressLogs](/azure/azure-monitor/reference/tables/appplatformingresslogs) | |
+| [AppPlatformLogsforSpring](/azure/azure-monitor/reference/tables/appplatformlogsforspring) | |
| [AppPlatformSystemLogs](/azure/azure-monitor/reference/tables/appplatformsystemlogs) | | | [AppRequests](/azure/azure-monitor/reference/tables/apprequests) | |
-| [AppServiceAntivirusScanAuditLogs](/azure/azure-monitor/reference/tables/AppServiceAntivirusScanAuditLogs) | |
+| [AppServiceAntivirusScanAuditLogs](/azure/azure-monitor/reference/tables/appserviceantivirusscanauditlogs) | |
| [AppServiceAppLogs](/azure/azure-monitor/reference/tables/appserviceapplogs) | | | [AppServiceAuditLogs](/azure/azure-monitor/reference/tables/appserviceauditlogs) | | | [AppServiceConsoleLogs](/azure/azure-monitor/reference/tables/appserviceconsolelogs) | |
-| [AppServiceEnvironmentPlatformLogs](/azure/azure-monitor/reference/tables/AppServiceEnvironmentPlatformLogs) | |
+| [AppServiceEnvironmentPlatformLogs](/azure/azure-monitor/reference/tables/appserviceenvironmentplatformlogs) | |
| [AppServiceFileAuditLogs](/azure/azure-monitor/reference/tables/appservicefileauditlogs) | | | [AppServiceHTTPLogs](/azure/azure-monitor/reference/tables/appservicehttplogs) | |
-| [AppServiceIPSecAuditLogs](/azure/azure-monitor/reference/tables/AppServiceIPSecAuditLogs) | |
+| [AppServiceIPSecAuditLogs](/azure/azure-monitor/reference/tables/appserviceipsecauditlogs) | |
| [AppServicePlatformLogs](/azure/azure-monitor/reference/tables/appserviceplatformlogs) | | | [AppSystemEvents](/azure/azure-monitor/reference/tables/appsystemevents) | | | [AppTraces](/azure/azure-monitor/reference/tables/apptraces) | |
-| [ASimAuditEventLogs](/azure/azure-monitor/reference/tables/ASimAuditEventLogs) | |
-| [ASimDnsActivityLogs](/azure/azure-monitor/reference/tables/ASimDnsActivityLogs) | |
+| [ASimAuditEventLogs](/azure/azure-monitor/reference/tables/asimauditeventlogs) | |
+| [ASimAuthenticationEventLogs](/azure/azure-monitor/reference/tables/asimauthenticationeventlogs) | |
+| [ASimDhcpEventLogs](/azure/azure-monitor/reference/tables/asimdhcpeventlogs) | |
+| [ASimDnsActivityLogs](/azure/azure-monitor/reference/tables/asimdnsactivitylogs) | |
+| ASimDnsAuditLogs | |
| [ASimFileEventLogs](/azure/azure-monitor/reference/tables/asimfileeventlogs) | |
-| [ASimNetworkSessionLogs](/azure/azure-monitor/reference/tables/ASimNetworkSessionLogs) | |
-| [ASimWebSessionLogs](/azure/azure-monitor/reference/tables/ASimWebSessionLogs) | |
-| [ATCExpressRouteCircuitIpfix](/azure/azure-monitor/reference/tables/atcexpressroutecircuitipfix) | |
+| [ASimNetworkSessionLogs](/azure/azure-monitor/reference/tables/asimnetworksessionlogs) | |
+| [ASimProcessEventLogs](/azure/azure-monitor/reference/tables/asimprocesseventlogs) | |
+| [ASimRegistryEventLogs](/azure/azure-monitor/reference/tables/asimregistryeventlogs) | |
+| [ASimUserManagementActivityLogs](/azure/azure-monitor/reference/tables/asimusermanagementactivitylogs) | |
+| [ASimWebSessionLogs](/azure/azure-monitor/reference/tables/asimwebsessionlogs) | |
| [AuditLogs](/azure/azure-monitor/reference/tables/auditlogs) | | | [AutoscaleEvaluationsLog](/azure/azure-monitor/reference/tables/autoscaleevaluationslog) | | | [AutoscaleScaleActionsLog](/azure/azure-monitor/reference/tables/autoscalescaleactionslog) | | | [AWSCloudTrail](/azure/azure-monitor/reference/tables/awscloudtrail) | |
-| [AWSCloudWatch](/azure/azure-monitor/reference/tables/AWSCloudWatch) | |
+| [AWSCloudWatch](/azure/azure-monitor/reference/tables/awscloudwatch) | |
| [AWSGuardDuty](/azure/azure-monitor/reference/tables/awsguardduty) | | | [AWSVPCFlow](/azure/azure-monitor/reference/tables/awsvpcflow) | | | [AzureAssessmentRecommendation](/azure/azure-monitor/reference/tables/azureassessmentrecommendation) | |
The following list identifies the tables in a [Log Analytics workspace](log-anal
| [CloudAppEvents](/azure/azure-monitor/reference/tables/cloudappevents) | | | [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog) | | | [ComputerGroup](/azure/azure-monitor/reference/tables/computergroup) | |
-| [ConfigurationChange](/azure/azure-monitor/reference/tables/ConfigurationChange) | |
-| [ConfigurationData](/azure/azure-monitor/reference/tables/configurationdata) | Partial support – some of the data is ingested through internal services that aren't supported.|
+| [ConfigurationChange](/azure/azure-monitor/reference/tables/configurationchange) | |
+| [ConfigurationData](/azure/azure-monitor/reference/tables/configurationdata) | Partial support – some of the data is ingested through internal services that aren't supported. |
| [ContainerImageInventory](/azure/azure-monitor/reference/tables/containerimageinventory) | | | [ContainerInventory](/azure/azure-monitor/reference/tables/containerinventory) | | | [ContainerLog](/azure/azure-monitor/reference/tables/containerlog) | | | [ContainerLogV2](/azure/azure-monitor/reference/tables/containerlogv2) | | | [ContainerNodeInventory](/azure/azure-monitor/reference/tables/containernodeinventory) | |
-| [ContainerRegistryLoginEvents](/azure/azure-monitor/reference/tables/ContainerRegistryLoginEvents) | |
-| [ContainerRegistryRepositoryEvents](/azure/azure-monitor/reference/tables/ContainerRegistryRepositoryEvents) | |
+| [ContainerRegistryLoginEvents](/azure/azure-monitor/reference/tables/containerregistryloginevents) | |
+| [ContainerRegistryRepositoryEvents](/azure/azure-monitor/reference/tables/containerregistryrepositoryevents) | |
| [ContainerServiceLog](/azure/azure-monitor/reference/tables/containerservicelog) | | | [CoreAzureBackup](/azure/azure-monitor/reference/tables/coreazurebackup) | | | [DatabricksAccounts](/azure/azure-monitor/reference/tables/databricksaccounts) | | | [DatabricksClusters](/azure/azure-monitor/reference/tables/databricksclusters) | | | [DatabricksDBFS](/azure/azure-monitor/reference/tables/databricksdbfs) | |
-| [DatabricksFeatureStore](/azure/azure-monitor/reference/tables/DatabricksFeatureStore) | |
-| [DatabricksGenie](/azure/azure-monitor/reference/tables/DatabricksGenie) | |
-| [DatabricksGlobalInitScripts](/azure/azure-monitor/reference/tables/DatabricksGlobalInitScripts) | |
+| [DatabricksFeatureStore](/azure/azure-monitor/reference/tables/databricksfeaturestore) | |
+| [DatabricksGenie](/azure/azure-monitor/reference/tables/databricksgenie) | |
+| [DatabricksGlobalInitScripts](/azure/azure-monitor/reference/tables/databricksglobalinitscripts) | |
| [DatabricksInstancePools](/azure/azure-monitor/reference/tables/databricksinstancepools) | | | [DatabricksJobs](/azure/azure-monitor/reference/tables/databricksjobs) | |
-| [DatabricksMLflowAcledArtifact](/azure/azure-monitor/reference/tables/DatabricksMLflowAcledArtifact) | |
-| [DatabricksMLflowExperiment](/azure/azure-monitor/reference/tables/DatabricksMLflowExperiment) | |
+| [DatabricksMLflowAcledArtifact](/azure/azure-monitor/reference/tables/databricksmlflowacledartifact) | |
+| [DatabricksMLflowExperiment](/azure/azure-monitor/reference/tables/databricksmlflowexperiment) | |
| [DatabricksNotebook](/azure/azure-monitor/reference/tables/databricksnotebook) | |
-| [DatabricksRemoteHistoryService](/azure/azure-monitor/reference/tables/DatabricksRemoteHistoryService) | |
+| [DatabricksRemoteHistoryService](/azure/azure-monitor/reference/tables/databricksremotehistoryservice) | |
| [DatabricksSecrets](/azure/azure-monitor/reference/tables/databrickssecrets) | | | [DatabricksSQLPermissions](/azure/azure-monitor/reference/tables/databrickssqlpermissions) | | | [DatabricksSSH](/azure/azure-monitor/reference/tables/databricksssh) | | | [DatabricksWorkspace](/azure/azure-monitor/reference/tables/databricksworkspace) | |
-| DefenderForSqlAlerts | |
+| [DataverseActivity](/azure/azure-monitor/reference/tables/dataverseactivity) | |
+| DefenderForSqlAlerts | |
| DefenderForSqlTelemetry | |
-| [DeviceEvents](/azure/azure-monitor/reference/tables/DeviceEvents) | |
-| [DeviceFileCertificateInfo](/azure/azure-monitor/reference/tables/DeviceFileCertificateInfo) | |
-| [DeviceFileEvents](/azure/azure-monitor/reference/tables/DeviceFileEvents) | |
-| [DeviceImageLoadEvents](/azure/azure-monitor/reference/tables/DeviceImageLoadEvents) | |
-| [DeviceInfo](/azure/azure-monitor/reference/tables/DeviceInfo) | |
-| [DeviceLogonEvents](/azure/azure-monitor/reference/tables/DeviceLogonEvents) | |
-| [DeviceNetworkEvents](/azure/azure-monitor/reference/tables/DeviceNetworkEvents) | |
+| [DeviceEvents](/azure/azure-monitor/reference/tables/deviceevents) | |
+| [DeviceFileCertificateInfo](/azure/azure-monitor/reference/tables/devicefilecertificateinfo) | |
+| [DeviceFileEvents](/azure/azure-monitor/reference/tables/devicefileevents) | |
+| [DeviceImageLoadEvents](/azure/azure-monitor/reference/tables/deviceimageloadevents) | |
+| [DeviceInfo](/azure/azure-monitor/reference/tables/deviceinfo) | |
+| [DeviceLogonEvents](/azure/azure-monitor/reference/tables/devicelogonevents) | |
+| [DeviceNetworkEvents](/azure/azure-monitor/reference/tables/devicenetworkevents) | |
| [DeviceNetworkInfo](/azure/azure-monitor/reference/tables/devicenetworkinfo) | |
-| [DeviceProcessEvents](/azure/azure-monitor/reference/tables/DeviceProcessEvents) | |
-| [DeviceRegistryEvents](/azure/azure-monitor/reference/tables/DeviceRegistryEvents) | |
-| [DeviceTvmSecureConfigurationAssessment](/azure/azure-monitor/reference/tables/DeviceTvmSecureConfigurationAssessment) | |
-| [DeviceTvmSecureConfigurationAssessmentKB](/azure/azure-monitor/reference/tables/DeviceTvmSecureConfigurationAssessmentKB) | |
-| [DeviceTvmSoftwareInventory](/azure/azure-monitor/reference/tables/DeviceTvmSoftwareInventory) | |
-| [DeviceTvmSoftwareVulnerabilities](/azure/azure-monitor/reference/tables/DeviceTvmSoftwareVulnerabilities) | |
-| [DeviceTvmSoftwareVulnerabilitiesKB](/azure/azure-monitor/reference/tables/DeviceTvmSoftwareVulnerabilitiesKB) | |
+| [DeviceProcessEvents](/azure/azure-monitor/reference/tables/deviceprocessevents) | |
+| [DeviceRegistryEvents](/azure/azure-monitor/reference/tables/deviceregistryevents) | |
+| [DeviceTvmSecureConfigurationAssessment](/azure/azure-monitor/reference/tables/devicetvmsecureconfigurationassessment) | |
+| [DeviceTvmSecureConfigurationAssessmentKB](/azure/azure-monitor/reference/tables/devicetvmsecureconfigurationassessmentkb) | |
+| [DeviceTvmSoftwareInventory](/azure/azure-monitor/reference/tables/devicetvmsoftwareinventory) | |
+| [DeviceTvmSoftwareVulnerabilities](/azure/azure-monitor/reference/tables/devicetvmsoftwarevulnerabilities) | |
+| [DeviceTvmSoftwareVulnerabilitiesKB](/azure/azure-monitor/reference/tables/devicetvmsoftwarevulnerabilitieskb) | |
| [DnsEvents](/azure/azure-monitor/reference/tables/dnsevents) | | | [DnsInventory](/azure/azure-monitor/reference/tables/dnsinventory) | |
-| [DynamicEventCollection](/azure/azure-monitor/reference/tables/DynamicEventCollection) | |
+| DummyHydrationFact | |
+| [DynamicEventCollection](/azure/azure-monitor/reference/tables/dynamiceventcollection) | |
| [Dynamics365Activity](/azure/azure-monitor/reference/tables/dynamics365activity) | | | [EmailAttachmentInfo](/azure/azure-monitor/reference/tables/emailattachmentinfo) | | | [EmailEvents](/azure/azure-monitor/reference/tables/emailevents) | | | [EmailPostDeliveryEvents](/azure/azure-monitor/reference/tables/emailpostdeliveryevents) | | | [EmailUrlInfo](/azure/azure-monitor/reference/tables/emailurlinfo) | |
-| [Event](/azure/azure-monitor/reference/tables/event) | Partial support. Data arriving from Log Analytics agent (MMA) or Azure Monitor Agent (AMA) is fully supported. Data arriving from Diagnostics Extension is collected through Azure storage. This path isn’t supported. |
+| [Event](/azure/azure-monitor/reference/tables/event) | Partial support. Data arriving from Log Analytics agent (MMA) or Azure Monitor Agent (AMA) is fully supported. Data arriving from Diagnostics Extension is collected through Azure storage. This path isn’t supported. |
| [ExchangeAssessmentRecommendation](/azure/azure-monitor/reference/tables/exchangeassessmentrecommendation) | |
-| [ExchangeOnlineAssessmentRecommendation](/azure/azure-monitor/reference/tables/ExchangeOnlineAssessmentRecommendation) | |
+| [ExchangeOnlineAssessmentRecommendation](/azure/azure-monitor/reference/tables/exchangeonlineassessmentrecommendation) | |
| [FailedIngestion](/azure/azure-monitor/reference/tables/failedingestion) | | | [FunctionAppLogs](/azure/azure-monitor/reference/tables/functionapplogs) | |
-| [GCPAuditLogs](/azure/azure-monitor/reference/tables/GCPAuditLogs) | |
+| [GCPAuditLogs](/azure/azure-monitor/reference/tables/gcpauditlogs) | |
+| [GoogleCloudSCC](/azure/azure-monitor/reference/tables/googlecloudscc) | |
| [HDInsightAmbariClusterAlerts](/azure/azure-monitor/reference/tables/hdinsightambariclusteralerts) | | | [HDInsightAmbariSystemMetrics](/azure/azure-monitor/reference/tables/hdinsightambarisystemmetrics) | | | [HDInsightHadoopAndYarnLogs](/azure/azure-monitor/reference/tables/hdinsighthadoopandyarnlogs) | |
The following list identifies the tables in a [Log Analytics workspace](log-anal
| [HDInsightHiveAndLLAPLogs](/azure/azure-monitor/reference/tables/hdinsighthiveandllaplogs) | | | [HDInsightHiveAndLLAPMetrics](/azure/azure-monitor/reference/tables/hdinsighthiveandllapmetrics) | | | [HDInsightHiveTezAppStats](/azure/azure-monitor/reference/tables/hdinsighthivetezappstats) | |
-| [HDInsightJupyterNotebookEvents](/azure/azure-monitor/reference/tables/hdinsightjupyternotebookevents) | |
+| [HDInsightKafkaLogs](/azure/azure-monitor/reference/tables/hdinsightkafkalogs) | |
| [HDInsightKafkaMetrics](/azure/azure-monitor/reference/tables/hdinsightkafkametrics) | | | [HDInsightOozieLogs](/azure/azure-monitor/reference/tables/hdinsightoozielogs) | |
-| [HDInsightRangerAuditLogs](/azure/azure-monitor/reference/tables/hdinsightrangerauditlogs) | |
| [HDInsightSecurityLogs](/azure/azure-monitor/reference/tables/hdinsightsecuritylogs) | | | [HDInsightSparkApplicationEvents](/azure/azure-monitor/reference/tables/hdinsightsparkapplicationevents) | | | [HDInsightSparkBlockManagerEvents](/azure/azure-monitor/reference/tables/hdinsightsparkblockmanagerevents) | |
The following list identifies the tables in a [Log Analytics workspace](log-anal
| [HDInsightSparkStageEvents](/azure/azure-monitor/reference/tables/hdinsightsparkstageevents) | | | [HDInsightSparkStageTaskAccumulables](/azure/azure-monitor/reference/tables/hdinsightsparkstagetaskaccumulables) | | | [HDInsightSparkTaskEvents](/azure/azure-monitor/reference/tables/hdinsightsparktaskevents) | |
-| [HealthStateChangeEvent](/azure/azure-monitor/reference/tables/HealthStateChangeEvent) | |
+| [HealthStateChangeEvent](/azure/azure-monitor/reference/tables/healthstatechangeevent) | |
| [HuntingBookmark](/azure/azure-monitor/reference/tables/huntingbookmark) | |
-| [IdentityDirectoryEvents](/azure/azure-monitor/reference/tables/IdentityDirectoryEvents) | |
-| [IdentityInfo](/azure/azure-monitor/reference/tables/IdentityInfo) | |
-| [IdentityLogonEvents](/azure/azure-monitor/reference/tables/IdentityLogonEvents) | |
-| [IdentityQueryEvents](/azure/azure-monitor/reference/tables/IdentityQueryEvents) | |
+| [IdentityDirectoryEvents](/azure/azure-monitor/reference/tables/identitydirectoryevents) | |
+| [IdentityInfo](/azure/azure-monitor/reference/tables/identityinfo) | |
+| [IdentityLogonEvents](/azure/azure-monitor/reference/tables/identitylogonevents) | |
+| [IdentityQueryEvents](/azure/azure-monitor/reference/tables/identityqueryevents) | |
| [InsightsMetrics](/azure/azure-monitor/reference/tables/insightsmetrics) | Partial support – some of the data is ingested through internal services that aren't supported. | | [IntuneAuditLogs](/azure/azure-monitor/reference/tables/intuneauditlogs) | | | [IntuneDevices](/azure/azure-monitor/reference/tables/intunedevices) | |
The following list identifies the tables in a [Log Analytics workspace](log-anal
| [KubeMonAgentEvents](/azure/azure-monitor/reference/tables/kubemonagentevents) | | | [KubeNodeInventory](/azure/azure-monitor/reference/tables/kubenodeinventory) | | | [KubePodInventory](/azure/azure-monitor/reference/tables/kubepodinventory) | |
-| [KubePVInventory](/azure/azure-monitor/reference/tables/KubePVInventory) | |
+| [KubePVInventory](/azure/azure-monitor/reference/tables/kubepvinventory) | |
| [KubeServices](/azure/azure-monitor/reference/tables/kubeservices) | | | [LAQueryLogs](/azure/azure-monitor/reference/tables/laquerylogs) | |
-| [LinuxAuditLog](/azure/azure-monitor/reference/tables/LinuxAuditLog) | |
+| [LinuxAuditLog](/azure/azure-monitor/reference/tables/linuxauditlog) | |
| [McasShadowItReporting](/azure/azure-monitor/reference/tables/mcasshadowitreporting) | | | [MCCEventLogs](/azure/azure-monitor/reference/tables/mcceventlogs) | | | [MicrosoftAzureBastionAuditLogs](/azure/azure-monitor/reference/tables/microsoftazurebastionauditlogs) | | | [MicrosoftDataShareReceivedSnapshotLog](/azure/azure-monitor/reference/tables/microsoftdatasharereceivedsnapshotlog) | | | [MicrosoftDataShareSentSnapshotLog](/azure/azure-monitor/reference/tables/microsoftdatasharesentsnapshotlog) | |
-| [MicrosoftDataShareShareLog](/azure/azure-monitor/reference/tables/microsoftdatasharesharelog) | |
-| [MicrosoftGraphActivityLogs](/azure/azure-monitor/reference/tables/MicrosoftGraphActivityLogs) | |
+| [MicrosoftGraphActivityLogs](/azure/azure-monitor/reference/tables/microsoftgraphactivitylogs) | |
| [MicrosoftHealthcareApisAuditLogs](/azure/azure-monitor/reference/tables/microsofthealthcareapisauditlogs) | |
-| [MicrosoftPurviewInformationProtection](/azure/azure-monitor/reference/tables/MicrosoftPurviewInformationProtection) | |
-| [NetworkAccessTraffic](/azure/azure-monitor/reference/tables/NetworkAccessTraffic) | |
-| [NetworkMonitoring](/azure/azure-monitor/reference/tables/NetworkMonitoring) | |
-| [NTAIpDetails](/azure/azure-monitor/reference/tables/NTAIpDetails) | |
-| [NTANetAnalytics](/azure/azure-monitor/reference/tables/NTANetAnalytics) | |
-| [NTATopologyDetails](/azure/azure-monitor/reference/tables/NTATopologyDetails) | |
+| [MicrosoftPurviewInformationProtection](/azure/azure-monitor/reference/tables/microsoftpurviewinformationprotection) | |
+| [NetworkAccessTraffic](/azure/azure-monitor/reference/tables/networkaccesstraffic) | |
+| [NetworkMonitoring](/azure/azure-monitor/reference/tables/networkmonitoring) | |
+| [NTAIpDetails](/azure/azure-monitor/reference/tables/ntaipdetails) | |
+| [NTANetAnalytics](/azure/azure-monitor/reference/tables/ntanetanalytics) | |
+| [NTATopologyDetails](/azure/azure-monitor/reference/tables/ntatopologydetails) | |
| [NWConnectionMonitorPathResult](/azure/azure-monitor/reference/tables/nwconnectionmonitorpathresult) | | | [NWConnectionMonitorTestResult](/azure/azure-monitor/reference/tables/nwconnectionmonitortestresult) | | | [OfficeActivity](/azure/azure-monitor/reference/tables/officeactivity) | |
-| [Perf](/azure/azure-monitor/reference/tables/perf) | Partial support – only Windows perf data is currently supported. |
-| [PowerBIActivity](/azure/azure-monitor/reference/tables/PowerBIActivity) | |
+| [Perf](/azure/azure-monitor/reference/tables/perf) | Partial support – only Windows perf data is currently supported. |
+| [PowerAppsActivity](/azure/azure-monitor/reference/tables/powerappsactivity) | |
+| [PowerAutomateActivity](/azure/azure-monitor/reference/tables/powerautomateactivity) | |
+| [PowerBIActivity](/azure/azure-monitor/reference/tables/powerbiactivity) | |
| [PowerBIDatasetsWorkspace](/azure/azure-monitor/reference/tables/powerbidatasetsworkspace) | |
+| [PowerPlatformAdminActivity](/azure/azure-monitor/reference/tables/powerplatformadminactivity) | |
+| [PowerPlatformConnectorActivity](/azure/azure-monitor/reference/tables/powerplatformconnectoractivity) | |
+| [PowerPlatformDlpActivity](/azure/azure-monitor/reference/tables/powerplatformdlpactivity) | |
| ProcessInvestigator | |
-| [ProjectActivity](/azure/azure-monitor/reference/tables/ProjectActivity) | |
-| [ProtectionStatus](/azure/azure-monitor/reference/tables/ProtectionStatus) | |
+| [ProjectActivity](/azure/azure-monitor/reference/tables/projectactivity) | |
+| [ProtectionStatus](/azure/azure-monitor/reference/tables/protectionstatus) | |
| [PurviewScanStatusLogs](/azure/azure-monitor/reference/tables/purviewscanstatuslogs) | | | RomeDetectionEvent | | | [SCCMAssessmentRecommendation](/azure/azure-monitor/reference/tables/sccmassessmentrecommendation) | | | [SCOMAssessmentRecommendation](/azure/azure-monitor/reference/tables/scomassessmentrecommendation) | |
-| [SecureScoreControls](/azure/azure-monitor/reference/tables/SecureScoreControls) | |
-| [SecureScores](/azure/azure-monitor/reference/tables/SecureScores) | |
+| [SecureScoreControls](/azure/azure-monitor/reference/tables/securescorecontrols) | |
+| [SecureScores](/azure/azure-monitor/reference/tables/securescores) | |
| [SecurityAlert](/azure/azure-monitor/reference/tables/securityalert) | | | [SecurityBaseline](/azure/azure-monitor/reference/tables/securitybaseline) | | | [SecurityBaselineSummary](/azure/azure-monitor/reference/tables/securitybaselinesummary) | |
The following list identifies the tables in a [Log Analytics workspace](log-anal
| [SecurityIoTRawEvent](/azure/azure-monitor/reference/tables/securityiotrawevent) | | | [SecurityNestedRecommendation](/azure/azure-monitor/reference/tables/securitynestedrecommendation) | | | [SecurityRecommendation](/azure/azure-monitor/reference/tables/securityrecommendation) | |
-| [SecurityRegulatoryCompliance](/azure/azure-monitor/reference/tables/SecurityRegulatoryCompliance) | |
+| [SecurityRegulatoryCompliance](/azure/azure-monitor/reference/tables/securityregulatorycompliance) | |
| [SentinelHealth](/azure/azure-monitor/reference/tables/sentinelhealth) | | | ServiceMap | | | [SfBAssessmentRecommendation](/azure/azure-monitor/reference/tables/sfbassessmentrecommendation) | |
The following list identifies the tables in a [Log Analytics workspace](log-anal
| [SigninLogs](/azure/azure-monitor/reference/tables/signinlogs) | | | [SPAssessmentRecommendation](/azure/azure-monitor/reference/tables/spassessmentrecommendation) | | | [SQLAssessmentRecommendation](/azure/azure-monitor/reference/tables/sqlassessmentrecommendation) | |
-| [SqlAtpStatus](/azure/azure-monitor/reference/tables/SqlAtpStatus) | |
+| [SqlAtpStatus](/azure/azure-monitor/reference/tables/sqlatpstatus) | |
| [SQLSecurityAuditEvents](/azure/azure-monitor/reference/tables/sqlsecurityauditevents) | |
-| [SqlThreatProtectionLoginAudits](/azure/azure-monitor/reference/tables/SqlThreatProtectionLoginAudits) | |
-| [SqlVulnerabilityAssessmentResult](/azure/azure-monitor/reference/tables/SqlVulnerabilityAssessmentResult) | |
-| [SqlVulnerabilityAssessmentScanStatus](/azure/azure-monitor/reference/tables/SqlVulnerabilityAssessmentScanStatus) | |
-| [StorageBlobLogs](/azure/azure-monitor/reference/tables/StorageBlobLogs) | |
-| [StorageFileLogs](/azure/azure-monitor/reference/tables/StorageFileLogs) | |
-| [StorageQueueLogs](/azure/azure-monitor/reference/tables/StorageQueueLogs) | |
-| [StorageTableLogs](/azure/azure-monitor/reference/tables/StorageTableLogs) | |
+| [SqlThreatProtectionLoginAudits](/azure/azure-monitor/reference/tables/sqlthreatprotectionloginaudits) | |
+| [SqlVulnerabilityAssessmentResult](/azure/azure-monitor/reference/tables/sqlvulnerabilityassessmentresult) | |
+| [SqlVulnerabilityAssessmentScanStatus](/azure/azure-monitor/reference/tables/sqlvulnerabilityassessmentscanstatus) | |
+| [StorageBlobLogs](/azure/azure-monitor/reference/tables/storagebloblogs) | |
+| [StorageFileLogs](/azure/azure-monitor/reference/tables/storagefilelogs) | |
+| StorageInsightsAccountPropertiesDaily | |
+| StorageInsightsDailyMetrics | |
+| StorageInsightsHourlyMetrics | |
+| StorageInsightsMonthlyMetrics | |
+| StorageInsightsWeeklyMetrics | |
+| [StorageQueueLogs](/azure/azure-monitor/reference/tables/storagequeuelogs) | |
+| [StorageTableLogs](/azure/azure-monitor/reference/tables/storagetablelogs) | |
| [SucceededIngestion](/azure/azure-monitor/reference/tables/succeededingestion) | | | [SynapseBigDataPoolApplicationsEnded](/azure/azure-monitor/reference/tables/synapsebigdatapoolapplicationsended) | | | [SynapseBuiltinSqlPoolRequestsEnded](/azure/azure-monitor/reference/tables/synapsebuiltinsqlpoolrequestsended) | |
-| [SynapseDXFailedIngestion](/azure/azure-monitor/reference/tables/SynapseDXFailedIngestion) | |
-| [SynapseDXSucceededIngestion](/azure/azure-monitor/reference/tables/SynapseDXSucceededIngestion) | |
+| [SynapseDXFailedIngestion](/azure/azure-monitor/reference/tables/synapsedxfailedingestion) | |
+| [SynapseDXSucceededIngestion](/azure/azure-monitor/reference/tables/synapsedxsucceededingestion) | |
| [SynapseGatewayApiRequests](/azure/azure-monitor/reference/tables/synapsegatewayapirequests) | | | [SynapseIntegrationActivityRuns](/azure/azure-monitor/reference/tables/synapseintegrationactivityruns) | | | [SynapseIntegrationPipelineRuns](/azure/azure-monitor/reference/tables/synapseintegrationpipelineruns) | |
The following list identifies the tables in a [Log Analytics workspace](log-anal
| [SynapseSqlPoolRequestSteps](/azure/azure-monitor/reference/tables/synapsesqlpoolrequeststeps) | | | [SynapseSqlPoolSqlRequests](/azure/azure-monitor/reference/tables/synapsesqlpoolsqlrequests) | | | [SynapseSqlPoolWaits](/azure/azure-monitor/reference/tables/synapsesqlpoolwaits) | |
-| [Syslog](/azure/azure-monitor/reference/tables/syslog) | Partial support – data arriving from Log Analytics agent (MMA) or Azure Monitor Agent (AMA) is fully supported. Data arriving via Diagnostics Extension agent is collected through storage; this path isn't supported. |
+| [Syslog](/azure/azure-monitor/reference/tables/syslog) | Partial support – data arriving from Log Analytics agent (MMA) or Azure Monitor Agent (AMA) is fully supported. Data arriving via Diagnostics Extension agent is collected through storage; this path isn't supported. |
| [ThreatIntelligenceIndicator](/azure/azure-monitor/reference/tables/threatintelligenceindicator) | |
-| [TSIIngress](/azure/azure-monitor/reference/tables/TSIIngress) | |
-| [UCClient](/azure/azure-monitor/reference/tables/UCClient) | |
-| [UCClientReadinessStatus](/azure/azure-monitor/reference/tables/UCClientReadinessStatus) | |
-| [UCClientUpdateStatus](/azure/azure-monitor/reference/tables/UCClientUpdateStatus) | |
-| [UCDeviceAlert](/azure/azure-monitor/reference/tables/UCDeviceAlert) | |
-| [UCDOAggregatedStatus](/azure/azure-monitor/reference/tables/UCDOAggregatedStatus) | |
-| [UCDOStatus](/azure/azure-monitor/reference/tables/UCDOStatus) | |
-| [UCServiceUpdateStatus](/azure/azure-monitor/reference/tables/UCServiceUpdateStatus) | |
-| [UCUpdateAlert](/azure/azure-monitor/reference/tables/UCUpdateAlert) | |
+| [TSIIngress](/azure/azure-monitor/reference/tables/tsiingress) | |
+| [UCClient](/azure/azure-monitor/reference/tables/ucclient) | |
+| [UCClientReadinessStatus](/azure/azure-monitor/reference/tables/ucclientreadinessstatus) | |
+| [UCClientUpdateStatus](/azure/azure-monitor/reference/tables/ucclientupdatestatus) | |
+| [UCDeviceAlert](/azure/azure-monitor/reference/tables/ucdevicealert) | |
+| [UCDOAggregatedStatus](/azure/azure-monitor/reference/tables/ucdoaggregatedstatus) | |
+| [UCDOStatus](/azure/azure-monitor/reference/tables/ucdostatus) | |
+| [UCServiceUpdateStatus](/azure/azure-monitor/reference/tables/ucserviceupdatestatus) | |
+| [UCUpdateAlert](/azure/azure-monitor/reference/tables/ucupdatealert) | |
| [Update](/azure/azure-monitor/reference/tables/update) | Partial support – some of the data is ingested through internal services that aren't supported. | | [UpdateRunProgress](/azure/azure-monitor/reference/tables/updaterunprogress) | | | [UpdateSummary](/azure/azure-monitor/reference/tables/updatesummary) | |
-| [UrlClickEvents](/azure/azure-monitor/reference/tables/UrlClickEvents) | |
-| [UserAccessAnalytics](/azure/azure-monitor/reference/tables/useraccessanalytics) | |
-| [UserPeerAnalytics](/azure/azure-monitor/reference/tables/userpeeranalytics) | |
-| [W3CIISLog](/azure/azure-monitor/reference/tables/W3CIISLog) | |
-| [WaaSDeploymentStatus](/azure/azure-monitor/reference/tables/WaaSDeploymentStatus) | |
-| [WaaSInsiderStatus](/azure/azure-monitor/reference/tables/WaaSInsiderStatus) | |
-| [WaaSUpdateStatus](/azure/azure-monitor/reference/tables/WaaSUpdateStatus) | |
+| [UrlClickEvents](/azure/azure-monitor/reference/tables/urlclickevents) | |
+| [W3CIISLog](/azure/azure-monitor/reference/tables/w3ciislog) | |
+| [WaaSDeploymentStatus](/azure/azure-monitor/reference/tables/waasdeploymentstatus) | |
+| [WaaSInsiderStatus](/azure/azure-monitor/reference/tables/waasinsiderstatus) | |
+| [WaaSUpdateStatus](/azure/azure-monitor/reference/tables/waasupdatestatus) | |
| [Watchlist](/azure/azure-monitor/reference/tables/watchlist) | |
-| [WebPubSubConnectivity](/azure/azure-monitor/reference/tables/WebPubSubConnectivity) | |
-| [WebPubSubHttpRequest](/azure/azure-monitor/reference/tables/WebPubSubHttpRequest) | |
-| [WebPubSubMessaging](/azure/azure-monitor/reference/tables/WebPubSubMessaging) | |
-| [WindowsClientAssessmentRecommendation](/azure/azure-monitor/reference/tables/WindowsClientAssessmentRecommendation) | |
+| [WebPubSubConnectivity](/azure/azure-monitor/reference/tables/webpubsubconnectivity) | |
+| [WebPubSubHttpRequest](/azure/azure-monitor/reference/tables/webpubsubhttprequest) | |
+| [WebPubSubMessaging](/azure/azure-monitor/reference/tables/webpubsubmessaging) | |
+| [WindowsClientAssessmentRecommendation](/azure/azure-monitor/reference/tables/windowsclientassessmentrecommendation) | |
| [WindowsEvent](/azure/azure-monitor/reference/tables/windowsevent) | | | [WindowsFirewall](/azure/azure-monitor/reference/tables/windowsfirewall) | |
-| [WindowsServerAssessmentRecommendation](/azure/azure-monitor/reference/tables/WindowsServerAssessmentRecommendation) | |
+| [WindowsServerAssessmentRecommendation](/azure/azure-monitor/reference/tables/windowsserverassessmentrecommendation) | |
| [WireData](/azure/azure-monitor/reference/tables/wiredata) | Partial support – some of the data is ingested through internal services that aren't supported. | | [WorkloadDiagnosticLogs](/azure/azure-monitor/reference/tables/workloaddiagnosticlogs) | |
-| [WUDOAggregatedStatus](/azure/azure-monitor/reference/tables/WUDOAggregatedStatus) | |
-| [WUDOStatus](/azure/azure-monitor/reference/tables/WUDOStatus) | |
+| [WUDOAggregatedStatus](/azure/azure-monitor/reference/tables/wudoaggregatedstatus) | |
+| [WUDOStatus](/azure/azure-monitor/reference/tables/wudostatus) | |
| [WVDAgentHealthStatus](/azure/azure-monitor/reference/tables/wvdagenthealthstatus) | | | [WVDCheckpoints](/azure/azure-monitor/reference/tables/wvdcheckpoints) | |
-| [WVDConnectionNetworkData](/azure/azure-monitor/reference/tables/WVDConnectionNetworkData) | |
+| [WVDConnectionNetworkData](/azure/azure-monitor/reference/tables/wvdconnectionnetworkdata) | |
| [WVDConnections](/azure/azure-monitor/reference/tables/wvdconnections) | | | [WVDErrors](/azure/azure-monitor/reference/tables/wvderrors) | | | [WVDFeeds](/azure/azure-monitor/reference/tables/wvdfeeds) | |
-| [WVDHostRegistrations](/azure/azure-monitor/reference/tables/WVDHostRegistrations) | |
+| [WVDHostRegistrations](/azure/azure-monitor/reference/tables/wvdhostregistrations) | |
| [WVDManagement](/azure/azure-monitor/reference/tables/wvdmanagement) | |+
azure-monitor Vminsights Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-performance.md
Last updated 09/28/2023
VM insights includes a set of performance charts that target several key [performance indicators](vminsights-log-query.md#performance-records) to help you determine how well a virtual machine is performing. The charts show resource utilization over a period of time. You can use them to identify bottlenecks and anomalies. You can also switch to a perspective that lists each machine to view resource utilization based on the metric selected.
-VM insights monitors key operating system performance indicators related to processor, memory, network adapter, and disk utilization. Performance complements the health monitoring feature and helps to:
+VM insights monitors key operating system performance indicators related to processor, memory, network adapter, and disk utilization. Performance helps to:
- Expose issues that indicate a possible system component failure.
- Support tuning and optimization to achieve efficiency.
To access from Azure Monitor:
<!-- convertborder later --> :::image type="content" source="media/vminsights-performance/vminsights-performance-aggview-01.png" lightbox="media/vminsights-performance/vminsights-performance-aggview-01.png" alt-text="Screenshot that shows a VM insights Performance Top N List view." border="false":::
-On the **Top N Charts** tab, if you have more than one Log Analytics workspace, select the workspace enabled with the solution from the **Workspace** selector at the top of the page. The **Group** selector returns subscriptions, resource groups, [computer groups](../logs/computer-groups.md), and virtual machine scale sets of computers related to the selected workspace that you can use to further filter results presented in the charts on this page and across the other pages. Your selection only applies to the Performance feature and doesn't carry over to Health or Map.
+On the **Top N Charts** tab, if you have more than one Log Analytics workspace, select the workspace enabled with the solution from the **Workspace** selector at the top of the page. The **Group** selector returns subscriptions, resource groups, [computer groups](../logs/computer-groups.md), and virtual machine scale sets of computers related to the selected workspace that you can use to further filter results presented in the charts on this page and across the other pages. Your selection only applies to the Performance feature and doesn't carry over to Map.
By default, the charts show performance counters for the last hour. By using the **TimeRange** selector, you can query for historical time ranges of up to 30 days to show how performance looked in the past.
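The same historical window can also be queried outside the portal. The following is a minimal sketch using the `azure-monitor-query` Python package, assuming VM insights writes to a Log Analytics workspace you can read; the workspace ID is a placeholder, and `InsightsMetrics` is the table VM insights uses for these performance counters.

```python
# pip install azure-monitor-query azure-identity
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Placeholder; use the workspace that VM insights writes to.
WORKSPACE_ID = "00000000-0000-0000-0000-000000000000"

client = LogsQueryClient(DefaultAzureCredential())

# VM insights stores performance counters in the InsightsMetrics table.
# This query averages CPU utilization per computer, per hour.
query = """
InsightsMetrics
| where Namespace == "Processor" and Name == "UtilizationPercentage"
| summarize avg(Val) by bin(TimeGenerated, 1h), Computer
| order by TimeGenerated asc
"""

# Query the same 30-day maximum window the TimeRange selector allows.
response = client.query_workspace(WORKSPACE_ID, query, timespan=timedelta(days=30))

for table in response.tables:
    for row in table.rows:
        print(row)
```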
Five capacity utilization charts are shown on the page:
* **Bytes Sent Rate**: Shows the top five machines with the highest average of bytes sent.
* **Bytes Receive Rate**: Shows the top five machines with the highest average of bytes received.
+>[!NOTE]
+>Each chart described above shows only the top five machines.
+>
+ Selecting the pushpin icon in the upper-right corner of a chart pins it to the last Azure dashboard you viewed. From the dashboard, you can resize and reposition the chart. Selecting the chart from the dashboard redirects you to VM insights and loads the correct scope and view.

Select the icon to the left of the pushpin icon on a chart to open the **Top N List** view. This list view shows the resource utilization for a performance metric by individual VM. It also shows which machine is trending the highest.
azure-portal Learn Training https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/mobile-app/learn-training.md
+
+ Title: Learn about Azure in the Azure mobile app
+description: The Microsoft Learn features in the Azure mobile app help you learn Azure skills anytime, anywhere.
Last updated : 02/26/2024+++
+# Learn about Azure in the Azure mobile app
+
+The Microsoft Learn features on the Azure mobile app are designed to help you learn anytime, anywhere. Browse and view the most popular Azure modules, with training on fundamentals, security, AI, and more. Access the Azure certification page, Azure Learn landing page, Q&A, and other useful pages.
+
+In this article, we'll walk through some of the features you can use to access training content and grow your Azure skills, right from within the app. With the Azure mobile app, you can learn Azure at your own pace and convenience.
+
+You can access the **Learn** page from [Azure mobile app **Home**](home.md).
+
+## Most popular lessons
+
+When you arrive at the **Learn** page in the Azure mobile app, the **Most popular lessons** section appears first. These modules are the highest-viewed Azure content and can be completed in a short amount of time.
++
+Each lesson card shows information about the module, including the title, average time to complete, and user rating.
+
+To see more of the most popular lessons, select **More** in the top right. The current top 10 most popular lessons are shown.
++
+To start a lesson, just select it to begin. Remember to sign in to your Microsoft account to save your progress!
+
+## Learn Links
+
+The **Learn Links** section shows buttons that take you to different experiences across Microsoft Learn, including:
+
+- **Azure Learn**: Shows learning paths and other resources to help you build Azure skills.
+- **Azure Basics**: Launches the Microsoft Azure Fundamentals learning path with three modules about basic cloud concepts and Azure services.
+- **Certifications**: Shows information about available Azure-related Microsoft Certifications.
+- **Azure Q&A**: Lets you explore technical questions and answers about Azure.
+
+Select any of these links to explore their content.
+
+## Learn more about Azure AI
+
+The **Learn more about Azure AI** section showcases a few of the most popular learning modules focused on Azure AI. The content you see here will vary, based on popularity and new releases. Select any module to open and begin it. As noted earlier, be sure to sign in with your Microsoft account if you want to save your progress.
++
+## Next steps
+
+- Learn more about the [Azure mobile app](overview.md).
+- Download the Azure mobile app for free from the [Apple App Store](https://aka.ms/azureapp/ios/doc), [Google Play](https://aka.ms/azureapp/android/doc), or [Amazon App Store](https://aka.ms/azureapp/amazon/doc).
azure-portal How To Create Azure Support Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/supportability/how-to-create-azure-support-request.md
Title: How to create an Azure support request
description: Customers who need assistance can use the Azure portal to find self-service solutions and to create and manage support requests. Previously updated : 12/18/2023 Last updated : 02/26/2024 # Create an Azure support request
To create a support request without a subscription, for example a Microsoft Entr
> [!IMPORTANT]
> If a support request requires investigation into multiple subscriptions, you must have the required access for each subscription involved ([Owner](../../role-based-access-control/built-in-roles.md#owner), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), [Reader](../../role-based-access-control/built-in-roles.md#reader), [Support Request Contributor](../../role-based-access-control/built-in-roles.md#support-request-contributor), or a custom role with the [Microsoft.Support/supportTickets/read](../../role-based-access-control/resource-provider-operations.md#microsoftsupport) permission).
+If a support request requires confirmation or release of account-specific information, changes to account information, or operations such as subscription ownership transfer or cancellation, you must be an [account billing administrator](/azure/cost-management-billing/manage/add-change-subscription-administrator#determine-account-billing-administrator) for the subscription.
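If you're unsure which of these roles you hold on a subscription, one way to check is to list your role assignments at subscription scope. The following is a minimal sketch with the `azure-mgmt-authorization` Python package; the subscription and principal object IDs are placeholders.

```python
# pip install azure-mgmt-authorization azure-identity
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

# Placeholders; substitute your subscription ID and your user's object ID.
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
PRINCIPAL_OBJECT_ID = "11111111-1111-1111-1111-111111111111"

client = AuthorizationManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# List role assignments for one principal across the subscription.
assignments = client.role_assignments.list_for_subscription(
    filter=f"principalId eq '{PRINCIPAL_OBJECT_ID}'"
)

for assignment in assignments:
    # role_definition_id ends with the role definition GUID; resolve it to a name.
    role_def = client.role_definitions.get(
        scope=f"/subscriptions/{SUBSCRIPTION_ID}",
        role_definition_id=assignment.role_definition_id.split("/")[-1],
    )
    print(role_def.role_name, "->", assignment.scope)
```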
+
### Open a support request from the global header

To start a support request from anywhere in the Azure portal:
Next, we collect more details about the problem. Providing thorough and detailed
> [!TIP]
> To add a support plan that requires an **Access ID** and **Contract ID**, select **Help + Support** > **Support plans** > **Link support benefits**. When a limited support plan expires or has no support incidents remaining, it won't be available to select.
-
1. Provide your preferred contact method, your availability, and your preferred support language. Confirm that your country/region setting is accurate, as this setting affects the business hours in which a support engineer can work on your request.
1. Complete the **Contact info** section so that we know how to reach you.
azure-resource-manager Bicep Functions Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-files.md
Namespace: [sys](bicep-functions.md#namespaces-for-functions).
### Remarks
-Use this function when you have YAML content or minified YAML content that is stored in a separate file. Rather than duplicating the YAML content in your Bicep file, load the content with this function. You can load a part of a YAML file by specifying a path filer. The file is loaded when the Bicep file is compiled to the YAML template. You can't include variables in the file path because they haven't been resolved when compiling to the template. During deployment, the YAML template contains the contents of the file as a hard-coded string.
+Use this function when you have YAML content or minified YAML content that is stored in a separate file. Rather than duplicating the YAML content in your Bicep file, load the content with this function. You can load a part of a YAML file by specifying a path filter. The file is loaded when the Bicep file is compiled to the JSON template. You can't include variables in the file path because they haven't been resolved when compiling to the template. During deployment, the JSON template contains the contents of the file as a hard-coded string.
In VS Code, the properties of the loaded object are available through IntelliSense. For example, you can create a file with values to share across many Bicep files. An example is shown in this article.
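As a quick illustration of the remarks above, here's a minimal Bicep sketch. It assumes a sibling file named `config.yaml`; the `$.storage` path-filter syntax mirrors the `loadJsonContent` variant and should be verified against your Bicep version.

```bicep
// Minimal sketch: config.yaml is assumed to sit next to this Bicep file, e.g.
//   storage:
//     sku: Standard_LRS
//     location: eastus

// Load the whole YAML file as an object at compile time.
var config = loadYamlContent('config.yaml')

// Load only part of the file with a path filter (syntax assumed;
// verify against your Bicep version).
var storageSettings = loadYamlContent('config.yaml', '$.storage')

output sku string = storageSettings.sku
output location string = config.storage.location
```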
azure-vmware Concepts Api Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-api-management.md
Title: Concepts - API Management
description: Learn how API Management protects APIs running on Azure VMware Solution virtual machines (VMs) Previously updated : 11/28/2023 Last updated : 2/26/2024
An external deployment publishes APIs consumed by external users that use a publ
The external deployment diagram shows the entire process and the actors involved (shown at the top). The actors are:

-- **Administrator(s):** Represents the admin or DevOps team, which manages Azure VMware Solution through the Azure portal and automation mechanisms like PowerShell or Azure DevOps.
+- **Administrator(s):** Represents the admin or DevOps team, which manages the Azure VMware Solution through the Azure portal and automation mechanisms like PowerShell or Azure DevOps.
- **Users:** Represents the exposed APIs' consumers and represents both users and services consuming the APIs.
-The traffic flow goes through the API Management instance, which abstracts the backend services, plugged into the Hub virtual network. The ExpressRoute Gateway routes the traffic to the ExpressRoute Global Reach channel and reaches an NSX Load Balancer distributing the incoming traffic to the different backend service instances.
+The traffic flow goes through the API Management instance, which abstracts the backend services, plugged into the Hub virtual network. The ExpressRoute Gateway routes the traffic to the ExpressRoute Global Reach connection and reaches an NSX Load Balancer distributing the incoming traffic to the different backend service instances.
API Management has an Azure Public API, and activating Azure DDoS Protection Service is recommended.
API Management has an Azure Public API, and activating Azure DDoS Protection Ser
An internal deployment publishes APIs consumed by internal users or systems. DevOps teams and API developers use the same management tools and developer portal as in the external deployment.
-Use [Azure Application Gateway](../api-management/api-management-howto-integrate-internal-vnet-appgateway.md) for internal deployments to create a public and secure endpoint for the API. The gateway's capabilities are used to create a hybrid deployment that enables different scenarios.
+Use [Azure Application Gateway](../api-management/api-management-howto-integrate-internal-vnet-appgateway.md) for internal deployments to create a public and secure endpoint for the API. The gateway's capabilities are used to create a hybrid deployment that enables different scenarios.
* Use the same API Management resource for consumption by both internal and external consumers.
azure-vmware Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/introduction.md
Title: Introduction
description: Learn the features and benefits of Azure VMware Solution to deploy and manage VMware-based workloads in Azure. Previously updated : 12/18/2023 Last updated : 2/26/2024
The following table provides a detailed list of roles and responsibilities betwe
| **Role** | **Task/details** |
| -- | - |
-| Microsoft - Azure VMware Solution | Physical infrastructure<ul><li>Azure regions</li><li>Azure availability zones</li><li>Express Route/Global Reach</ul></li>Compute/Network/Storage<ul><li>Rack and power Bare Metal hosts</li><li>Rack and power network equipment</ul></li>Software defined Data Center (SDDC) deploy/lifecycle<ul><li>VMware ESXi deploy, patch, and upgrade</li><li>VMware vCenter Servers deploy, patch, and upgrade</li><li>VMware NSX-T Data Centers deploy, patch, and upgrade</li><li>VMware vSAN deploy, patch, and upgrade</ul></li>SDDC Networking - VMware NSX-T Data Center provider config<ul><li>Microsoft Edge node/cluster, VMware NSX-T Data Center host preparation</li><li>Provider Tier-0 and Tenant Tier-1 Gateway</li><li>Connectivity from Tier-0 (using BGP) to Azure Network via Express Route</ul></li>SDDC Compute - VMware vCenter Server provider config<ul><li>Create default cluster</li><li>Configure virtual networking for vMotion, Management, vSAN, and others</ul></li>SDDC backup/restore<ul><li>Back up and restore VMware vCenter Server</li><li>Back up and restore VMware NSX-T Data Center NSX-T Manager</ul></li>SDDC health monitoring and corrective actions, for example: replace failed hosts</br><br>(optional) VMware HCX deploys with fully configured compute profile on cloud side as add-on</br><br>(optional) SRM deploys, upgrade, and scale up/down</br><br>Support - SDDC platforms and VMware HCX |
+| Microsoft - Azure VMware Solution | Physical infrastructure<ul><li>Azure regions</li><li>Azure availability zones</li><li>Express Route/Global Reach</ul></li>Compute/Network/Storage<ul><li>Rack and power Bare Metal hosts</li><li>Rack and power network equipment</ul></li>Software defined Data Center (SDDC) deploy/lifecycle<ul><li>VMware ESXi deploy, patch, and upgrade</li><li>VMware vCenter Servers deploy, patch, and upgrade</li><li>VMware NSX-T Data Centers deploy, patch, and upgrade</li><li>VMware vSAN deploy, patch, and upgrade</ul></li>SDDC Networking - VMware NSX-T Data Center provider config<ul><li>Microsoft Edge node/cluster, VMware NSX-T Data Center host preparation</li><li>Provider Tier-0 and Tenant Tier-1 Gateway</li><li>Connectivity from Tier-0 (using BGP) to Azure Network via ExpressRoute</ul></li>SDDC Compute - VMware vCenter Server provider config<ul><li>Create default cluster</li><li>Configure virtual networking for vMotion, Management, vSAN, and others</ul></li>SDDC backup/restore<ul><li>Back up and restore VMware vCenter Server</li><li>Back up and restore VMware NSX-T Data Center NSX-T Manager</ul></li>SDDC health monitoring and corrective actions, for example: replace failed hosts</br><br>(optional) VMware HCX deploys with fully configured compute profile on cloud side as add-on</br><br>(optional) VMware SRM deploys, upgrade, and scale up/down</br><br>Support - SDDC platforms and VMware HCX |
| Customer | Request Azure VMware Solution host quote with Microsoft<br>Plan and create a request for SDDCs on Azure portal with:<ul><li>Host count</li><li>Management network range</li><li>Other information</ul></li>Configure SDDC network and security (VMware NSX-T Data Center)<ul><li>Network segments to host applications</li><li>More Tier -1 routers</li><li>Firewall</li><li>VMware NSX-T Data Center LB</li><li>IPsec VPN</li><li>NAT</li><li>Public IP addresses</li><li>Distributed firewall/gateway firewall</li><li>Network extension using VMware HCX or VMware NSX-T Data Center</li><li>AD/LDAP config for RBAC</ul></li>Configure SDDC - VMware vCenter Server<ul><li>AD/LDAP config for RBAC</li><li>Deploy and lifecycle management of Virtual Machines (VMs) and application<ul><li>Install operating systems</li><li>Patch operating systems</li><li>Install antivirus software</li><li>Install backup software</li><li>Install configuration management software</li><li>Install application components</li><li>VM networking using VMware NSX-T Data Center segments</ul></li><li>Migrate Virtual Machines (VMs)<ul><li>VMware HCX configuration</li><li>Live vMotion</li><li>Cold migration</li><li>Content library sync</ul></li></ul></li>Configure SDDC - vSAN<ul><li>Define and maintain vSAN VM policies</li><li>Add hosts to maintain adequate 'slack space'</ul></li>Configure VMware HCX<ul><li>Download and deploy HCA connector OVA in on-premises</li><li>Pairing on-premises VMware HCX connector</li><li>Configure the network profile, compute profile, and service mesh</li><li>Configure VMware HCX network extension/MON</li><li>Upgrade/updates</ul></li>Network configuration to connect to on-premises, virtual network, or internet</br><br>Add or delete hosts requests to cluster from Portal</br><br>Deploy/lifecycle management of partner (third party) solutions |
-| Partner ecosystem | Support for their product/solution. For reference, the following are some of the supported Azure VMware Solution partner solution/product:<ul><li>BCDR - SRM, JetStream, Zerto, and others</li><li>Backup - Veeam, Commvault, Rubrik, and others</li><li>VDI - Horizon/Citrix</li><li>Multitenancy - VMware Cloud director service (CDs), VMware Cloud director availability(VCDA)</li><li>Security solutions - BitDefender, TrendMicro, Checkpoint</li><li>Other VMware products - vRA, vROps, AVI |
+| Partner ecosystem | Support for their product/solution. For reference, the following are some of the supported Azure VMware Solution partner solution/product:<ul><li>BCDR - VMware SRM, JetStream, Zerto, and others</li><li>Backup - Veeam, Commvault, Rubrik, and others</li><li>VDI - Horizon/Citrix</li><li>Multitenancy for enterprises - VMware Cloud Director Service (CDS), VMware vCloud Director Availability (VCDA)</li><li>Security solutions - BitDefender, TrendMicro, Checkpoint</li><li>Other VMware products - Aria Suite, NSX Load Balancer |
## Next steps
The next step is to learn key [private cloud and cluster concepts](concepts-priv
<!-- LINKS - external --> [concepts-private-clouds-clusters]: ./concepts-private-clouds-clusters.md-
backup Quick Backup Vm Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-backup-vm-portal.md
Title: Quickstart - Back up a VM with the Azure portal
-description: In this Quickstart, learn how to create a Recovery Services vault, enable protection on an Azure VM, and backup the VM, with the Azure portal.
Previously updated : 02/27/2023
+ Title: Quickstart - Back up a VM with the Azure portal by using Azure Backup
+description: In this Quickstart, learn how to create a Recovery Services vault, enable protection on an Azure VM, and back up the VM, with the Azure portal.
Last updated : 02/26/2024 ms.devlang: azurecli-+
-# Back up a virtual machine in Azure
+# Quickstart: Back up a virtual machine in Azure
-Azure backups can be created through the Azure portal. This method provides a browser-based user interface to create and configure Azure backups and all related resources. You can protect your data by taking backups at regular intervals. Azure Backup creates recovery points that can be stored in geo-redundant recovery vaults. This article details how to back up a virtual machine (VM) with the Azure portal.
+This quickstart describes how to enable backup on an existing Azure VM by using the Azure portal. If you need to create a VM, you can [create a VM with the Azure portal](../virtual-machines/windows/quick-create-portal.md).
-This quickstart enables backup on an existing Azure VM. If you need to create a VM, you can [create a VM with the Azure portal](../virtual-machines/windows/quick-create-portal.md).
+Azure backups can be created through the Azure portal. This method provides a browser-based user interface to create and configure Azure backups and all related resources. You can protect your data by taking backups at regular intervals. Azure Backup creates recovery points that can be stored in geo-redundant recovery vaults. This article details how to back up a virtual machine (VM) with the Azure portal.
## Sign in to Azure
To apply a backup policy to your Azure VMs, follow these steps:
![Screenshot showing the Backup button.](./media/backup-azure-arm-vms-prepare/backup-button.png)
-1. Select **Azure Virtual machines** as the **Datasource type** and select the vault you have created. Then select **Continue**.
+1. On the **Start: Configure Backup** blade, select **Azure Virtual machines** as the **Datasource type** and select the vault you have created. Then select **Continue**.
- ![Screenshot showing Backup and Backup Goal panes.](./media/backup-azure-arm-vms-prepare/select-backup-goal-1.png)
+ ![Screenshot showing Backup and Backup Goal blades.](./media/backup-azure-arm-vms-prepare/select-backup-goal-1.png)
1. Assign a Backup policy.
Create a simple scheduled daily backup to a Recovery Services vault.
![Screenshot showing to add virtual machines.](./media/backup-azure-arm-vms-prepare/add-virtual-machines.png)
-1. The **Select virtual machines** pane will open. Select the VMs you want to back up using the policy. Then select **OK**.
+1. The **Select virtual machines** blade will open. Select the VMs you want to back up using the policy. Then select **OK**.
* The selected VMs are validated.
* You can only select VMs in the same region as the vault.
* VMs can only be backed up in a single vault.
- ![Screenshot showing the Select virtual machines pane.](./media/backup-azure-arm-vms-prepare/select-vms-to-backup.png)
+ ![Screenshot showing the Select virtual machines blade.](./media/backup-azure-arm-vms-prepare/select-vms-to-backup.png)
>[!NOTE]
> All the VMs in the same region and subscription as that of the vault are available to configure backup. When configuring backup, you can browse to the virtual machine name and its resource group, even though you don't have the required permission on those VMs. If your VM is in a soft deleted state, it won't be visible in this list. If you need to re-protect the VM, wait for the soft delete period to expire or undelete the VM from the soft deleted list. For more information, see [the soft delete for VMs article](soft-delete-virtual-machines.md#soft-delete-for-vms-using-azure-portal).
The snapshot phase guarantees the availability of a recovery point stored along
![Screenshot showing the backup job status.](./media/backup-azure-arm-vms-prepare/backup-job-status.png)
-There are two **Sub Tasks** running at the backend, one for front-end backup job that can be checked from the **Backup Job** details pane as given below:
+There are two **Sub Tasks** running at the backend. The front-end backup job can be checked from the **Backup Job** details blade, as shown below:
![Screenshot showing backup job status sub-tasks.](./media/backup-azure-arm-vms-prepare/backup-job-phase.png)
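The same job status can also be read programmatically. The following is a minimal sketch with the `azure-mgmt-recoveryservicesbackup` Python package, assuming Reader access to the vault; the subscription, vault, and resource group values are placeholders, and the import path can differ across package versions.

```python
# pip install azure-mgmt-recoveryservicesbackup azure-identity
from azure.identity import DefaultAzureCredential
# Import path varies by package version; older releases expose the client
# at the package root instead of the activestamp submodule.
from azure.mgmt.recoveryservicesbackup.activestamp import RecoveryServicesBackupClient

# Placeholders; substitute your subscription, vault, and resource group.
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
VAULT_NAME = "myRecoveryServicesVault"
RESOURCE_GROUP = "myResourceGroup"

client = RecoveryServicesBackupClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Print recent backup jobs and their status, mirroring the portal's
# Backup jobs view used in this quickstart.
for job in client.backup_jobs.list(VAULT_NAME, RESOURCE_GROUP):
    props = job.properties
    print(props.operation, props.status, props.start_time)
```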
If you're going to continue on to a Backup tutorial that explains how to restore
4. In the **Type the name of the Backup item** dialog, enter your VM name, such as *myVM*. Select **Stop Backup**.
- Once the VM backup has been stopped and recovery points removed, you can delete the resource group. If you used an existing VM, you may wish to leave the resource group and VM in place.
+ Once the VM backup has been stopped and recovery points removed, you can delete the resource group. If you used an existing VM, you may want to leave the resource group and VM in place.
5. In the menu on the left, select **Resource groups**.
6. From the list, choose your resource group. If you used the sample VM quickstart commands, the resource group is named *myResourceGroup*.
backup Tutorial Backup Vm At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-backup-vm-at-scale.md
Title: Tutorial - Back up multiple Azure virtual machines
+ Title: Tutorial - Back up multiple Azure virtual machines by using Azure Backup
description: In this tutorial, learn how to create a Recovery Services vault, define a backup policy, and simultaneously back up multiple virtual machines. Previously updated : 02/27/2023 Last updated : 02/26/2024 -+
-# Use Azure portal to back up multiple virtual machines
+# Tutorial: Back up multiple virtual machines by using the Azure portal
-When you back up data in Azure, you store that data in an Azure resource called a Recovery Services vault. The Recovery Services vault resource is available from the Settings menu of most Azure services. The benefit of having the Recovery Services vault integrated into the Settings menu of most Azure services is the ease of backing up data. However, working individually with each database or virtual machine in your business is tedious. What if you want to back up the data for all virtual machines in one department, or in one location? It's easy to back up multiple virtual machines by creating a backup policy and applying that policy to the desired virtual machines. This tutorial explains how to:
+This tutorial describes how to back up multiple virtual machines by using the Azure portal.
-> [!div class="checklist"]
->
-> * Create a Recovery Services vault
-> * Define a backup policy
-> * Apply the backup policy to protect multiple virtual machines
-> * Trigger an on-demand backup job for the protected virtual machines
+When you back up data in Azure, you store that data in an Azure resource called a Recovery Services vault. The Recovery Services vault resource is available from the Settings menu of most Azure services. The benefit of having the Recovery Services vault integrated into the Settings menu of most Azure services is the ease of backing up data. However, working individually with each database or virtual machine in your business is tedious. What if you want to back up the data for all virtual machines in one department, or in one location? It's easy to back up multiple virtual machines by creating a backup policy and applying that policy to the desired virtual machines.
## Sign in to the Azure portal
To set a backup policy to your Azure VMs, follow these steps:
![Screenshot showing the Backup button.](./media/backup-azure-arm-vms-prepare/backup-button.png)
-1. Select **Azure Virtual machines** as the **Datasource type** and select the vault you have created. Then click **Continue**.
+1. On the **Start: Configure Backup** blade, select **Azure Virtual machines** as the **Datasource type**, and then select the vault you created. Then select **Continue**.
- ![Screenshot showing the Backup and Backup Goal panes.](./media/backup-azure-arm-vms-prepare/select-backup-goal-1.png)
+ ![Screenshot showing the Backup and Backup Goal blades.](./media/backup-azure-arm-vms-prepare/select-backup-goal-1.png)
1. Assign a Backup policy.
To set a backup policy to your Azure VMs, follow these steps:
![Screenshot showing to add virtual machines.](./media/backup-azure-arm-vms-prepare/add-virtual-machines.png)
-1. The **Select virtual machines** pane will open. Select the VMs you want to back up using the policy. Then select **OK**.
+1. The **Select virtual machines** blade will open. Select the VMs you want to back up using the policy. Then select **OK**.
* The selected VMs are validated.
* You can only select VMs in the same region as the vault.
* VMs can only be backed up in a single vault.
- ![Screenshot showing the Select virtual machines pane.](./media/backup-azure-arm-vms-prepare/select-vms-to-backup.png)
+ ![Screenshot showing the Select virtual machines blade.](./media/backup-azure-arm-vms-prepare/select-vms-to-backup.png)
>[!NOTE]
> All the VMs in the same region and subscription as that of the vault are available to configure backup. When configuring backup, you can browse to the virtual machine name and its resource group, even though you don't have the required permission on those VMs. If your VM is in a soft deleted state, it won't be visible in this list. If you need to re-protect the VM, wait for the soft delete period to expire or undelete the VM from the soft deleted list. For more information, see [the soft delete for VMs article](soft-delete-virtual-machines.md#soft-delete-for-vms-using-azure-portal).
connectors Connect Common Data Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connect-common-data-service.md
Title: Connect to Microsoft Dataverse, previously Common Data Service
-description: Create and manage rows from Microsoft Dataverse, previously Common Data Service, in workflows using Azure Logic Apps.
+ Title: Connect to Microsoft Dataverse from your workflow
+description: Create and manage rows in Microsoft Dataverse from your workflow in Azure Logic Apps.
ms.suite: integration
Last updated 12/14/2023
-# Connect to Microsoft Dataverse (previously Common Data Service) from workflows in Azure Logic Apps
+# Connect to Microsoft Dataverse from workflows in Azure Logic Apps
[!INCLUDE [logic-apps-sku-consumption-standard](../../includes/logic-apps-sku-consumption-standard.md)] > [!IMPORTANT] > > On August 30, 2022, the connector operations for Common Data Service 2.0, also known as Microsoft Dataverse
-> (Legacy), migrate to the current Microsoft Dataverse connector. Legacy operations bear the "legacy" label,
+> (Legacy), migrated to the current Microsoft Dataverse connector. Legacy operations bear the "legacy" label,
> while current operations bear the "preview" label. You can use the current Dataverse connector in any > existing or new logic app workflows. For backward compatibility, existing workflows continue to work > with the legacy Dataverse connector. However, make sure to review these workflows, and update them promptly. >
-> Starting October 2023, the legacy version becomes unavailable for new workflows. Existing workflows continue
-> to work, but you *must* use the current Dataverse connector for new workflows. At that time, a timeline for the shutdown date for the legacy actions and triggers will be announced.
->
-> Since November 2020, the Common Data Service connector was renamed Microsoft Dataverse (Legacy).
+> Since October 2023, the legacy version has been unavailable for new workflows. Existing workflows continue
+> to work, but you *must* use the current Dataverse connector operations for new workflows. A timeline for
+> the shutdown date for the legacy actions and triggers will be announced. For more information, see
+> [Microsoft Dataverse (legacy) connector for Azure Logic Apps will be deprecated and replaced with another connector](/power-platform/important-changes-coming#microsoft-dataverse-legacy-connector-for-azure-logic-apps-will-be-deprecated-and-replaced-with-another-connector).
-To create and run automated workflows that manage rows in your [Microsoft Dataverse database, formerly Common Data Service database](/powerapps/maker/common-data-service/data-platform-intro), you can use [Azure Logic Apps](../logic-apps/logic-apps-overview.md) and the [Microsoft Dataverse connector](/connectors/commondataserviceforapps/). These workflows can create rows, update rows, and perform other operations. You can also get information from your Dataverse database and make the output available for other actions to use in your workflows. For example, when a row is added, updated, or deleted in your Dataverse database, you can send an email by using the Office 365 Outlook connector.
+To create and run automated workflows that create and manage rows in your [Microsoft Dataverse database](/powerapps/maker/common-data-service/data-platform-intro), you can use [Azure Logic Apps](../logic-apps/logic-apps-overview.md) and the [Microsoft Dataverse connector](/connectors/commondataserviceforapps/). These workflows can create rows, update rows, and perform other operations. You can also get information from your Dataverse database and make the output available for other actions to use in your workflows. For example, when a row is added, updated, or deleted in your Dataverse database, you can send an email by using the Office 365 Outlook connector.
This guide shows how to create a workflow that creates a task row whenever a new lead row is created.
To stop unwanted notifications, delete the `callbackregistrations` entity from t
### Duplicate 'callbackregistrations' entity
-In Standard logic app workflows, under specific conditions such as instance reallocation or application restart, the Microsoft Dataverse trigger duplicately runs, which results in creating a duplicate `callbackregistrations` entity in your Dataverse database. If you edit a Standard workflow that starts with a Dataverse trigger, check whether this `callbackregistrations` entity is duplicated. If the duplicate exists, manually delete the duplicate `callbackregistrations` entity.
+In Standard logic app workflows, under specific conditions such as instance reallocation or application restart, the Microsoft Dataverse trigger starts a duplicate run, which creates a duplicate `callbackregistrations` entity in your Dataverse database. If you edit a Standard workflow that starts with a Dataverse trigger, check whether this `callbackregistrations` entity is duplicated. If the duplicate exists, manually delete the duplicate `callbackregistrations` entity.
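One way to inspect and clean up the duplicate is through the Dataverse Web API. The following is a minimal sketch, assuming you already have a valid bearer token for the environment; the environment URL is a placeholder, and the column names printed here are assumptions to verify against the raw payload in your own `callbackregistrations` data.

```python
# pip install requests
import requests

# Placeholders; substitute your environment URL and a valid bearer token.
ENVIRONMENT_URL = "https://yourorg.crm.dynamics.com"
ACCESS_TOKEN = "<bearer-token>"

headers = {
    "Authorization": f"Bearer {ACCESS_TOKEN}",
    "OData-MaxVersion": "4.0",
    "OData-Version": "4.0",
    "Accept": "application/json",
}

base = f"{ENVIRONMENT_URL}/api/data/v9.2/callbackregistrations"

# List callback registrations so duplicates can be identified manually.
for reg in requests.get(base, headers=headers).json()["value"]:
    # Column names are assumptions; inspect the raw payload in your environment.
    print(reg["callbackregistrationid"], reg.get("entityname"), reg.get("url"))

# After confirming which registration is the duplicate, delete it by ID.
duplicate_id = "00000000-0000-0000-0000-000000000000"  # placeholder
requests.delete(f"{base}({duplicate_id})", headers=headers)
```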
## Next steps
cost-management-billing Cancel Azure Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/cancel-azure-subscription.md
Previously updated : 06/19/2023 Last updated : 02/26/2024
Although not required, Microsoft *recommends* that you take the following action
* Consider migrating your data. See [Move resources to new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md).
* Delete all resources and all resource groups.
  * To later manually delete a subscription, you must first delete all resources associated with the subscription.
- * You may be unable to delete all resources, depending on your configuration. For example, if you have immutable blobs. For more information, see [Immutable Blobs](../../storage/blobs/immutable-storage-overview.md#scenarios-with-version-level-scope).
+ * You might be unable to delete all resources, depending on your configuration. For example, if you have immutable blobs. For more information, see [Immutable Blobs](../../storage/blobs/immutable-storage-overview.md#scenarios-with-version-level-scope).
* If you have any custom roles that reference this subscription in `AssignableScopes`, you should update those custom roles to remove the subscription. If you try to update a custom role after you cancel a subscription, you might get an error. For more information, see [Troubleshoot problems with custom roles](../../role-based-access-control/troubleshooting.md#custom-roles) and [Azure custom roles](../../role-based-access-control/custom-roles.md).

> [!NOTE]
An account administrator without the service administrator or subscription owner
## Cancel a subscription in the Azure portal
-Depending on your environment, the cancel subscription experience allows you to cancel a subscription, turn off autorenewal for an associated support plan, and stop all Azure subscription resources.
+Depending on your environment, the cancel subscription experience allows you to:
-If you have a support plan associated with the subscription, it's shown in the cancellation process. Otherwise, it isn't shown.
+- Cancel a subscription
+- Turn off autorenewal for an associated support plan
+- Stop all Azure subscription resources
+
+If you have a support plan associated with the subscription, it appears in the cancellation process. Otherwise, it isn't shown.
If you have any Azure resources associated with the subscription, they're shown in the cancellation process. Otherwise, they're not shown.
+Depending on your environment, you can cancel an Azure support plan by following these steps:
+
+1. Navigate to the **Cost Management + Billing** overview page.
+1. Select the support plan that you want to cancel from the **Your subscriptions** page to open the **Support plan** page.
+1. Select **Cancel** to cancel your support plan.
+ A subscription owner can navigate in the Azure portal to **Subscriptions** and then start at step 3.

1. In the Azure portal, navigate to **Cost Management + Billing**.
-1. In the left menu, select either **Subscriptions** or **Azure subscriptions**, depending on which is available to you. If you have a support plan, it's shown in the list.
+1. In the left menu, select either **Subscriptions** or **Azure subscriptions**, depending on which is available to you. If you have a support plan, it appears in the list.
1. Select the subscription that you want to cancel.
1. At the top of the page, select **Cancel**.
1. If you have any resources associated with the subscription, they're shown on the page. At the top of the page, select **Cancel subscription**.
A subscription owner can navigate in the Azure portal to **Subscriptions** and t
1. Select **Cancel subscription**.

    :::image type="content" source="./media/cancel-azure-subscription/cancel-subscription-final.png" alt-text="Screenshot showing the Cancel subscription window options." lightbox="./media/cancel-azure-subscription/cancel-subscription-final.png" :::
-After the subscription is canceled, a notification shows that the cancellation is complete. If you have any outstanding charges that haven't been invoiced yet, their estimated charges are shown. If you have any outstanding credits that aren't yet applied to your invoice, the estimated credits that apply to your invoice are shown. For more information about data update frequency, see [Cost and usage data updates and retention](../costs/understand-cost-mgt-data.md#cost-and-usage-data-updates-and-retention).
+After the subscription is canceled, a notification shows that the cancellation is complete. If you have any outstanding charges that aren't invoiced yet, their estimated charges are shown. If you have any outstanding credits that aren't yet applied to your invoice, the estimated credits that apply to your invoice are shown. For more information about data update frequency, see [Cost and usage data updates and retention](../costs/understand-cost-mgt-data.md#cost-and-usage-data-updates-and-retention).
:::image type="content" source="./media/cancel-azure-subscription/cancel-complete.png" alt-text="Screenshot showing that subscription cancellation status." lightbox="./media/cancel-azure-subscription/cancel-complete.png" :::
After your subscription is canceled, Microsoft waits 30 - 90 days before permane
The **Delete subscription** option isn't available until at least 15 minutes after you cancel your subscription.
-Depending on your subscription type, you may not be able to delete a subscription immediately.
+Depending on your subscription type, you might not be able to delete a subscription immediately.
1. Select your subscription on the [Subscriptions](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade) page in the Azure portal.
1. Select the subscription that you want to delete.
1. At the top of the subscription page, select **Delete**.
- :::image type="content" source="./media/cancel-azure-subscription/delete-option.png" alt-text="Screenshot showing the Delete option." lightbox="./media/cancel-azure-subscription/delete-option.png" :::
+ :::image type="content" source="./media/cancel-azure-subscription/delete-option.png" alt-text="Screenshot showing the option to Delete." lightbox="./media/cancel-azure-subscription/delete-option.png" :::
1. If necessary, type the name of the subscription and then select **Delete**.
    - When all required conditions are met, you can delete the subscription.

    :::image type="content" source="./media/cancel-azure-subscription/type-name-delete.png" alt-text="Screenshot showing where you type the subscription name and Delete." lightbox="./media/cancel-azure-subscription/type-name-delete.png" :::
- - If you have required deletion conditions that aren't met, the following page is shown.
+ - If required deletion conditions aren't met, the following page appears.
:::image type="content" source="./media/cancel-azure-subscription/manual-delete-subscription.png" alt-text="Screenshot showing the Delete your subscription page." lightbox="./media/cancel-azure-subscription/manual-delete-subscription.png" :::

- If **Delete resources** doesn't display a green check mark, then you have resources that must be deleted in order to delete the subscription. You can select **View resources** to navigate to the Resources page to manually delete the resources. After resource deletion, you might need to wait 10 minutes for resource deletion status to update in order to delete the subscription.
- If **Manual deletion date** doesn't display a green check mark, you must wait the required period before you can delete the subscription.
Depending on your subscription type, you may not be able to delete a subscriptio
## Prevent unwanted charges
-To prevent unwanted charges on a subscription, you can go to **Resources** menu for the subscription and select the resources that you want to delete. If don't want to have any charges for the subscription, select all of the subscription resources and then **Delete** them. The subscription essentially becomes an empty container with no charges.
+To prevent unwanted charges on a subscription, you can go to **Resources** menu for the subscription and select the resources that you want to delete. If you don't want to have any charges for the subscription, select all of the subscription resources and then **Delete** them. The subscription essentially becomes an empty container with no charges.
:::image type="content" source="./media/cancel-azure-subscription/delete-resources.png" alt-text="Screenshot showing delete resources." lightbox="./media/cancel-azure-subscription/delete-resources.png" :::
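If you prefer to empty the subscription programmatically rather than through the portal, the following is a minimal, deliberately destructive sketch using the `azure-mgmt-resource` Python package; the subscription ID is a placeholder, and the same caveats about undeletable resources (such as immutable blobs) apply.

```python
# pip install azure-mgmt-resource azure-identity
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# Placeholder; substitute the subscription you intend to empty.
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"

client = ResourceManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# WARNING: deleting a resource group deletes every resource inside it.
# Confirm the subscription ID before running.
pollers = [
    client.resource_groups.begin_delete(group.name)
    for group in client.resource_groups.list()
]

for poller in pollers:
    poller.wait()  # block until each deletion completes
```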
If you have a support plan, you might continue to get charged for it. To delete
## Reactivate a subscription
-If you cancel your subscription with Pay-As-You-Go rates accidentally, you can [reactivate it in the Azure portal](subscription-disabled.md).
+If you cancel your subscription with pay-as-you-go rates accidentally, you can [reactivate it in the Azure portal](subscription-disabled.md).
-If your subscription isn't a subscription with Pay-As-You-Go rates, contact support within 90 days of cancellation to reactivate your subscription.
+If your subscription isn't a subscription with pay-as-you-go rates, contact support within 90 days of cancellation to reactivate your subscription.
## Why don't I see the Cancel Subscription option on the Azure portal?
-You may not have the permissions required to cancel a subscription. See [Who can cancel a subscription?](#who-can-cancel-a-subscription) for a description of who can cancel various types of subscriptions.
+You don't have the permissions required to cancel a subscription. See [Who can cancel a subscription](#who-can-cancel-a-subscription) for a description of who can cancel various types of subscriptions.
## How do I delete my Azure Account?
cost-management-billing Mca Setup Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mca-setup-account.md
Previously updated : 02/13/2024 Last updated : 02/26/2024
Here's an example screenshot showing the Get started experience. We cover each o
If you don't have the enterprise administrator role for the enterprise agreement or the billing account owner role for the Microsoft Customer Agreement, then use the following information to get the access that you need to complete setup.
+>[!NOTE]
+> The Global Administrator role is above the Billing Account Administrator role. Global Administrators in a Microsoft Entra ID tenant can add or remove themselves as Billing Account Administrators on a Microsoft Customer Agreement at any time. For more information about elevating access, see [Elevate access to manage billing accounts](elevate-access-global-admin.md).
+
#### If you're not an enterprise administrator on the enrollment

You see the following page in the Azure portal if you have a billing account owner role but you're not an enterprise administrator.
cost-management-billing Understand Mca Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/understand-mca-roles.md
Previously updated : 10/10/2022 Last updated : 02/26/2024
A billing account is created when you sign up to use Azure. You use your billing
The following tables show what role you need to complete tasks in the context of the billing account.
+>[!NOTE]
+> The Global Administrator role is above the Billing Account Administrator role. Global Administrators in a Microsoft Entra ID tenant can add or remove themselves as Billing Account Administrators on a Microsoft Customer Agreement at any time. For more information about elevating access, see [Elevate access to manage billing accounts](elevate-access-global-admin.md).
++

### Manage billing account permissions and properties

|Task|Billing account owner|Billing account contributor|Billing account reader|
|---|---|---|---|
data-factory Data Flow Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-troubleshoot-guide.md
Last updated 01/05/2024
-# Troubleshoot mapping data flows in Azure Data Factory
+# Troubleshoot mapping data flows in Azure Data Factory (ADF)
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
Specific scenarios that can cause internal server errors are shown as follows.
Successful execution of data flows depends on many factors, including the compute size/type, numbers of source/sinks to process, the partition specification, transformations involved, sizes of datasets, data skewness, and so on.<br/>
- For more guidance, see [Integration Runtime performance](concepts-integration-runtime-performance.md).
+ For more information, see [Integration Runtime performance](concepts-integration-runtime-performance.md).
#### Scenario 2: Using debug sessions with parallel activities
- When triggering a run using the data flow debug session with constructs like ForEach in the pipeline, multiple parallel runs can be submitted to the same cluster. This situation can lead to cluster failure problems while running because of resource issues, such as being out of memory.<br/>
+ When you trigger a run using the data flow debug session with constructs like ForEach in the pipeline, multiple parallel runs can be submitted to the same cluster. This situation can lead to cluster failure problems while running because of resource issues, such as being out of memory.<br/>
To submit a run with the appropriate integration runtime configuration defined in the pipeline activity after publishing the changes, select **Trigger Now** or **Debug** > **Use Activity Runtime**.
Specific scenarios that can cause internal server errors are shown as follows.
Transient issues with microservices involved in the execution can cause the run to fail.<br/>
- Configuring retries in the pipeline activity can resolve the problems caused by transient issues. For more guidance, see [Activity Policy](concepts-pipelines-activities.md#activity-json).
+ Configuring retries in the pipeline activity can resolve the problems caused by transient issues. For more information, see [Activity Policy](concepts-pipelines-activities.md#activity-json).
## Common error codes and messages
This section lists common error codes and messages reported by mapping data flow
- **Message**: Partition type has to be roundRobin.
- **Cause**: Invalid partition types are provided.
-- **Recommendation**: Please update AdobeIntegration settings to make your partition type is RoundRobin.
+- **Recommendation**: Update AdobeIntegration settings to make sure your partition type is RoundRobin.
### Error code: DF-AdobeIntegration-InvalidPrivacyRegulation -- **Message**: Only privacy regulation that's currently supported is 'GDPR'.
+- **Message**: Only currently supported privacy regulation is 'GDPR'.
- **Cause**: Invalid privacy configurations are provided.
-- **Recommendation**: Please update AdobeIntegration settings while only privacy 'GDPR' is supported.
+- **Recommendation**: Update AdobeIntegration settings; only the 'GDPR' privacy regulation is supported.
### Error code: DF-AdobeIntegration-KeyColumnMissed
This section lists common error codes and messages reported by mapping data flow
### Error code: DF-AzureDataExplorer-InvalidOperation

- **Message**: Blob operation is not supported on older storage accounts. Creating a new storage account may fix the issue.
- **Cause**: Operation is not supported.
- **Recommendation**: Change **Update method** configuration as delete, update and upsert are not supported in Azure Data Explorer.
+- **Cause**: Operation isn't supported.
+- **Recommendation**: Change the **Update method** configuration, because delete, update, and upsert are not supported in Azure Data Explorer.
### Error code: DF-AzureDataExplorer-ReadTimeout
This section lists common error codes and messages reported by mapping data flow
### Error code: DF-Blob-FunctionNotSupport -- **Message**: This endpoint does not support BlobStorageEvents, SoftDelete or AutomaticSnapshot. Please disable these account features if you would like to use this endpoint.-- **Cause**: Azure Blob Storage events, soft delete or automatic snapshot is not supported in data flows if the Azure Blob Storage linked service is created with service principal or managed identity authentication.-- **Recommendation**: Disable Azure Blob Storage events, soft delete or automatic snapshot feature on the Azure Blob account, or use key authentication to create the linked service.
+- **Message**: This endpoint does not support BlobStorageEvents, SoftDelete, or AutomaticSnapshot. Disable these account features if you would like to use this endpoint.
+- **Cause**: Azure Blob Storage events, soft delete, or automatic snapshot isn't supported in data flows if the Azure Blob Storage linked service is created with service principal or managed identity authentication.
+- **Recommendation**: Disable Azure Blob Storage events, soft delete, or automatic snapshot feature on the Azure Blob account, or use key authentication to create the linked service.
### Error code: DF-Blob-InvalidAccountConfiguration
This section lists common error codes and messages reported by mapping data flow
- **Message**: Cloud type is invalid. - **Cause**: An invalid cloud type is provided.-- **Recommendation**: Please check the cloud type in your related Azure Blob linked service.
+- **Recommendation**: Check the cloud type in your related Azure Blob linked service.
### Error code: DF-Cosmos-DeleteDataFailed - **Message**: Failed to delete data from Azure Cosmos DB after 3 times retry. - **Cause**: The throughput on the Azure Cosmos DB collection is small and leads to meeting throttling or row data not existing in Azure Cosmos DB.-- **Recommendation**: Please take the following actions to solve this problem:
+- **Recommendation**: To solve this problem, take the following actions:
- If the error is 404, make sure that the related row data exists in the Azure Cosmos DB collection.
- - If the error is throttling, please increase the Azure Cosmos DB collection throughput or set it to the automatic scale.
- - If the error is request timed out, please set 'Batch size' in the Azure Cosmos DB sink to smaller value, for example 1000.
+ - If the error is throttling, increase the Azure Cosmos DB collection throughput or set it to the automatic scale.
+ - If the error is request timed out, set 'Batch size' in the Azure Cosmos DB sink to smaller value, for example 1000.
### Error code: DF-Cosmos-FailToResetThroughput -- **Message**: Azure Cosmos DB throughput scale operation cannot be performed because another scale operation is in progress, please retry after sometime.
+- **Message**: Azure Cosmos DB throughput scale operation cannot be performed because another scale operation is in progress, retry after some time.
- **Cause**: The throughput scale operation of the Azure Cosmos DB can't be performed because another scale operation is in progress.
- **Recommendation**: Sign in to the Azure Cosmos DB account, and manually change the container throughput to autoscale, or add a custom activity after mapping data flows to reset the throughput.
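For the manual throughput change the recommendation describes, the following is a minimal sketch with the `azure-cosmos` Python SDK (4.x), assuming the database and container already exist; the account endpoint, key, and names are placeholders.

```python
# pip install azure-cosmos
from azure.cosmos import CosmosClient, ThroughputProperties

# Placeholders; substitute your account endpoint, key, database, and container.
client = CosmosClient("https://myaccount.documents.azure.com:443/", "<account-key>")
container = client.get_database_client("mydb").get_container_client("mycontainer")

# Switch the container to autoscale (max 4,000 RU/s here) so a fixed
# throughput setting no longer conflicts with in-progress scale operations.
container.replace_throughput(ThroughputProperties(auto_scale_max_throughput=4000))
```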
This section lists common error codes and messages reported by mapping data flow
### Error code: DF-Cosmos-InvalidAccountKey -- **Message**: The input authorization token can't serve the request. Please check that the expected payload is built as per the protocol, and check the key being used.
+- **Message**: The input authorization token can't serve the request. Check that the expected payload is built as per the protocol, and check the key being used.
- **Cause**: There's not enough permission to read/write Azure Cosmos DB data.
-- **Recommendation**: Please use the read-write key to access Azure Cosmos DB.
+- **Recommendation**: Use the read-write key to access Azure Cosmos DB.
### Error code: DF-Cosmos-InvalidConnectionMode
This section lists common error codes and messages reported by mapping data flow
### Error code: DF-Cosmos-ShortTypeNotSupport - **Message**: Short data type is not supported in Azure Cosmos DB.-- **Cause**: The short data type is not supported in the Azure Cosmos DB instance.
+- **Cause**: The short data type isn't supported in the Azure Cosmos DB instance.
- **Recommendation**: Add a derived column transformation to convert related columns from short to integer before using them in the Azure Cosmos DB sink transformation. ### Error code: DF-CSVWriter-InvalidQuoteSetting
This section lists common error codes and messages reported by mapping data flow
- **Message**: Column delimiter is required for parse. - **Cause**: The column delimiter is missed.-- **Recommendation**: In your CSV settings, confirm that you have the column delimiter which is required for parse.
+- **Recommendation**: In your CSV settings, confirm that you have the column delimiter, which is required for parsing.
### Error code: DF-Delimited-InvalidConfiguration - **Message**: Either one of empty lines or custom header should be specified. - **Cause**: An invalid delimited configuration is provided.-- **Recommendation**: Please update the CSV settings to specify one of empty lines or the custom header.
+- **Recommendation**: Update the CSV settings to specify either empty lines or a custom header.
### Error code: DF-DELTA-InvalidConfiguration
This section lists common error codes and messages reported by mapping data flow
### Error code: DF-DELTA-KeyColumnMissed -- **Message**: Key column(s) should be specified for non-insertable operations.-- **Cause**: Key column(s) are missed for non-insertable operations.-- **Recommendation**: Specify key column(s) on delta sink to have non-insertable operations.
+- **Message**: Key columns should be specified for non-insertable operations.
+- **Cause**: Key columns are missed for non-insertable operations.
+- **Recommendation**: To have non-insertable operations, specify key columns on delta sink.
### Error code: DF-Dynamics-InvalidNullAlternateKeyColumn - **Message**: Any column value of alternate Key can't be NULL. - **Cause**: Your alternate key column value can't be null. -- **Recommendation**: Confirm that your column value of your alternate key is not NULL.
+- **Recommendation**: Confirm that your column value of your alternate key isn't NULL.
### Error code: DF-Dynamics-TooMuchAlternateKey -- **Cause**: One lookup field with more than one alternate key reference is not valid.
+- **Cause**: One lookup field with more than one alternate key reference isn't valid.
- **Recommendation**: Check your schema mapping and confirm that each lookup field has a single alternate key. ### Error code: DF-Excel-DifferentSchemaNotSupport - **Message**: Read excel files with different schema is not supported now. - **Cause**: Reading excel files with different schemas is not supported now.-- **Recommendation**: Please apply one of following options to solve this problem:
+- **Recommendation**: Apply one of the following options to solve this problem:
- Use **ForEach** + **data flow** activity to read Excel worksheets one by one. - Update each worksheet schema to have the same columns manually before reading data.
This section lists common error codes and messages reported by mapping data flow
- **Message**: Data type is not supported. - **Cause**: The data type is not supported.-- **Recommendation**: Please change the data type to **'string'** for related input data columns.
+- **Recommendation**: Change the data type to **'string'** for related input data columns.
### Error code: DF-Excel-InvalidFile
This section lists common error codes and messages reported by mapping data flow
- **Message**: Excel sheet name and index cannot exist at the same time. - **Cause**: The Excel sheet name and index are provided at the same time.-- **Recommendation**: Check the parameter value and specify the sheet name or index to read the Excel data.
+- **Recommendation**: To read the Excel data, check the parameter value and specify the sheet name or index.
### Error code: DF-Excel-WorksheetConfigMissed - **Message**: Excel sheet name or index is required. - **Cause**: An invalid Excel worksheet configuration is provided.-- **Recommendation**: Check the parameter value and specify the sheet name or index to read the Excel data.
+- **Recommendation**: To read the Excel data, check the parameter value and specify the sheet name or index.
### Error code: DF-Excel-WorksheetNotExist - **Message**: Excel worksheet does not exist. - **Cause**: An invalid worksheet name or index is provided.-- **Recommendation**: Check the parameter value and specify a valid sheet name or index to read the Excel data.
+- **Recommendation**: To read the Excel data, check the parameter value and specify a valid sheet name or index.
### Error code: DF-Executor-AcquireStorageMemoryFailed -- **Message**: Transferring unroll memory to storage memory failed. Cluster ran out of memory during execution. Please retry using an integration runtime with more cores and/or memory optimized compute type.
+- **Message**: Transferring unroll memory to storage memory failed. Cluster ran out of memory during execution. Retry using an integration runtime with more cores and/or memory optimized compute type.
- **Cause**: The cluster has insufficient memory.-- **Recommendation**: Please use an integration runtime with more cores and/or the memory optimized compute type.
+- **Recommendation**: Use an integration runtime with more cores and/or the memory optimized compute type.
### Error code: DF-Executor-BlockCountExceedsLimitError
This section lists common error codes and messages reported by mapping data flow
### Error code: DF-Executor-BroadcastFailure -- **Message**: Dataflow execution failed during broadcast exchange. Potential causes include misconfigured connections at sources or a broadcast join timeout error. To ensure the sources are configured correctly, please test the connection or run a source data preview in a Dataflow debug session. To avoid the broadcast join timeout, you can choose the 'Off' broadcast option in the Join/Exists/Lookup transformations. If you intend to use the broadcast option to improve performance then make sure broadcast streams can produce data within 60 secs for debug runs and within 300 secs for job runs. If problem persists, contact customer support.
+- **Message**: Dataflow execution failed during broadcast exchange. Potential causes include misconfigured connections at sources or a broadcast join timeout error. To ensure the sources are configured correctly, test the connection or run a source data preview in a Dataflow debug session. To avoid the broadcast join timeout, you can choose the 'Off' broadcast option in the Join/Exists/Lookup transformations. If you intend to use the broadcast option to improve performance then make sure broadcast streams can produce data within 60 secs for debug runs and within 300 secs for job runs. If problem persists, contact customer support.
- **Cause**: 1. The source connection/configuration error could lead to a broadcast failure in join/exists/lookup transformations.
This section lists common error codes and messages reported by mapping data flow
### Error code: DF-Executor-BroadcastTimeout -- **Message**: Broadcast join timeout error, make sure broadcast stream produces data within 60 secs in debug runs and 300 secs in job runs
+- **Message**: Broadcast join timeout error. Make sure broadcast stream produces data within 60 secs in debug runs and 300 secs in job runs.
- **Cause**: Broadcast has a default timeout of 60 seconds on debug runs and 300 seconds on job runs. The stream chosen for broadcast is too large to produce data within this limit. - **Recommendation**: Check the **Optimize** tab on your data flow transformations for join, exists, and lookup. The default option for broadcast is **Auto**. If **Auto** is set, or if you're manually setting the left or right side to broadcast under **Fixed**, you can either set a larger Azure integration runtime (IR) configuration or turn off broadcast. For the best performance in data flows, we recommend that you allow Spark to broadcast by using **Auto** and use a memory-optimized Azure IR.
- If you're running the data flow in a debug test execution from a debug pipeline run, you might run into this condition more frequently. That's because Azure Data Factory throttles the broadcast timeout to 60 seconds to maintain a faster debugging experience. You can extend the timeout to the 300-second timeout of a triggered run. To do so, you can use the **Debug** > **Use Activity Runtime** option to use the Azure IR defined in your Execute Data Flow pipeline activity.
+ If you're running the data flow in a debug test execution from a debug pipeline run, you might run into this condition more frequently. The error occurs more frequently in debug runs because Azure Data Factory throttles the broadcast timeout to 60 seconds to maintain a faster debugging experience. You can extend the timeout to the 300-second timeout of a triggered run. To do so, use the **Debug** > **Use Activity Runtime** option to use the Azure IR defined in your Execute Data Flow pipeline activity.
-- **Message**: Broadcast join timeout error, you can choose 'Off' of broadcast option in join/exists/lookup transformation to avoid this issue. If you intend to broadcast join option to improve performance, then make sure broadcast stream can produce data within 60 secs in debug runs and 300 secs in job runs.
+- **Message**: Broadcast join timeout error. You can choose the 'Off' broadcast option in the join/exists/lookup transformation to avoid this issue. If you intend to use the broadcast join option to improve performance, then make sure the broadcast stream can produce data within 60 secs in debug runs and 300 secs in job runs.
- **Cause**: Broadcast has a default timeout of 60 seconds in debug runs and 300 seconds in job runs. On the broadcast join, the stream chosen for broadcast is too large to produce data within this limit. If a broadcast join isn't used, the default broadcast by dataflow can reach the same limit. - **Recommendation**: Turn off the broadcast option (as in the sketch below) or avoid broadcasting large data streams for which the processing can take more than 60 seconds. Choose a smaller stream to broadcast. Large Azure SQL Data Warehouse tables and source files aren't typically good choices. In the absence of a broadcast join, use a larger cluster if this error occurs.
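As an illustrative sketch only: the stream and column names are hypothetical, and `'off'` is assumed here to be the script value that corresponds to the **Off** option in the UI (confirm against the script generated from the **Optimize** tab). Broadcast can be disabled directly on a join in the data flow script:

```
factOrders, dimCustomers join(customerKey == customerId,
    joinType: 'inner',
    broadcast: 'off') ~> JoinNoBroadcast
```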
This section lists common error codes and messages reported by mapping data flow
### Error code: DF-Executor-DriverError -- **Message**: INT96 is legacy timestamp type, which is not supported by ADF Dataflow. Please consider upgrading the column type to the latest types.
+- **Message**: INT96 is a legacy timestamp type, which is not supported by ADF Dataflow. Consider upgrading the column type to the latest types.
- **Cause**: Driver error.-- **Recommendation**: INT96 is a legacy timestamp type that's not supported by Azure Data Factory data flow. Consider upgrading the column type to the latest type.
+- **Recommendation**: INT96 is a legacy timestamp type that Azure Data Factory data flow doesn't support. Consider upgrading the column type to the latest type.
### Error code: DF-Executor-FieldNotExist
This section lists common error codes and messages reported by mapping data flow
### Error code: DF-Executor-illegalArgument -- **Message**: Please make sure that the access key in your Linked Service is correct
+- **Message**: Make sure that the access key in your Linked Service is correct
- **Cause**: The account name or access key is incorrect. - **Recommendation**: Ensure that the account name or access key specified in your linked service is correct. ### Error code: DF-Executor-IncorrectLinkedServiceConfiguration - **Message**: Possible causes are,
- - The linked service is incorrectly configured as type 'Azure Blob Storage' instead of 'Azure DataLake Storage Gen2' and it has 'Hierarchical namespace' enabled. Please create a new linked service of type 'Azure DataLake Storage Gen2' for the storage account in question.
- - Certain scenarios with any combinations of 'Clear the folder', non-default 'File name option', 'Key' partitioning may fail with a Blob linked service on a 'Hierarchical namespace' enabled storage account. You can disable these dataflow settings (if enabled) and try again in case you do not want to create a new Gen2 linked service.
+ - The linked service is incorrectly configured as type 'Azure Blob Storage' instead of 'Azure DataLake Storage Gen2' and it has 'Hierarchical namespace' enabled. Create a new linked service of type 'Azure DataLake Storage Gen2' for the storage account in question.
+ - Certain scenarios with any combination of 'Clear the folder', a nondefault 'File name option', and 'Key' partitioning may fail with a Blob linked service on a 'Hierarchical namespace' enabled storage account. If you don't want to create a new Gen2 linked service, you can disable these data flow settings (if enabled) and try again.
- **Cause**: Delete operation on the Azure Data Lake Storage Gen2 account failed since its linked service is incorrectly configured as Azure Blob Storage.-- **Recommendation**: Create a new Azure Data Lake Storage Gen2 linked service for the storage account. If that's not feasible, some known scenarios like **Clear the folder**, non-default **File name option**, **Key** partitioning in any combinations may fail with an Azure Blob Storage linked service on a hierarchical namespace enabled storage account. You can disable these data flow settings if you enabled them and try again.
+- **Recommendation**: Create a new Azure Data Lake Storage Gen2 linked service for the storage account. If that's not feasible, some known scenarios like **Clear the folder**, nondefault **File name option**, **Key** partitioning in any combinations may fail with an Azure Blob Storage linked service on a hierarchical namespace enabled storage account. You can disable these data flow settings if you enabled them and try again.
### Error code: DF-Executor-InternalServerError -- **Message**: Failed to execute dataflow with internal server error, please retry later. If issue persists, please contact Microsoft support for further assistance
+- **Message**: Failed to execute dataflow with internal server error. Retry later. If the issue persists, contact Microsoft support for further assistance.
- **Cause**: The data flow execution is failed because of the system error. - **Recommendation**: To solve this issue, refer to [Internal server errors](#internal-server-errors).
This section lists common error codes and messages reported by mapping data flow
### Error code: DF-Executor-InvalidOutputColumns -- **Message**: The result has 0 output columns. Please ensure at least one column is mapped.
+- **Message**: The result has 0 output columns. Ensure at least one column is mapped.
- **Cause**: No column is mapped.-- **Recommendation**: Please check the sink schema to ensure that at least one column is mapped.
+- **Recommendation**: Check the sink schema to ensure that at least one column is mapped.
### Error code: DF-Executor-InvalidPartitionFileNames -- **Message**: File names cannot have empty value(s) while file name option is set as per partition.
+- **Message**: File names cannot have empty values while file name option is set as per partition.
- **Cause**: Invalid partition file names are provided.-- **Recommendation**: Please check your sink settings to have the right value of file names.
+- **Recommendation**: Check your sink settings to make sure the file names have valid values.
### Error code: DF-Executor-InvalidPath -- **Message**: Path does not resolve to any file(s). Please make sure the file/folder exists and is not hidden.
+- **Message**: Path does not resolve to any files. Make sure the file/folder exists and is not hidden.
- **Cause**: An invalid file/folder path is provided, which can't be found or accessed.-- **Recommendation**: Please check the file/folder path, and make sure it is existed and can be accessed in your storage.
+- **Recommendation**: Check the file/folder path, and make sure it exists and can be accessed in your storage.
### Error code: DF-Executor-InvalidStageConfiguration
This section lists common error codes and messages reported by mapping data flow
- **Message**: Explicitly broadcasted dataset using left/right option should be small enough to fit in node's memory. You can choose broadcast option 'Off' in join/exists/lookup transformation to avoid this issue or use an integration runtime with higher memory. - **Cause**: The size of the broadcasted table far exceeds the limits of the node memory.-- **Recommendation**: The broadcast left/right option should be used only for smaller dataset size which can fit into node's memory, so make sure to configure the node size appropriately or turn off the broadcast option.
+- **Recommendation**: The broadcast left/right option should only be used for smaller datasets that can fit into the node's memory. Make sure to configure the node size appropriately or turn off the broadcast option.
### Error code: DF-Executor-OutOfMemorySparkError
This section lists common error codes and messages reported by mapping data flow
- **Message**: Job aborted due to stage failure. Remote RPC client disassociated. Likely due to containers exceeding thresholds, or network issues. - **Cause**: Data flow activity run failed because of transient network issues or one node in spark cluster ran out of memory. - **Recommendation**: Use the following options to solve this problem:
- - Option-1: Use a powerful cluster (both drive and executor nodes have enough memory to handle big data) to run data flow pipelines with setting "Compute type" to "Memory optimized". The settings are shown in the picture below.
+ - Option-1: Use a powerful cluster (both driver and executor nodes have enough memory to handle big data) to run data flow pipelines with "Compute type" set to "Memory optimized". The settings are shown in the following picture.
:::image type="content" source="media/data-flow-troubleshoot-guide/configure-compute-type.png" alt-text="Screenshot that shows the configuration of Compute type.":::
This section lists common error codes and messages reported by mapping data flow
### Error code: DF-Executor-StoreIsNotDefined -- **Message**: The store configuration is not defined. This error is potentially caused by invalid parameter assignment in the pipeline.
+- **Message**: The store configuration isn't defined. This error can be caused by invalid parameter assignment in the pipeline.
- **Cause**: Invalid store configuration is provided. - **Recommendation**: Check the parameter value assignment in the pipeline. A parameter expression may contain invalid characters. ### Error code: DF-Executor-StringValueNotInQuotes - **Message**: Column operands are not allowed in literal expressions.-- **Cause**: The value for a string parameter or an expected string value is not enclosed in single quotes.-- **Recommendation**: Near the mentioned line number(s) in the data flow script, ensure the value for a string parameter or an expected string value is enclosed in single quotes.
+- **Cause**: The value for a string parameter or an expected string value isn't enclosed in single quotes.
+- **Recommendation**: Near the mentioned line numbers in the data flow script, ensure the value for a string parameter or an expected string value is enclosed in single quotes, as in the sketch below.
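As a minimal sketch (the parameter name and value are hypothetical), a default value written as `region as string (WestUS)` triggers this error because `WestUS` is parsed as a column operand; enclosing the value in single quotes resolves it:

```
parameters{
    region as string ('WestUS')
}
```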
### Error code: DF-Executor-SystemImplicitCartesian -- **Message**: Implicit cartesian product for INNER join is not supported, use CROSS JOIN instead. Columns used in join should create a unique key for rows.
+- **Message**: Implicit cartesian product for INNER join isn't supported. Use CROSS JOIN instead. Columns used in join should create a unique key for rows.
- **Cause**: Implicit cartesian products for INNER joins between logical plans aren't supported. If you're using columns in the join, create a unique key. - **Recommendation**: For non-equality based joins, use CROSS JOIN, as in the sketch below.
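A hedged sketch in data flow script (stream and column names are hypothetical, and the property list assumes the standard join script shape): a non-equality condition can be expressed as a cross join whose expression filters the cartesian product:

```
ordersStream, ratesStream join(orderDate >= effectiveDate,
    joinType: 'cross',
    broadcast: 'auto') ~> NonEqualityJoin
```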
This section lists common error codes and messages reported by mapping data flow
1. For source: In Storage Explorer, grant the managed identity/service principal at least **Execute** permission for ALL upstream folders and the file system, along with **Read** permission for the files to copy. Alternatively, in Access control (IAM), grant the managed identity/service principal at least the **Storage Blob Data Reader** role. 2. For sink: In Storage Explorer, grant the managed identity/service principal at least **Execute** permission for ALL upstream folders and the file system, along with **Write** permission for the sink folder. Alternatively, in Access control (IAM), grant the managed identity/service principal at least the **Storage Blob Data Contributor** role. <br>
- Also please ensure that the network firewall settings in the storage account are configured correctly, as turning on firewall rules for your storage account blocks incoming requests for data by default, unless the requests originate from a service operating within an Azure Virtual Network (VNet) or from allowed public IP addresses.
+ Also ensure that the network firewall settings in the storage account are configured correctly, as turning on firewall rules for your storage account blocks incoming requests for data by default, unless the requests originate from a service operating within an Azure Virtual Network (VNet) or from allowed public IP addresses.
### Error code: DF-Executor-UnreachableStorageAccount
This section lists common error codes and messages reported by mapping data flow
- **Message**: Data flow script cannot be parsed. - **Cause**: The data flow script has parsing errors.-- **Recommendation**: Check for errors (example: missing symbol(s), unwanted symbol(s)) near mentioned line number(s) in the data flow script.
+- **Recommendation**: Check for errors (for example, missing or unwanted symbols) near the mentioned line numbers in the data flow script.
### Error code: DF-Executor-IncorrectQuery
This section lists common error codes and messages reported by mapping data flow
- **Recommendation**: Check the syntactical correctness of the given query. Ensure that the query string isn't quoted when it's referenced as a pipeline parameter. ### Error code: DF-Executor-ParameterParseError-- **Message**: Parameter stream has parsing errors. Not honoring the datatype of parameter(s) could be one of the causes.-- **Cause**: Parsing errors in given parameter(s).-- **Recommendation**: Check the parameter(s) having errors, ensure the usage of appropriate function(s), and honor the datatype(s) given.
+- **Message**: Parameter stream has parsing errors. Not honoring the datatype of parameters could be one of the causes.
+- **Cause**: Parsing errors in given parameters.
+- **Recommendation**: Check the parameters that have errors, ensure the usage of appropriate functions, and honor the datatypes given, as in the sketch below.
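For illustration (the parameter names, defaults, and downstream expression are all hypothetical), declare each data flow parameter with the datatype you intend to pass, and use functions that match that type:

```
parameters{
    rowLimit as integer (10),
    cutoffDate as string ('2020-01-01')
}
source1 filter(orderDate >= toDate($cutoffDate)) ~> FilterByDate
```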
### Error code: DF-File-InvalidSparkFolder - **Message**: Failed to read footer for file. - **Cause**: Folder *_spark_metadata* is created by the structured streaming job.-- **Recommendation**: Delete *_spark_metadata* folder if it exists. For more information, refer to this [article](https://forums.databricks.com/questions/12447/javaioioexception-could-not-read-footer-for-file-f.html).
+- **Recommendation**: Delete *_spark_metadata* folder if it exists.
### Error code: DF-GEN2-InvalidAccountConfiguration - **Message**: Either one of account key or SAS token or tenant/spnId/spnCredential/spnCredentialType or userAuth or miServiceUri/miServiceToken should be specified.-- **Cause**: An invalid credential is provided in the ADLS Gen2 linked service.
+- **Cause**: An invalid credential is provided in the Azure Data Lake Storage (ADLS) Gen2 linked service.
- **Recommendation**: Update the ADLS Gen2 linked service to have the right credential configuration. ### Error code: DF-GEN2-InvalidAuthConfiguration
This section lists common error codes and messages reported by mapping data flow
- **Message**: Service principal credential type is invalid. - **Cause**: The service principal credential type is invalid.-- **Recommendation**: Please update the ADLS Gen2 linked service to set the right service principal credential type.
+- **Recommendation**: Update the ADLS Gen2 linked service to set the right service principal credential type.
### Error code: DF-GEN2-InvalidStorageAccountConfiguration
This section lists common error codes and messages reported by mapping data flow
- **Message**: Blob storage staging properties should be specified. - **Cause**: An invalid staging configuration is provided in the Hive.-- **Recommendation**: Please check if the account key, account name and container are set properly in the related Blob linked service, which is used as staging.
+- **Recommendation**: Check if the account key, account name and container are set properly in the related Blob linked service, which is used as staging.
### Error code: DF-Hive-InvalidDataType -- **Message**: Unsupported Column(s).-- **Cause**: Unsupported Column(s) are provided.
+- **Message**: Unsupported Columns.
+- **Cause**: Unsupported Columns are provided.
- **Recommendation**: Update the column of input data to match the data type supported by the Hive. ### Error code: DF-Hive-InvalidGen2StagingConfiguration - **Message**: ADLS Gen2 storage staging only support service principal key credential. - **Cause**: An invalid staging configuration is provided in the Hive.-- **Recommendation**: Please update the related ADLS Gen2 linked service that is used as staging. Currently, only the service principal key credential is supported.
+- **Recommendation**: Update the related ADLS Gen2 linked service that is used as staging. Currently, only the service principal key credential is supported.
- **Message**: ADLS Gen2 storage staging properties should be specified. Either one of key or tenant/spnId/spn Credential/spnCredentialType or miServiceUri/miServiceToken is required. - **Cause**: An invalid staging configuration is provided in the Hive.
This section lists common error codes and messages reported by mapping data flow
### Error code: DF-JSON-WrongDocumentForm -- **Message**: Malformed records are detected in schema inference. Parse Mode: FAILFAST. It could be because of a wrong selection in document form to parse json file(s). Please try a different 'Document form' (Single document/Document per line/Array of documents) on the json source.-- **Cause**: Wrong document form is selected to parse JSON file(s).
+- **Message**: Malformed records are detected in schema inference. Parse Mode: FAILFAST. It could be because of a wrong selection in document form to parse json files. Please try a different 'Document form' (Single document/Document per line/Array of documents) on the json source.
+- **Cause**: Wrong document form is selected to parse JSON files.
- **Recommendation**: Try different **Document form** (**Single document**/**Document per line**/**Array of documents**) in JSON settings. Most cases of parsing errors are caused by wrong configuration. ### Error code: DF-MICROSOFT365-CONSENTPENDING
This section lists common error codes and messages reported by mapping data flow
### Error code: DF-MSSQL-ErrorRowsFound - **Cause**: Error/Invalid rows were found while writing to Azure SQL Database sink.-- **Recommendation**: Please find the error rows in the rejected data storage location if configured.
+- **Recommendation**: Find the error rows in the rejected data storage location if configured.
### Error code: DF-MSSQL-ExportErrorRowFailed
This section lists common error codes and messages reported by mapping data flow
### Error code: DF-MSSQL-InvalidAuthConfiguration -- **Message**: Only one of the three auth methods (Key, ServicePrincipal and MI) can be specified.
+- **Message**: Only one of the three auth methods (Key, ServicePrincipal, and MI) can be specified.
- **Cause**: An invalid authentication method is provided in the MSSQL linked service. - **Recommendation**: You can only specify one of the three authentication methods (Key, ServicePrincipal and MI) in the related MSSQL linked service.
This section lists common error codes and messages reported by mapping data flow
- **Message**: Either one of user/pwd or tenant/spnId/spnKey or miServiceUri/miServiceToken should be specified. - **Cause**: An invalid credential is provided in the MSSQL linked service.-- **Recommendation**: Please update the related MSSQL linked service with right credentials, and one of **user/pwd** or **tenant/spnId/spnKey** or **miServiceUri/miServiceToken** should be specified.
+- **Recommendation**: Update the related MSSQL linked service with the right credentials; one of **user/pwd** or **tenant/spnId/spnKey** or **miServiceUri/miServiceToken** should be specified.
### Error code: DF-MSSQL-InvalidDataType -- **Message**: Unsupported field(s).-- **Cause**: Unsupported field(s) are provided.
+- **Message**: Unsupported fields.
+- **Cause**: Unsupported fields are provided.
- **Recommendation**: Modify the input data column to match the data type supported by MSSQL. ### Error code: DF-MSSQL-InvalidFirewallSetting - **Message**: The TCP/IP connection to the host has failed. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall. - **Cause**: The SQL database's firewall setting blocks the data flow to access.-- **Recommendation**: Please check the firewall setting for your SQL database, and allow Azure services and resources to access this server.
+- **Recommendation**: Check the firewall setting for your SQL database, and allow Azure services and resources to access this server.
### Error code: DF-MSSQL-InvalidCertificate
This section lists common error codes and messages reported by mapping data flow
- **Message**: Failed to execute dataflow with invalid run mode. - **Cause**: Possible causes are: 1. Only the read mode `fullLoad` can be specified when `enableCdc` is false.
- 1. Only the run mode `incrementalLoad` or `fullAndIncrementalLoad` can be specified when `enableCdc` is true.
+ 1. Only the run modes `incrementalLoad` or `fullAndIncrementalLoad` can be specified when `enableCdc` is true.
1. Only `fullLoad`, `incrementalLoad` or `fullAndIncrementalLoad` can be specified. - **Recommendation**: Reconfigure the activity and run again. If the issue persists, contact Microsoft support for further assistance.
This section lists common error codes and messages reported by mapping data flow
- **Cause**: Most likely, you have hidden column settings in your SAP table. When you use SAP mapping data flow to read data from the SAP server, it returns the whole schema (columns, including hidden ones), but the returned data doesn't contain the related values. The resulting data misalignment leads to parse value or wrong data value issues. - **Recommendation**: There are two recommendations for this issue:
- 1. Remove hidden settings from the related column(s) through SAP GUI.
- 2. If you want to keep existed SAP settings unchanged, use hidden feature (manually add DSL property `enableProjection:true` in script) in SAP mapping data flow to filter the hidden column(s) and continue to read data.
+ 1. Remove hidden settings from the related columns through the SAP user interface.
+ 2. If you want to keep the existing SAP settings unchanged, use the hidden feature (manually add the DSL property `enableProjection:true` in the script, as shown in the sketch after this list) in SAP mapping data flow to filter the hidden columns and continue to read data.
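A rough sketch of option 2, assuming a hypothetical source name; the other properties are shown only to indicate where the manually added `enableProjection: true` property sits in the source definition in the data flow script:

```
source(allowSchemaDrift: true,
    validateSchema: false,
    enableProjection: true) ~> SapCdcSource
```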
### Error code: DF-SAPODP-ObjectInvalid -- **Cause**: The object name is not found or not released.
+- **Cause**: The object name isn't found or not released.
- **Recommendation**: Check the object name and make sure it is valid and already released. ### Error code: DF-SAPODP-ObjectNameMissed
This section lists common error codes and messages reported by mapping data flow
### Error code: DF-SAPODP-StageAuthInvalid - **Message**: Invalid client secret provided-- **Cause**: The service principal certificate credential of the staging storage is not correct.
+- **Cause**: The service principal certificate credential of the staging storage isn't correct.
- **Recommendation**: Check whether the test connection is successful in your staging storage linked service, and confirm the authentication setting of your staging storage is correct. - **Message**: Failed to authenticate the request to storage-- **Cause**: The key of your staging storage is not correct.
+- **Cause**: The key of your staging storage isn't correct.
- **Recommendation**: Check whether the test connection is successful in your staging storage linked service, and confirm the key of your staging Azure Blob Storage is correct. ### Error code: DF-SAPODP-StageBlobPropertyInvalid
This section lists common error codes and messages reported by mapping data flow
### Error code: DF-SAPODP-StageContainerInvalid - **Message**: Unable to create Azure Blob container-- **Cause**: The input container is not existed in your staging storage.
+- **Cause**: The input container doesn't exist in your staging storage.
- **Recommendation**: Input a valid container name for the staging storage. Select another existing container name, or manually create a new container with your input name. ### Error code: DF-SAPODP-StageContainerMissed - **Message**: Container or file system is required for staging storage.-- **Cause**: Your container or file system is not specified for staging storage.
+- **Cause**: Your container or file system isn't specified for staging storage.
- **Recommendation**: Specify the container or file system for the staging storage. ### Error code: DF-SAPODP-StageFolderPathMissed - **Message**: Folder path is required for staging storage-- **Cause**: Your staging storage folder path is not specified.
+- **Cause**: Your staging storage folder path isn't specified.
- **Recommendation**: Specify the staging storage folder. ### Error code: DF-SAPODP-StageGen2PropertyInvalid
This section lists common error codes and messages reported by mapping data flow
### Error code: DF-SAPODP-StageStorageServicePrincipalCertNotSupport -- **Message**: Read from staging storage failed: Staging storage auth not support service principal cert.-- **Cause**: The service principal certificate credential is not supported for the staging storage.
+- **Message**: Read from staging storage failed: Staging storage auth doesn't support service principal cert.
+- **Cause**: The service principal certificate credential isn't supported for the staging storage.
- **Recommendation**: Change your authentication to not use the service principal certificate credential. ### Error code: DF-SAPODP-StageStorageTypeInvalid
This section lists common error codes and messages reported by mapping data flow
| Cause analysis | Recommendation |
| :--- | :--- |
| Your SAP server is shut down. | Check that your SAP server is started. |
- | Your IP or port of the self-hosted integration runtime is not in SAP network security rule. | Check your IP or port of self-hosted integration runtime is in your SAP network security rule. |
+ | Your IP or port of the self-hosted integration runtime isn't in SAP network security rule. | Check that the IP or port of the self-hosted integration runtime is in your SAP network security rule. |
| Self-hosted integration runtime proxy issue. | Check your self-hosted integration runtime proxy. |
| Incorrect parameters input (for example, wrong SAP server name or IP). | Check your input parameters: SAP server name, IP. |
### Error code: DF-SAPODP-DependencyNotFound - **Message**: Could not load file or assembly 'sapnco, Version=* - **Cause**: You didn't download and install the SAP .NET connector on the machine of the self-hosted integration runtime.-- **Recommendation**: Follow [Set up a self-hosted integration runtime](sap-change-data-capture-shir-preparation.md) to set up the self-hosted integration runtime for the SAP CDC connector.
+- **Recommendation**: Follow [Set up a self-hosted integration runtime](sap-change-data-capture-shir-preparation.md) to set up the self-hosted integration runtime for the SAP Change Data Capture (CDC) connector.
### Error code: DF-SAPODP-NoAuthForFunctionModule - **Message**: No REF authorization for function module RODPS_REPL_CONTEXT_GET_LIST
This section lists common error codes and messages reported by mapping data flow
### Error code: DF-SAPODP-SourceNotSupportDelta - **Message**: Source .* does not support deltas-- **Cause**: The ODP context/ODP name you specified does not support delta.
+- **Cause**: The ODP context/ODP name you specified doesn't support delta.
- **Recommendation**: Enable delta mode for your SAP source, or select **Full on every run** as run mode in data flow. For more information, see this [document](https://userapps.support.sap.com/sap/support/knowledge/2752413). ### Error code: DF-SAPODP-SAPI-LIMITATION - **Message**: Error Number 518, Source .* not found, not released or not authorized-- **cause**: Check if your context is SAPI. If so, in SAPI context, you can only extract the relevant extractors for SAP tables.
+- **Cause**: Check if your context is the SAP Service API (SAPI). If so, in SAPI context, you can only extract the relevant extractors for SAP tables.
- **Recommendations**: Refer to this [document](https://userapps.support.sap.com/sap/support/knowledge/2646092). ### Error code: DF-SAPODP-KeyColumnsNotSpecified -- **Message**: Key column(s) should be specified for non-insertable operations (updates/deletes)
+- **Message**: Key columns should be specified for non-insertable operations (updates/deletes)
- **Cause**: This error occurs when you skip selecting **Key Columns** in the sink table. - **Recommendations**: Allowing the delete, upsert, and update options requires a key column to be specified. Specify one or more columns for row matching in the sink, as in the sketch below.
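As a hedged sketch (the transformation, stream, and column names are hypothetical), an alter-row transformation marks rows for update or delete, and the downstream sink then needs `keys` to match those rows:

```
SapCdcSource alterRow(updateIf(rowMod == 'U'),
    deleteIf(rowMod == 'D')) ~> MarkRows
MarkRows sink(allowSchemaDrift: true,
    validateSchema: false,
    insertable: true,
    updateable: true,
    deletable: true,
    upsertable: false,
    keys: ['orderId']) ~> SqlSink
```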
This section lists common error codes and messages reported by mapping data flow
### Error code: DF-Snowflake-IncompatibleDataType - **Message**: Expression type does not match column data type, expecting VARIANT but got VARCHAR.-- **Cause**: The column(s) type of input data which is string is different from the related column(s) type in the Snowflake sink transformation which is VARIANT.-- **Recommendation**: For the snowflake VARIANT, it can only accept data flow value which is struct, map or array type. If the value of your input data column(s) is JSON or XML or other string, use a parse transformation before the Snowflake sink transformation to covert value into struct, map or array type.
+- **Cause**: The type of the input data column is string, which differs from the VARIANT type of the related column in the Snowflake sink transformation.
+- **Recommendation**: The Snowflake VARIANT can only accept a data flow value that is a struct, map, or array type. If the value of your input data columns is JSON, XML, or another string, use a parse transformation before the Snowflake sink transformation to convert the value into a struct, map, or array type, as in the sketch below.
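A minimal sketch (the column names and the parsed schema are hypothetical): a parse transformation turns a JSON string column into a struct that the Snowflake VARIANT column can accept:

```
source1 parse(parsedPayload = payloadJson ? (id as integer,
        name as string),
    format: 'json',
    documentForm: 'singleDocument') ~> ParseJson
```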
### Error code: DF-Snowflake-InvalidDataType - **Message**: The spark type is not supported in snowflake. - **Cause**: An invalid data type is provided in the Snowflake.-- **Recommendation**: Please use the derive transformation before applying the Snowflake sink to update the related column of the input data into the string type.
+- **Recommendation**: Use the derive transformation before applying the Snowflake sink to update the related column of the input data into the string type.
### Error code: DF-Snowflake-InvalidStageConfiguration
This section lists common error codes and messages reported by mapping data flow
### Error code: DF-SQLDW-ErrorRowsFound - **Cause**: Error/invalid rows are found when writing to the Azure Synapse Analytics sink.-- **Recommendation**: Please find the error rows in the rejected data storage location if it is configured.
+- **Recommendation**: Find the error rows in the rejected data storage location if it is configured.
### Error code: DF-SQLDW-ExportErrorRowFailed - **Message**: Exception is happened while writing error rows to storage. - **Cause**: An exception happened while writing error rows to the storage.-- **Recommendation**: Please check your rejected data linked service configuration.
+- **Recommendation**: Check your rejected data linked service configuration.
### Error code: DF-SQLDW-IncorrectLinkedServiceConfiguration - **Message**: The linked service is incorrectly configured as type 'Azure Synapse Analytics' instead of 'Azure SQL Database'. Please create a new linked service of type 'Azure SQL Database'<br>
-Note: Please check that the given database is of type 'Dedicated SQL pool (formerly SQL DW)' for linked service type 'Azure Synapse Analytics'.
+Note: Please check that the given database is of type 'Dedicated SQL pool (formerly SQL Data Warehouse)' for linked service type 'Azure Synapse Analytics'.
- **Cause**: The linked service is incorrectly configured as type **Azure Synapse Analytics** instead of **Azure SQL Database**.  - **Recommendation**: Create a new linked service of type **Azure SQL Database**, and check that the given database is of type Dedicated SQL pool (formerly SQL DW) for linked service type **Azure Synapse Analytics**.
Note: Please check that the given database is of type 'Dedicated SQL pool (forme
- **Message**: Blob storage staging properties should be specified. - **Cause**: Invalid blob storage staging settings are provided-- **Recommendation**: Please check if the Blob linked service used for staging has correct properties.
+- **Recommendation**: Check if the Blob linked service used for staging has correct properties.
### Error code: DF-SQLDW-InvalidConfiguration - **Message**: ADLS Gen2 storage staging properties should be specified. Either one of key or tenant/spnId/spnCredential/spnCredentialType or miServiceUri/miServiceToken is required. - **Cause**: Invalid ADLS Gen2 staging properties are provided.-- **Recommendation**: Please update ADLS Gen2 storage staging settings to have one of **key** or **tenant/spnId/spnCredential/spnCredentialType** or **miServiceUri/miServiceToken**.
+- **Recommendation**: Update ADLS Gen2 storage staging settings to have one of **key** or **tenant/spnId/spnCredential/spnCredentialType** or **miServiceUri/miServiceToken**.
### Error code: DF-SQLDW-InvalidGen2StagingConfiguration
Note: Please check that the given database is of type 'Dedicated SQL pool (forme
### Error code: DF-SQLDW-StagingStorageNotSupport - **Message**: Staging Storage with partition DNS enabled is not supported if enable staging. Please uncheck enable staging in sink using Synapse Analytics.-- **Cause**: Staging storage with partition DNS enabled is not supported if you enable staging.
+- **Cause**: Staging storage with partition DNS enabled isn't supported if you enable staging.
- **Recommendations**: Uncheck **Enable staging** in sink when using Azure Synapse Analytics. ### Error code: DF-SQLDW-DataTruncation
Note: Please check that the given database is of type 'Dedicated SQL pool (forme
### Error code: DF-Synapse-DBNotExist -- **Cause**: The database does not exist.
+- **Cause**: The database doesn't exist.
- **Recommendation**: Check if the database exists. ### Error code: DF-Synapse-InvalidDatabaseType - **Message**: Database type is not supported.-- **Cause**: The database type is not supported.
+- **Cause**: The database type isn't supported.
- **Recommendation**: Check the database type and change it to the proper one. ### Error code: DF-Synapse-InvalidFormat - **Message**: Format is not supported.-- **Cause**: The format is not supported.
+- **Cause**: The format isn't supported.
- **Recommendation**: Check the format and change it to the proper one. ### Error code: DF-Synapse-InvalidOperation -- **Cause**: The operation is not supported.
+- **Cause**: The operation isn't supported.
- **Recommendation**: Change the **Update method** configuration, because delete, update, and upsert aren't supported in Workspace DB. ### Error code: DF-Synapse-InvalidTableDBName - **Message**: The table/database name is not a valid name for tables/databases. Valid names only contain alphabet characters, numbers and _.-- **Cause**: The table/database name is not valid.
+- **Cause**: The table/database name isn't valid.
- **Recommendation**: Choose a valid name for the table/database. Valid names only contain alphabet characters, numbers, and `_`. ### Error code: DF-Synapse-StoredProcedureNotSupported
Note: Please check that the given database is of type 'Dedicated SQL pool (forme
### Error code: DF-Xml-UnsupportedExternalReferenceResource - **Message**: External reference resource in xml data file is not supported.-- **Cause**: The external reference resource in the XML data file is not supported.-- **Recommendation**: Update the XML file content when the external reference resource is not supported now.
+- **Cause**: The external reference resource in the XML data file isn't supported.
+- **Recommendation**: Update the XML file content, because the external reference resource isn't supported now.
### Error code: GetCommand OutputAsync failed
You may encounter the following issues before the improvement, but after the imp
You're affected if all of the following conditions apply: - You use Delimited Text with the Multiline setting set to True, or CDM, as the source. - The first row has more than 128 characters.
+ - The row delimiter in data files isn't `\n`.
Before the improvement, the default row delimiter `\n` may be unexpectedly used to parse delimited text files, because when the Multiline setting is set to True, it invalidates the row delimiter setting, and the row delimiter is automatically detected based on the first 128 characters. If the actual row delimiter isn't detected, it falls back to `\n`.
defender-for-cloud Configure Email Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/configure-email-notifications.md
description: Learn how to fine-tune the Microsoft Defender for Cloud security al
Previously updated : 07/23/2023 Last updated : 02/25/2024
-# Quickstart: Configure email notifications for security alerts
+# Quickstart: configure email notifications for security alerts
Security alerts need to reach the right people in your organization. By default, Microsoft Defender for Cloud emails subscription owners whenever a high-severity alert is triggered for their subscription. This page explains how to customize these notifications.
Use Defender for Cloud's **Email notifications** settings page to define prefere
- ***who* should be notified** - Emails can be sent to select individuals or to anyone with a specified Azure role for a subscription. - ***what* they should be notified about** - Modify the severity levels for which Defender for Cloud should send out notifications.
-To avoid alert fatigue, Defender for Cloud limits the volume of outgoing mails. For each subscription, Defender for Cloud sends:
+To avoid alert fatigue, Defender for Cloud limits the volume of outgoing emails. For each email address, Defender for Cloud sends:
- approximately **four emails per day** for **high-severity** alerts - approximately **two emails per day** for **medium-severity** alerts - approximately **one email per day** for **low-severity** alerts ## Availability
defender-for-cloud Container Image Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/container-image-mapping.md
-# Map Container Images from Code to Cloud
+# Map container images from code to cloud
When a vulnerability is identified in a container image stored in a container registry or running in a Kubernetes cluster, it can be difficult for a security practitioner to trace back to the CI/CD pipeline that first built the container image and identify a developer remediation owner. With DevOps security capabilities in Microsoft Defender Cloud Security Posture Management (CSPM), you can map your cloud-native applications from code to cloud to easily kick off developer remediation workflows and reduce the time to remediation of vulnerabilities in your container images.
When a vulnerability is identified in a container image stored in a container re
- An Azure account with Defender for Cloud onboarded. If you don't already have an Azure account, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - [Azure DevOps](quickstart-onboard-devops.md) or [GitHub](quickstart-onboard-github.md) environment onboarded to Microsoft Defender for Cloud. - For Azure DevOps, [Microsoft Security DevOps (MSDO) Extension](azure-devops-extension.md) installed on the Azure DevOps organization.-- For GitHub, [Microsoft Security DevOps (MSDO) Action](github-action.md) configured in your GitHub repositories.
+- For GitHub, [Microsoft Security DevOps (MSDO) Action](github-action.md) configured in your GitHub repositories. Additionally, the GitHub Workflow must have **id-token: write** permissions for federation with Defender for Cloud. For an example, see [this YAML](https://github.com/microsoft/security-devops-action/blob/7e3060ae1e6a9347dd7de6b28195099f39852fe2/.github/workflows/on-push-verification.yml).
- [Defender CSPM](tutorial-enable-cspm-plan.md) enabled. - The container images must be built using [Docker](https://www.docker.com/) and the Docker client must be able to access the Docker server during the build.
The following is an example of an advanced query that utilizes container image m
1. Add the container image mapping tool to your MSDO workflow:
- ```yml
- # Run analyzers
- - name: Run Microsoft Security DevOps Analysis
- uses: microsoft/security-devops-action@latest
- id: msdo
- with:
- include-tools: container-mapping
- ```
+```yml
+name: Build and Map Container Image
+
+on: [push, workflow_dispatch]
+
+jobs:
+ build:
+ runs-on: ubuntu-latest
+ # Set Permissions
+ permissions:
+ contents: read
+ id-token: write
+ steps:
+ - uses: actions/checkout@v3
+ - uses: actions/setup-python@v4
+ with:
+ python-version: '3.8'
+ # Set Authentication to Container Registry of Choice
+ - name: Azure Container Registry Login
+ uses: Azure/docker-login@v1
+ with:
+ login-server: <containerRegistryLoginServer>
+ username: ${{ secrets.ACR_USERNAME }}
+ password: ${{ secrets.ACR_PASSWORD }}
+ # Build and Push Image
+ - name: Build and Push the Docker image
+ uses: docker/build-push-action@v2
+ with:
+ push: true
+ tags: ${{ secrets.IMAGE_TAG }}
+ file: Dockerfile
+ # Run Mapping Tool in MSDO
+ - name: Run Microsoft Security DevOps Analysis
+ uses: microsoft/security-devops-action@latest
+ id: msdo
+ with:
+ include-tools: container-mapping
+```
After building a container image in a GitHub workflow and pushing it to a registry, see the mapping by using the [Cloud Security Explorer](how-to-manage-cloud-security-explorer.md):
defender-for-cloud Defender For Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-introduction.md
Malware Scanning is charged on a per-gigabyte basis for scanned data. To ensure
By default, the limit is set to 5,000 GB per month per storage account. Once this threshold is exceeded, scanning will cease for the remaining blobs, with a 20-GB confidence interval. For configuration details, refer to [configure Defender for Storage](../storage/common/azure-defender-storage-configure.md). > [!IMPORTANT]
-> Malware scanning in Defender for Storage is not included for free in the first 30 day trial and will be charged from the first day in accordance with the pricing scheme available on the Defender for Cloud [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
+> Malware scanning in Defender for Storage is not included for free in the first 30 day trial and will be charged from the first day in accordance with the pricing scheme available on the Defender for Cloud [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/). Malware scanning will also incur additional charges for other Azure services: Azure Storage read operations, Azure Storage blob indexing, and Azure Event Grid notifications.
### Enablement at scale with granular controls
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
Ensure that your SSM Agent has the managed policy [AmazonSSMManagedInstanceCore]
**You must have the SSM Agent for auto provisioning Arc agent on EC2 machines. If the SSM doesn't exist, or is removed from the EC2, the Arc provisioning won't be able to proceed.** > [!NOTE]
-> As part of the cloud formation template that is run during the onboarding process, an automation process is created and triggered every 30 days, over all the EC2s that existed during the initial run of the cloud formation. The goal of this scheduled scan is to ensure that all the relevant EC2s have an IAM profile with the required IAM policy that allows Defender for Cloud to access, manage, and provide the relevant security features (including the Arc agent provisioning). The scan does not apply to EC2s that were created after the run of the cloud formation.
+> As part of the CloudFormation template that is run during the onboarding process, an automation process is created and triggered every 30 days, over all the EC2s that existed during the initial run of the CloudFormation. The goal of this scheduled scan is to ensure that all the relevant EC2s have an IAM profile with the required IAM policy that allows Defender for Cloud to access, manage, and provide the relevant security features (including the Arc agent provisioning). The scan does not apply to EC2s that were created after the run of the CloudFormation.
If you want to manually install Azure Arc on your existing and future EC2 instances, use the [EC2 instances should be connected to Azure Arc](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/231dee23-84db-44d2-bd9d-c32fbcfb42a3) recommendation to identify instances that don't have Azure Arc installed.
defender-for-cloud Recommendations Reference Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference-devops.md
DevOps recommendations don't affect your [secure score](secure-score-security-co
### [GitHub repositories should have dependency vulnerability scanning findings resolved](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsWithRulesBlade/assessmentKey/945f7b1c-8def-4ab3-a44d-1416060104b3/showSecurityCenterCommandBar~/false)
-**Description**: GitHub repositories should have dependency vulnerability scanning findings resolved
+**Description**: GitHub repositories should have dependency vulnerability scanning findings resolved.
**Severity**: Medium
DevOps recommendations don't affect your [secure score](secure-score-security-co
### [GitLab projects should have dependency vulnerability scanning findings resolved](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsWithRulesBlade/assessmentKey/1bc53aae-c92e-406b-9693-d46caf3934fa/showSecurityCenterCommandBar~/false)
-**Description**: GitHub repositories should have dependency vulnerability scanning findings resolved
+**Description**: GitHub repositories should have dependency vulnerability scanning findings resolved.
**Severity**: Medium
DevOps recommendations don't affect your [secure score](secure-score-security-co
### [Code repositories should have secret scanning findings resolved](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsWithRulesBlade/assessmentKey/4e07c7d0-e06c-47d7-a4a9-8c7b748d1b27/showSecurityCenterCommandBar~/false)
-**Description**: DevOps security in Defender for Cloud has found a secret in code repositories.  This should be remediated immediately to prevent a security breach.  Secrets found in repositories can be leaked or discovered by adversaries, leading to compromise of an application or service. For Azure DevOps, the Microsoft Security DevOps CredScan tool only scans builds on which it has been configured to run. Therefore, results may not reflect the complete status of secrets in your repositories.
+**Description**: DevOps security in Defender for Cloud has found a secret in code repositories. This should be remediated immediately to prevent a security breach. Secrets found in repositories can be leaked or discovered by adversaries, leading to compromise of an application or service. For Azure DevOps, the Microsoft Security DevOps CredScan tool only scans builds on which it has been configured to run. Therefore, results might not reflect the complete status of secrets in your repositories.
(No related policy) **Severity**: High
defender-for-cloud Recommendations Reference Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference-gcp.md
This article lists all the recommendations you might see in Microsoft Defender f
To learn about actions that you can take in response to these recommendations, see [Remediate recommendations in Defender for Cloud](implement-security-recommendations.md).
-Your secure score is based on the number of security recommendations you've completed. To decide which recommendations to resolve first, look at the severity of each recommendation and its potential impact on your secure score.
+Your secure score is based on the number of security recommendations you completed. To decide which recommendations to resolve first, look at the severity of each recommendation and its potential effect on your secure score.
## GCP Compute recommendations ### [Compute Engine VMs should use the Container-Optimized OS](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/3e33004b-f0b8-488d-85ed-61336c7ad4ca)
-**Description**: This recommendation evaluates the config property of a node pool for the key-value pair, 'imageType': 'COS'.
+**Description**: This recommendation evaluates the config property of a node pool for the key-value pair, 'imageType': 'COS'.
**Severity**: Low ### [Ensure 'Block Project-wide SSH keys' is enabled for VM instances](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/00f8a6a6-cf69-4c11-822e-3ebf4910e545)
-**Description**: It is recommended to use Instance specific SSH key(s) instead of using common/shared project-wide SSH key(s) to access Instances.
-Project-wide SSH keys are stored in Compute/Project-meta-data. Project wide SSH keys can be used to login into all the instances within project. Using project-wide SSH keys eases the SSH key management but if compromised, poses the security risk which can impact all the instances within project.
- It is recommended to use Instance specific SSH keys which can limit the attack surface if the SSH keys are compromised.
+**Description**: It's recommended to use Instance specific SSH key(s) instead of using common/shared project-wide SSH key(s) to access Instances.
+Project-wide SSH keys are stored in Compute/Project-meta-data. Project-wide SSH keys can be used to log in to all the instances within a project. Using project-wide SSH keys eases SSH key management, but if compromised, they pose a security risk that can affect all the instances within a project.
+ It's recommended to use Instance specific SSH keys that can limit the attack surface if the SSH keys are compromised.
**Severity**: Medium ### [Ensure Compute instances are launched with Shielded VM enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1a4b3b3a-7de9-4aa4-a29b-580d59b43f79)
-**Description**: To defend against advanced threats and ensure that the boot loader and firmware on your VMs are signed and untampered, it is recommended that Compute instances are launched with Shielded VM enabled.
+**Description**: To defend against advanced threats and ensure that the boot loader and firmware on your VMs are signed and untampered, it's recommended that Compute instances are launched with Shielded VM enabled.
Shielded VMs are virtual machines (VMs) on Google Cloud Platform hardened by a set of security controls that help defend against rootkits and bootkits. Shielded VM offers verifiable integrity of your Compute Engine VM instances, so you can be confident your instances haven't been compromised by boot- or kernel-level malware or rootkits. Shielded VM's verifiable integrity is achieved through the use of Secure Boot, virtual trusted platform module (vTPM)-enabled Measured Boot, and integrity monitoring.
-Shielded VM instances run firmware which is signed and verified using Google's Certificate Authority, ensuring that the instance's firmware is unmodified and establishing the root of trust for Secure Boot.
+Shielded VM instances run firmware that is signed and verified using Google's Certificate Authority, ensuring that the instance's firmware is unmodified and establishing the root of trust for Secure Boot.
Integrity monitoring helps you understand and make decisions about the state of your VM instances, and the Shielded VM vTPM enables Measured Boot by performing the measurements needed to create a known good boot baseline, called the integrity policy baseline. The integrity policy baseline is used for comparison with measurements from subsequent VM boots to determine if anything has changed. Secure Boot helps ensure that the system only runs authentic software by verifying the digital signature of all boot components, and halting the boot process if signature verification fails.
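One way to spot-check this recommendation is to read an instance's `shieldedInstanceConfig` through the Compute Engine API. A minimal Python sketch, assuming the google-api-python-client package, Application Default Credentials, and placeholder project, zone, and instance names:

```python
from googleapiclient import discovery  # pip install google-api-python-client

# Placeholder identifiers; replace with your own values.
PROJECT, ZONE, INSTANCE = "my-project", "us-central1-a", "my-vm"

compute = discovery.build("compute", "v1")
instance = compute.instances().get(
    project=PROJECT, zone=ZONE, instance=INSTANCE
).execute()

# shieldedInstanceConfig carries the three Shielded VM controls.
config = instance.get("shieldedInstanceConfig", {})
for key in ("enableSecureBoot", "enableVtpm", "enableIntegrityMonitoring"):
    print(f"{key}: {config.get(key, False)}")
```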
### [Ensure 'Enable connecting to serial ports' is not enabled for VM Instance](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/7e060336-2c9e-4289-a2a6-8d301bad47bb)
-**Description**: Interacting with a serial port is often referred to as the serial console, which is similar to using a terminal window, in that input and output is entirely in text mode and there is no graphical interface or mouse support.
+**Description**: Interacting with a serial port is often referred to as the serial console, which is similar to using a terminal window, in that input and output is entirely in text mode and there's no graphical interface or mouse support.
If you enable the interactive serial console on an instance, clients can attempt to connect to that instance from any IP address. Therefore interactive serial console support should be disabled.
-A virtual machine instance has four virtual serial ports. Interacting with a serial port is similar to using a terminal window, in that input and output is entirely in text mode and there is no graphical interface or mouse support.
+A virtual machine instance has four virtual serial ports. Interacting with a serial port is similar to using a terminal window, in that input and output is entirely in text mode and there's no graphical interface or mouse support.
The instance's operating system, BIOS, and other system-level entities often write output to the serial ports, and can accept input such as commands or answers to prompts. Typically, these system-level entities use the first serial port (port 1) and serial port 1 is often referred to as the serial console.
-The interactive serial console does not support IP-based access restrictions such as IP allowlists. If you enable the interactive serial console on an instance, clients can attempt to connect to that instance from any IP address.
+The interactive serial console doesn't support IP-based access restrictions such as IP allowlists. If you enable the interactive serial console on an instance, clients can attempt to connect to that instance from any IP address.
This allows anybody to connect to that instance if they know the correct SSH key, username, project ID, zone, and instance name. Therefore interactive serial console support should be disabled.
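The interactive serial console is governed by the `serial-port-enable` metadata key, which can be set at the instance or project level. A minimal Python sketch that checks the instance-level key, under the same assumptions as the previous sketch (google-api-python-client, Application Default Credentials, placeholder names):

```python
from googleapiclient import discovery

PROJECT, ZONE, INSTANCE = "my-project", "us-central1-a", "my-vm"  # placeholders

compute = discovery.build("compute", "v1")
instance = compute.instances().get(
    project=PROJECT, zone=ZONE, instance=INSTANCE
).execute()

# Look for the serial-port-enable key in the instance metadata; note that
# project-wide metadata can also enable the serial console.
items = instance.get("metadata", {}).get("items", [])
value = next((i["value"] for i in items if i["key"] == "serial-port-enable"), None)
print("serial console enabled:", str(value).lower() in ("true", "1"))
```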
### [Ensure 'log_duration' database flag for Cloud SQL PostgreSQL instance is set to 'on'](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/272820a7-06ce-44b3-8318-4ec1f82237dc) **Description**: Enabling the log_duration setting causes the duration of each completed statement to be logged.
- This does not logs the text of the query and thus behaves different from the log_min_duration_statement flag.
- This parameter cannot be changed after session start.
+ This doesn't log the text of the query, and thus behaves differently from the log_min_duration_statement flag.
+ This parameter can't be changed after session start.
Monitoring the time taken to execute the queries can be crucial in identifying any resource-hogging queries and assessing the performance of the server. Further steps such as load balancing and use of optimized queries can be taken to ensure the performance and stability of the server. This recommendation is applicable to PostgreSQL database instances.
**Description**: The PostgreSQL executor is responsible to execute the plan handed over by the PostgreSQL planner. The executor processes the plan recursively to extract the required set of rows. The "log_executor_stats" flag controls the inclusion of PostgreSQL executor performance statistics in the PostgreSQL logs for each query.
- The "log_executor_stats" flag enables a crude profiling method for logging PostgreSQL executor performance statistics which even though can be useful for troubleshooting, it may increase the amount of logs significantly and have performance overhead.
+ The "log_executor_stats" flag enables a crude profiling method for logging PostgreSQL executor performance statistics, which even though can be useful for troubleshooting, it might increase the amount of logs significantly and have performance overhead.
This recommendation is applicable to PostgreSQL database instances. **Severity**: Low
**Description**: The "log_min_error_statement" flag defines the minimum message severity level that are considered as an error statement. Messages for error statements are logged with the SQL statement.
- Valid values include "DEBUG5", "DEBUG4", "DEBUG3", "DEBUG2", "DEBUG1", "INFO", "NOTICE", "WARNING", "ERROR", "LOG", "FATAL", and "PANIC".
+ Valid values include "DEBUG5", "DEBUG4", "DEBUG3", "DEBUG2", "DEBUG1", "INFO", "NOTICE", "WARNING", "ERROR", "LOG", "FATAL", and "PANIC".
Each severity level includes the subsequent levels mentioned above. Ensure a value of ERROR or stricter is set. Auditing helps in troubleshooting operational problems and also permits forensic analysis.
- If "log_min_error_statement" is not set to the correct value, messages may not be classified as error messages appropriately.
- Considering general log messages as error messages would make is difficult to find actual errors and considering only stricter severity levels as error messages may skip actual errors to log their SQL statements.
+ If "log_min_error_statement" isn't set to the correct value, messages might not be classified as error messages appropriately.
+ Considering general log messages as error messages would make it difficult to find actual errors, and considering only stricter severity levels as error messages might skip actual errors to log their SQL statements.
The "log_min_error_statement" flag should be set to "ERROR" or stricter. This recommendation is applicable to PostgreSQL database instances.
**Description**: The PostgreSQL planner/optimizer is responsible for parsing and verifying the syntax of each query received by the server. If the syntax is correct, a "parse tree" is built up; otherwise, an error is generated. The "log_parser_stats" flag controls the inclusion of parser performance statistics in the PostgreSQL logs for each query.
- The "log_parser_stats" flag enables a crude profiling method for logging parser performance statistics which even though can be useful for troubleshooting, it may increase the amount of logs significantly and have performance overhead.
+ The "log_parser_stats" flag enables a crude profiling method for logging parser performance statistics, which even though can be useful for troubleshooting, it might increase the amount of logs significantly and have performance overhead.
This recommendation is applicable to PostgreSQL database instances. **Severity**: Low
**Description**: The same SQL query can be executed in multiple ways and still produce different results. The PostgreSQL planner/optimizer is responsible for creating an optimal execution plan for each query. The "log_planner_stats" flag controls the inclusion of PostgreSQL planner performance statistics in the PostgreSQL logs for each query.
- The "log_planner_stats" flag enables a crude profiling method for logging PostgreSQL planner performance statistics which even though can be useful for troubleshooting, it may increase the amount of logs significantly and have performance overhead.
+ The "log_planner_stats" flag enables a crude profiling method for logging PostgreSQL planner performance statistics, which even though can be useful for troubleshooting, it might increase the amount of logs significantly and have performance overhead.
This recommendation is applicable to PostgreSQL database instances. **Severity**: Low
### [Ensure 'log_statement_stats' database flag for Cloud SQL PostgreSQL instance is set to 'off'](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c36e73b7-ee30-4684-a1ad-2b878d2b10bf) **Description**: The "log_statement_stats" flag controls the inclusion of end-to-end performance statistics of a SQL query in the PostgreSQL logs for each query.
- This cannot be enabled with other module statistics ("log_parser_stats", "log_planner_stats", "log_executor_stats").
+ This can't be enabled with other module statistics (*log_parser_stats*, *log_planner_stats*, *log_executor_stats*).
The "log_statement_stats" flag enables a crude profiling method for logging end to end performance statistics of a SQL query.
- This can be useful for troubleshooting but may increase the amount of logs significantly and have performance overhead.
+ This can be useful for troubleshooting but might increase the amount of logs significantly and have performance overhead.
This recommendation is applicable to PostgreSQL database instances. **Severity**: Low ### [Ensure that Compute instances do not have public IP addresses](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/8bdd13ad-a9d2-4910-8b06-9c4cddb55abb)
-**Description**: Compute instances should not be configured to have external IP addresses.
-To reduce your attack surface, Compute instances should not have public IP addresses. Instead, instances should be configured behind load balancers, to minimize the instance's exposure to the internet.
-Instances created by GKE should be excluded because some of them have external IP addresses and cannot be changed by editing the instance settings.
-These VMs have names that start with "gke-" and are labeled "goog-gke-node".
+**Description**: Compute instances shouldn't be configured to have external IP addresses.
+To reduce your attack surface, Compute instances shouldn't have public IP addresses. Instead, instances should be configured behind load balancers, to minimize the instance's exposure to the internet.
+Instances created by GKE should be excluded because some of them have external IP addresses and can't be changed by editing the instance settings.
+These VMs have names that start with *gke-* and are labeled *goog-gke-node*.
**Severity**: High ### [Ensure that instances are not configured to use the default service account](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/a107c44c-75e4-4607-b1b0-cd5cfcf249e0)
-**Description**: It is recommended to configure your instance to not use the default Compute Engine service account because it has the Editor role on the project.
+**Description**: It's recommended to configure your instance to not use the default Compute Engine service account because it has the Editor role on the project.
The default Compute Engine service account has the Editor role on the project, which allows read and write access to most Google Cloud Services.
-To defend against privilege escalations if your VM is compromised and prevent an attacker from gaining access to all of your project, it is recommended to not use the default Compute Engine service account.
+To defend against privilege escalations if your VM is compromised and prevent an attacker from gaining access to all of your project, it's recommended to not use the default Compute Engine service account.
Instead, you should create a new service account and assign only the permissions needed by your instance. The default Compute Engine service account is named `[PROJECT_NUMBER]-compute@developer.gserviceaccount.com`.
-VMs created by GKE should be excluded. These VMs have names that start with "gke-" and are labeled "goog-gke-node".
+VMs created by GKE should be excluded. These VMs have names that start with *gke-* and are labeled *goog-gke-node*.
**Severity**: High ### [Ensure that instances are not configured to use the default service account with full access to all Cloud APIs](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/a8c1fcf1-ca66-4fc1-b5e6-51d7f4f76782)
-**Description**: To support principle of least privileges and prevent potential privilege escalation it is recommended that instances are not assigned to default service account "Compute Engine default service account" with Scope "Allow full access to all Cloud APIs".
-Along with ability to optionally create, manage and use user managed custom service accounts, Google Compute Engine provides default service account "Compute Engine default service account" for an instances to access necessary cloud services.
+**Description**: To support the principle of least privilege and prevent potential privilege escalation, it's recommended that instances aren't assigned to the default service account "Compute Engine default service account" with scope "Allow full access to all Cloud APIs".
+Along with the ability to optionally create, manage, and use user-managed custom service accounts, Google Compute Engine provides the default service account "Compute Engine default service account" for an instance to access necessary cloud services.
"Project Editor" role is assigned to "Compute Engine default service account" hence, This service account has almost all capabilities over all cloud services except billing.
-However, when "Compute Engine default service account" assigned to an instance it can operate in 3 scopes.
+However, when "Compute Engine default service account" assigned to an instance it can operate in three scopes.
-1. Allow default access: Allows only minimum access required to run an Instance (Least Privileges)
-1. Allow full access to all Cloud APIs: Allow full access to all the cloud APIs/Services (Too much access)
+1. Allow default access: Allows only minimum access required to run an Instance (Least Privileges).
+1. Allow full access to all Cloud APIs: Allow full access to all the cloud APIs/Services (Too much access).
1. Set access for each API: Allows the instance administrator to choose only those APIs that are needed to perform specific business functionality expected by the instance.
-When an instance is configured with "Compute Engine default service account" with Scope "Allow full access to all Cloud APIs", based on IAM roles assigned to the user(s) accessing Instance,
-it may allow user to perform cloud operations/API calls that user is not supposed to perform leading to successful privilege escalation.
-VMs created by GKE should be excluded. These VMs have names that start with "gke-" and are labeled "goog-gke-node".
+When an instance is configured with "Compute Engine default service account" with scope "Allow full access to all Cloud APIs", then based on the IAM roles assigned to the user(s) accessing the instance,
+it might allow a user to perform cloud operations/API calls that the user isn't supposed to perform, leading to privilege escalation.
+VMs created by GKE should be excluded. These VMs have names that start with *gke-* and are labeled *goog-gke-node*.
**Severity**: Medium ### [Ensure that IP forwarding is not enabled on Instances](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/0ba588a6-4539-4e67-bc62-d7b2b51300fb)
-**Description**: Compute Engine instance cannot forward a packet unless the source IP address of the packet matches the IP address of the instance. Similarly, GCP won't deliver a packet whose destination IP address is different than the IP address of the instance receiving the packet.
+**Description**: A Compute Engine instance can't forward a packet unless the source IP address of the packet matches the IP address of the instance. Similarly, GCP won't deliver a packet whose destination IP address is different from the IP address of the instance receiving the packet.
However, both capabilities are required if you want to use instances to help route packets. Forwarding of data packets should be disabled to prevent data loss or information disclosure.
-Compute Engine instance cannot forward a packet unless the source IP address of the packet matches the IP address of the instance. Similarly, GCP won't deliver a packet whose destination IP address is different than the IP address of the instance receiving the packet.
- However, both capabilities are required if you want to use instances to help route packets. To enable this source and destination IP check, disable the canIpForward field, which allows an instance to send and receive packets with non-matching destination or source IPs.
+A Compute Engine instance can't forward a packet unless the source IP address of the packet matches the IP address of the instance. Similarly, GCP won't deliver a packet whose destination IP address is different from the IP address of the instance receiving the packet.
+ However, both capabilities are required if you want to use instances to help route packets. To enable this source and destination IP check, disable the canIpForward field, which allows an instance to send and receive packets with nonmatching destination or source IPs.
**Severity**: Medium
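To audit this setting across a zone, you can list instances and flag any with `canIpForward` set. A minimal Python sketch, assuming google-api-python-client, Application Default Credentials, and placeholder project and zone names (pagination omitted for brevity):

```python
from googleapiclient import discovery

PROJECT, ZONE = "my-project", "us-central1-a"  # placeholders

compute = discovery.build("compute", "v1")
result = compute.instances().list(project=PROJECT, zone=ZONE).execute()

# canIpForward is set at creation time; True means the instance may send
# and receive packets with nonmatching source or destination IPs.
for inst in result.get("items", []):
    if inst.get("canIpForward"):
        print(f"{inst['name']}: IP forwarding is enabled")
```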
### [Ensure that the 'log_lock_waits' database flag for Cloud SQL PostgreSQL instance is set to 'on'](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/8191f530-fde7-4177-827a-43ce0f69ffe7)
-**Description**: Enabling the "log_lock_waits" flag for a PostgreSQL instance creates a log for any session waits that take longer than the alloted "deadlock_timeout" time to acquire a lock.
+**Description**: Enabling the "log_lock_waits" flag for a PostgreSQL instance creates a log for any session waits that take longer than the allotted "deadlock_timeout" time to acquire a lock.
The deadlock timeout defines the time to wait on a lock before checking for any conditions. Frequent overruns of the deadlock timeout can be an indication of an underlying issue.
- Logging such waits on locks by enabling the log_lock_waits flag can be used to identify poor performance due to locking delays or if a specially-crafted SQL is attempting to starve resources through holding locks for excessive amounts of time.
+ Logging such waits on locks by enabling the log_lock_waits flag can be used to identify poor performance due to locking delays or if a specially crafted SQL is attempting to starve resources through holding locks for excessive amounts of time.
This recommendation is applicable to PostgreSQL database instances. **Severity**: Low ### [Ensure that the 'log_min_duration_statement' database flag for Cloud SQL PostgreSQL instance is set to '-1'](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1c9e237b-419f-4e73-b43a-94b5863dd73e)
-**Description**: The "log_min_duration_statement" flag defines the minimum amount of execution time of a statement in milliseconds where the total duration of the statement is logged. Ensure that "log_min_duration_statement" is disabled, i.e., a value of -1 is set.
- Logging SQL statements may include sensitive information that should not be recorded in logs. This recommendation is applicable to PostgreSQL database instances.
+**Description**: The "log_min_duration_statement" flag defines the minimum amount of execution time of a statement in milliseconds where the total duration of the statement is logged. Ensure that "log_min_duration_statement" is disabled, that is, a value of -1 is set.
+ Logging SQL statements might include sensitive information that shouldn't be recorded in logs. This recommendation is applicable to PostgreSQL database instances.
**Severity**: Low
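Cloud SQL database flags can be set programmatically through the SQL Admin API. A minimal Python sketch that sets this flag, assuming google-api-python-client, Application Default Credentials, and placeholder project and instance names:

```python
from googleapiclient import discovery

PROJECT, INSTANCE = "my-project", "my-postgres-instance"  # placeholders

sqladmin = discovery.build("sqladmin", "v1beta4")

# Caution: patching settings.databaseFlags replaces the whole flag list,
# so merge in any flags the instance already has before calling patch.
body = {
    "settings": {
        "databaseFlags": [
            {"name": "log_min_duration_statement", "value": "-1"},
        ]
    }
}
operation = sqladmin.instances().patch(
    project=PROJECT, instance=INSTANCE, body=body
).execute()
print("operation:", operation.get("name"))
```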
**Description**: The "log_min_error_statement" flag defines the minimum message severity level that is considered as an error statement. Messages for error statements are logged with the SQL statement.
- Valid values include "DEBUG5", "DEBUG4", "DEBUG3", "DEBUG2", "DEBUG1", "INFO", "NOTICE", "WARNING", "ERROR", "LOG", "FATAL", and "PANIC".
+ Valid values include "DEBUG5", "DEBUG4", "DEBUG3", "DEBUG2", "DEBUG1", "INFO", "NOTICE", "WARNING", "ERROR", "LOG", "FATAL", and "PANIC".
Each severity level includes the subsequent levels mentioned above. Note: To effectively turn off logging failing statements, set this parameter to PANIC. ERROR is considered the best practice setting. Changes should only be made in accordance with the organization's logging policy. Auditing helps in troubleshooting operational problems and also permits forensic analysis.
- If "log_min_error_statement" is not set to the correct value, messages may not be classified as error messages appropriately.
- Considering general log messages as error messages would make it difficult to find actual errors, while considering only stricter severity levels as error messages may skip actual errors to log their SQL statements.
+ If "log_min_error_statement" isn't set to the correct value, messages might not be classified as error messages appropriately.
+ Considering general log messages as error messages would make it difficult to find actual errors, while considering only stricter severity levels as error messages might skip actual errors to log their SQL statements.
The "log_min_error_statement" flag should be set in accordance with the organization's logging policy. This recommendation is applicable to PostgreSQL database instances.
### [Ensure that the 'log_temp_files' database flag for Cloud SQL PostgreSQL instance is set to '0'](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/29622fc0-14dc-4d65-a5a8-e9a39ffc4b62)
-**Description**: PostgreSQL can create a temporary file for actions such as sorting, hashing and temporary query results when these operations exceed "work_mem".
- The "log_temp_files" flag controls logging names and the file size when it is deleted.
+**Description**: PostgreSQL can create a temporary file for actions such as sorting, hashing, and temporary query results when these operations exceed "work_mem".
+ The "log_temp_files" flag controls logging names and the file size when it's deleted.
Configuring "log_temp_files" to 0 causes all temporary file information to be logged, while positive values log only files whose size is greater than or equal to the specified number of kilobytes. A value of "-1" disables temporary file information logging.
- If all temporary files are not logged, it may be more difficult to identify potential performance issues that may be due to either poor application coding or deliberate resource starvation attempts.
+ If all temporary files aren't logged, it might be more difficult to identify potential performance issues that might be due to either poor application coding or deliberate resource starvation attempts.
**Severity**: Low
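To verify flags like `log_temp_files` without changing anything, you can read the instance settings back through the same SQL Admin API (same assumptions as the previous sketch):

```python
from googleapiclient import discovery

PROJECT, INSTANCE = "my-project", "my-postgres-instance"  # placeholders

sqladmin = discovery.build("sqladmin", "v1beta4")
instance = sqladmin.instances().get(project=PROJECT, instance=INSTANCE).execute()

# A value of "0" logs every temporary file; unset means no such logging.
flags = {f["name"]: f["value"] for f in instance["settings"].get("databaseFlags", [])}
print("log_temp_files:", flags.get("log_temp_files", "<not set>"))
```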
By default, Google Compute Engine encrypts all data at rest. Compute Engine handles and manages this encryption for you without any additional actions on your part.
However, if you wanted to control and manage this encryption yourself, you can provide your own encryption keys. If you provide your own encryption keys, Compute Engine uses your key to protect the Google-generated keys used to encrypt and decrypt your data. Only users who can provide the correct key can use resources protected by a customer-supplied encryption key.
-Google does not store your keys on its servers and cannot access your protected data unless you provide the key.
-This also means that if you forget or lose your key, there is no way for Google to recover the key or to recover any data encrypted with the lost key.
+Google doesn't store your keys on its servers and can't access your protected data unless you provide the key.
+This also means that if you forget or lose your key, there's no way for Google to recover the key or to recover any data encrypted with the lost key.
At least business-critical VMs should have VM disks encrypted with CSEK. **Severity**: Medium ### [GCP projects should have Azure Arc auto provisioning enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1716d754-8d50-4b90-87b6-0404cad9b4e3)
-**Description**: For full visibility of the security content from Microsoft Defender for servers, GCP VM instances should be connected to Azure Arc. To ensure that all eligible VM instances automatically receive Azure Arc, enable auto-provisioning from Defender for Cloud at the GCP project level. Learn more about [Azure Arc](/azure/azure-arc/servers/overview), and [Microsoft Defender for Servers](/azure/security-center/defender-for-servers-introduction).
+**Description**: For full visibility of the security content from Microsoft Defender for servers, GCP VM instances should be connected to Azure Arc. To ensure that all eligible VM instances automatically receive Azure Arc, enable autoprovisioning from Defender for Cloud at the GCP project level. Learn more about [Azure Arc](/azure/azure-arc/servers/overview), and [Microsoft Defender for Servers](/azure/security-center/defender-for-servers-introduction).
**Severity**: High
### [GCP VM instances should have OS config agent installed](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/20622d8c-2a4f-4a03-9896-a5f2f7ede717)
-**Description**: To receive the full Defender for Servers capabilities using Azure Arc auto-provisioning, GCP VMs should have OS config agent enabled
+**Description**: To receive the full Defender for Servers capabilities using Azure Arc autoprovisioning, GCP VMs should have the OS config agent enabled.
**Severity**: High ### [GKE cluster's auto repair feature should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/6aeb69dc-0d01-4228-88e9-7e610891d5dd)
-**Description**: This recommendation evaluates the management property of a node pool for the key-value pair, 'key': 'autoRepair', 'value': true.
+**Description**: This recommendation evaluates the management property of a node pool for the key-value pair, 'key': 'autoRepair', 'value': true.
**Severity**: Medium ### [GKE cluster's auto upgrade feature should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1680e053-2e9b-4e77-a1c7-793ae286155e)
-**Description**: This recommendation evaluates the management property of a node pool for the key-value pair, 'key': 'autoUpgrade', 'value': true.
+**Description**: This recommendation evaluates the management property of a node pool for the key-value pair, 'key': 'autoUpgrade', 'value': true.
**Severity**: High
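Both settings live on the node pool's `management` block, which you can read through the GKE API. A minimal Python sketch, assuming google-api-python-client and Application Default Credentials; the resource name is a placeholder:

```python
from googleapiclient import discovery

# Placeholder resource name; fill in your project, location, cluster, and pool.
NAME = ("projects/my-project/locations/us-central1/"
        "clusters/my-cluster/nodePools/default-pool")

gke = discovery.build("container", "v1")
pool = gke.projects().locations().clusters().nodePools().get(name=NAME).execute()

# Both keys should be true to satisfy the auto repair/upgrade recommendations.
management = pool.get("management", {})
print("autoRepair:", management.get("autoRepair", False))
print("autoUpgrade:", management.get("autoUpgrade", False))
```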
### [GKE cluster's auto repair feature should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/6aeb69dc-0d01-4228-88e9-7e610891d5dd)
-**Description**: This recommendation evaluates the management property of a node pool for the key-value pair, 'key': 'autoRepair', 'value': true.
+**Description**: This recommendation evaluates the management property of a node pool for the key-value pair, 'key': 'autoRepair', 'value': true.
**Severity**: Medium ### [GKE cluster's auto upgrade feature should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1680e053-2e9b-4e77-a1c7-793ae286155e)
-**Description**: This recommendation evaluates the management property of a node pool for the key-value pair, 'key': 'autoUpgrade', 'value': true.
+**Description**: This recommendation evaluates the management property of a node pool for the key-value pair, 'key': 'autoUpgrade', 'value': true.
**Severity**: High
### [Ensure '3625 (trace flag)' database flag for Cloud SQL SQL Server instance is set to 'off'](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/631246fb-7192-4709-a0b3-b83e65e6b550)
-**Description**: It is recommended to set "3625 (trace flag)" database flag for Cloud SQL SQL Server instance to "off".
- Trace flags are frequently used to diagnose performance issues or to debug stored procedures or complex computer systems, but they may also be recommended by Microsoft Support to address behavior that is negatively impacting a specific workload.
+**Description**: It's recommended to set "3625 (trace flag)" database flag for Cloud SQL SQL Server instance to "off."
+ Trace flags are frequently used to diagnose performance issues or to debug stored procedures or complex computer systems, but they might also be recommended by Microsoft Support to address behavior that is negatively impacting a specific workload.
All documented trace flags and those recommended by Microsoft Support are fully supported in a production environment when used as directed.
- "3625(trace log)" Limits the amount of information returned to users who are not members of the sysadmin fixed server role, by masking the parameters of some error messages using '******'.
- This can help prevent disclosure of sensitive information, hence this is recommended to disable this flag.
+ "3625(trace log)" Limits the amount of information returned to users who aren't members of the sysadmin fixed server role, by masking the parameters of some error messages using '******.'
+ This can help prevent disclosure of sensitive information. Hence this is recommended to disable this flag.
This recommendation is applicable to SQL Server database instances. **Severity**: Medium ### [Ensure 'external scripts enabled' database flag for Cloud SQL SQL Server instance is set to 'off'](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/98b8908a-18b9-46ea-8c52-3f8db1da996f)
-**Description**: It is recommended to set "external scripts enabled" database flag for Cloud SQL SQL Server instance to off.
+**Description**: It's recommended to set "external scripts enabled" database flag for Cloud SQL SQL Server instance to off.
"external scripts enabled" enable the execution of scripts with certain remote language extensions. This property is OFF by default. When Advanced Analytics Services is installed, setup can optionally set this property to true.
### [Ensure 'remote access' database flag for Cloud SQL SQL Server instance is set to 'off'](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dddbbe7d-7e32-47d8-b319-39cbb70b8f88)
-**Description**: It is recommended to set "remote access" database flag for Cloud SQL SQL Server instance to "off".
+**Description**: It's recommended to set "remote access" database flag for Cloud SQL SQL Server instance to "off."
The "remote access" option controls the execution of stored procedures from local or remote servers on which instances of SQL Server are running. This default value for this option is 1. This grants permission to run local stored procedures from remote servers or remote stored procedures from the local server.
### [Ensure 'skip_show_database' database flag for Cloud SQL Mysql instance is set to 'on'](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/9e5b33de-bcfa-4044-93ce-4937bf8f0bbd)
-**Description**: It is recommended to set "skip_show_database" database flag for Cloud SQL Mysql instance to "on".
- 'skip_show_database' database flag prevents people from using the SHOW DATABASES statement if they do not have the SHOW DATABASES privilege.
+**Description**: It's recommended to set "skip_show_database" database flag for Cloud SQL Mysql instance to "on."
+ The 'skip_show_database' database flag prevents people from using the SHOW DATABASES statement if they don't have the SHOW DATABASES privilege.
This can improve security if you have concerns about users being able to see databases belonging to other users. Its effect depends on the SHOW DATABASES privilege: If the variable value is ON, the SHOW DATABASES statement is permitted only to users who have the SHOW DATABASES privilege, and the statement displays all database names. If the value is OFF, SHOW DATABASES is permitted to all users, but displays the names of only those databases for which the user has the SHOW DATABASES or other privilege.
**Description**: BigQuery by default encrypts the data at rest by employing Envelope Encryption using Google managed cryptographic keys. The data is encrypted using data encryption keys, and the data encryption keys themselves are further encrypted using key encryption keys.
-This is seamless and do not require any additional input from the user.
+This is seamless and doesn't require any additional input from the user.
However, if you want to have greater control, Customer-managed encryption keys (CMEK) can be used as an encryption key management solution for BigQuery Data Sets. BigQuery by default encrypts the data at rest by employing Envelope Encryption using Google managed cryptographic keys.
- This is seamless and does not require any additional input from the user.
+ This is seamless and doesn't require any additional input from the user.
For greater control over the encryption, customer-managed encryption keys (CMEK) can be used as an encryption key management solution for BigQuery Data Sets. Setting a Default Customer-managed encryption key (CMEK) for a data set ensures that any tables created in the future will use the specified CMEK if none other is provided.
-Note: Google does not store your keys on its servers and cannot access your protected data unless you provide the key.
-This also means that if you forget or lose your key, there is no way for Google to recover the key or to recover any data encrypted with the lost key.
+Note: Google doesn't store your keys on its servers and can't access your protected data unless you provide the key.
+This also means that if you forget or lose your key, there's no way for Google to recover the key or to recover any data encrypted with the lost key.
**Severity**: Medium
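Setting a dataset-level default CMEK can be done with the google-cloud-bigquery client library. A minimal Python sketch; the dataset ID and Cloud KMS key name are placeholders, and the BigQuery service account must have permission to use the key:

```python
from google.cloud import bigquery  # pip install google-cloud-bigquery

# Placeholder names; use your own dataset and Cloud KMS key.
DATASET_ID = "my-project.my_dataset"
KMS_KEY = "projects/my-project/locations/us/keyRings/my-ring/cryptoKeys/my-key"

client = bigquery.Client()
dataset = client.get_dataset(DATASET_ID)

# New tables in this dataset will default to the CMEK unless they override it.
dataset.default_encryption_configuration = bigquery.EncryptionConfiguration(
    kms_key_name=KMS_KEY
)
client.update_dataset(dataset, ["default_encryption_configuration"])
```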
**Description**: BigQuery by default encrypts the data at rest by employing Envelope Encryption using Google managed cryptographic keys. The data is encrypted using data encryption keys, and the data encryption keys themselves are further encrypted using key encryption keys.
- This is seamless and do not require any additional input from the user.
+ This is seamless and doesn't require any additional input from the user.
However, if you want to have greater control, Customer-managed encryption keys (CMEK) can be used as an encryption key management solution for BigQuery Data Sets. If CMEK is used, the CMEK is used to encrypt the data encryption keys instead of using Google-managed encryption keys. BigQuery by default encrypts the data at rest by employing Envelope Encryption using Google managed cryptographic keys.
-This is seamless and does not require any additional input from the user.
+This is seamless and doesn't require any additional input from the user.
For greater control over the encryption, customer-managed encryption keys (CMEK) can be used as an encryption key management solution for BigQuery tables. The CMEK is used to encrypt the data encryption keys instead of using Google-managed encryption keys. BigQuery stores the table and CMEK association and the encryption/decryption is done automatically. Applying the Default Customer-managed keys on BigQuery data sets ensures that all the new tables created in the future will be encrypted using CMEK, but existing tables need to be updated to use CMEK individually.
-Note: Google does not store your keys on its servers and cannot access your protected data unless you provide the key.
- This also means that if you forget or lose your key, there is no way for Google to recover the key or to recover any data encrypted with the lost key.
+Note: Google doesn't store your keys on its servers and can't access your protected data unless you provide the key.
+ This also means that if you forget or lose your key, there's no way for Google to recover the key or to recover any data encrypted with the lost key.
**Severity**: Medium ### [Ensure that BigQuery datasets are not anonymously or publicly accessible](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dab1eea3-7693-4da3-af1b-2f73832655fa)
-**Description**: It is recommended that the IAM policy on BigQuery datasets does not allow anonymous and/or public access.
+**Description**: It's recommended that the IAM policy on BigQuery datasets doesn't allow anonymous and/or public access.
Granting permissions to allUsers or allAuthenticatedUsers allows anyone to access the dataset. Such access might not be desirable if sensitive data is being stored in the dataset.
- Therefore, ensure that anonymous and/or public access to a dataset is not allowed.
+ Therefore, ensure that anonymous and/or public access to a dataset isn't allowed.
**Severity**: High ### [Ensure that Cloud SQL database instances are configured with automated backups](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/afaac6e6-6240-48a2-9f62-4e257b851311)
-**Description**: It is recommended to have all SQL database instances set to enable automated backups.
+**Description**: It's recommended to have all SQL database instances set to enable automated backups.
Backups provide a way to restore a Cloud SQL instance to recover lost data or recover from a problem with that instance. Automated backups need to be set for any instance that contains data that should be protected from loss or damage.
- This recommendation is applicable for SQL Server, PostgreSql, MySql generation 1 and MySql generation 2 instances.
+ This recommendation is applicable for SQL Server, PostgreSQL, MySQL generation 1, and MySQL generation 2 instances.
**Severity**: High
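Automated backups are part of the instance settings in the SQL Admin API. A minimal Python sketch that turns them on, assuming google-api-python-client and Application Default Credentials; the names and the backup window are placeholders:

```python
from googleapiclient import discovery

PROJECT, INSTANCE = "my-project", "my-sql-instance"  # placeholders

sqladmin = discovery.build("sqladmin", "v1beta4")

# Enable daily automated backups starting in the 03:00 UTC window.
body = {
    "settings": {
        "backupConfiguration": {
            "enabled": True,
            "startTime": "03:00",
        }
    }
}
sqladmin.instances().patch(project=PROJECT, instance=INSTANCE, body=body).execute()
```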
**Description**: Database Server should accept connections only from trusted Network(s)/IP(s) and restrict access from the world. To minimize attack surface on a Database server instance, only trusted/known and required IP(s) should be approved to connect to it.
- An authorized network should not have IPs/networks configured to "0.0.0.0/0" which will allow access to the instance from anywhere in the world. Note that authorized networks apply only to instances with public IPs.
+ An authorized network shouldn't have IPs/networks configured to "0.0.0.0/0", which will allow access to the instance from anywhere in the world. Note that authorized networks apply only to instances with public IPs.
**Severity**: High ### [Ensure that Cloud SQL database instances do not have public IPs](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1658239d-caf7-471d-83c5-2e4c44afdcff)
-**Description**: It is recommended to configure Second Generation Sql instance to use private IPs instead of public IPs.
- To lower the organization's attack surface, Cloud SQL databases should not have public IPs.
+**Description**: It's recommended to configure Second Generation SQL instances to use private IPs instead of public IPs.
+ To lower the organization's attack surface, Cloud SQL databases shouldn't have public IPs.
Private IPs provide improved network security and lower latency for your application. **Severity**: High ### [Ensure that Cloud Storage bucket is not anonymously or publicly accessible](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/d8305d96-2aa5-458d-92b7-f8418f5f3328)
-**Description**: It is recommended that IAM policy on Cloud Storage bucket does not allows anonymous or public access.
+**Description**: It's recommended that the IAM policy on a Cloud Storage bucket doesn't allow anonymous or public access.
Allowing anonymous or public access grants permissions to anyone to access bucket content.
- Such access might not be desired if you are storing any sensitive data.
- Hence, ensure that anonymous or public access to a bucket is not allowed.
+ Such access might not be desired if you're storing any sensitive data.
+ Hence, ensure that anonymous or public access to a bucket isn't allowed.
**Severity**: High ### [Ensure that Cloud Storage buckets have uniform bucket-level access enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/64b5cdbc-0633-49af-b63d-a9dc90560196)
-**Description**: It is recommended that uniform bucket-level access is enabled on Cloud Storage buckets.
- It is recommended to use uniform bucket-level access to unify and simplify how you grant access to your Cloud Storage resources.
+**Description**: It's recommended that uniform bucket-level access is enabled on Cloud Storage buckets.
+ It's recommended to use uniform bucket-level access to unify and simplify how you grant access to your Cloud Storage resources.
Cloud Storage offers two systems for granting users permission to access your buckets and objects: Cloud Identity and Access Management (Cloud IAM) and Access Control Lists (ACLs). These systems act in parallel: in order for a user to access a Cloud Storage resource, only one of the systems needs to grant the user permission.
In order to support a uniform permissioning system, Cloud Storage has uniform bucket-level access. Using this feature disables ACLs for all Cloud Storage resources: access to Cloud Storage resources is then granted exclusively through Cloud IAM.
- Enabling uniform bucket-level access guarantees that if a Storage bucket is not publicly accessible,
+ Enabling uniform bucket-level access guarantees that if a Storage bucket isn't publicly accessible,
no object in the bucket is publicly accessible either. **Severity**: Medium ### [Ensure that Compute instances have Confidential Computing enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/171e9492-73a7-43de-adce-6bd0a3cf6045)
-**Description**: Google Cloud encrypts data at-rest and in-transit, but customer data must be decrypted for processing. Confidential Computing is a breakthrough technology which encrypts data in-use-while it is being processed.
+**Description**: Google Cloud encrypts data at rest and in transit, but customer data must be decrypted for processing. Confidential Computing is a breakthrough technology that encrypts data in use, while it's being processed.
Confidential Computing environments keep data encrypted in memory and elsewhere outside the central processing unit (CPU). Confidential VMs leverage the Secure Encrypted Virtualization (SEV) feature of AMD EPYC CPUs.
- Customer data will stay encrypted while it is used, indexed, queried, or trained on.
- Encryption keys are generated in hardware, per VM, and not exportable. Thanks to built-in hardware optimizations of both performance and security, there is no significant performance penalty to Confidential Computing workloads.
-Confidential Computing enables customers' sensitive code and other data encrypted in memory during processing. Google does not have access to the encryption keys.
+ Customer data will stay encrypted while it's used, indexed, queried, or trained on.
+ Encryption keys are generated in hardware, per VM, and not exportable. Thanks to built-in hardware optimizations of both performance and security, there's no significant performance penalty to Confidential Computing workloads.
+Confidential Computing keeps customers' sensitive code and other data encrypted in memory during processing. Google doesn't have access to the encryption keys.
Confidential VM can help alleviate concerns about risk related to either dependency on Google infrastructure or Google insiders' access to customer data in the clear. **Severity**: High
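Whether a VM runs as a Confidential VM is visible on the instance resource. A minimal Python sketch that reads it, assuming google-api-python-client, Application Default Credentials, and placeholder names:

```python
from googleapiclient import discovery

PROJECT, ZONE, INSTANCE = "my-project", "us-central1-a", "my-vm"  # placeholders

compute = discovery.build("compute", "v1")
instance = compute.instances().get(
    project=PROJECT, zone=ZONE, instance=INSTANCE
).execute()

# Confidential VMs report enableConfidentialCompute under
# confidentialInstanceConfig; an absent block means the feature is off.
config = instance.get("confidentialInstanceConfig", {})
print("Confidential Computing enabled:", config.get("enableConfidentialCompute", False))
```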
### [Ensure that retention policies on log buckets are configured using Bucket Lock](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/07ca1398-d477-400a-a9fc-4cfc78f723f9) **Description**: Enabling retention policies on log buckets will protect logs stored in cloud storage buckets from being overwritten or accidentally deleted.
- It is recommended to set up retention policies and configure Bucket Lock on all storage buckets that are used as log sinks.
- Logs can be exported by creating one or more sinks that include a log filter and a destination. As Stackdriver Logging receives new log entries, they are compared against each sink.
+ It's recommended to set up retention policies and configure Bucket Lock on all storage buckets that are used as log sinks.
+ Logs can be exported by creating one or more sinks that include a log filter and a destination. As Stackdriver Logging receives new log entries, they're compared against each sink.
If a log entry matches a sink's filter, then a copy of the log entry is written to the destination. Sinks can be configured to export logs in storage buckets.
- It is recommended to configure a data retention policy for these cloud storage buckets and to lock the data retention policy; thus permanently preventing the policy from being reduced or removed.
+ It's recommended to configure a data retention policy for these cloud storage buckets and to lock the data retention policy, thus permanently preventing the policy from being reduced or removed.
This way, if the system is ever compromised by an attacker or a malicious insider who wants to cover their tracks, the activity logs are definitely preserved for forensics and security investigations. **Severity**: Low ### [Ensure that the Cloud SQL database instance requires all incoming connections to use SSL](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/13872d43-aac6-4018-9c89-507b8fe9be54)
-**Description**: It is recommended to enforce all incoming connections to SQL database instance to use SSL.
+**Description**: It's recommended to enforce all incoming connections to SQL database instance to use SSL.
SQL database connections, if successfully intercepted (MITM), can reveal sensitive data such as credentials, database queries, and query outputs.
- For security, it is recommended to always use SSL encryption when connecting to your instance.
- This recommendation is applicable for Postgresql, MySql generation 1 and MySql generation 2 instances.
+ For security, it's recommended to always use SSL encryption when connecting to your instance.
+ This recommendation is applicable for PostgreSQL, MySQL generation 1, and MySQL generation 2 instances.
**Severity**: High ### [Ensure that the 'contained database authentication' database flag for Cloud SQL on the SQL Server instance is set to 'off'](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/658ce98f-ecf1-4c14-967f-3c4faf130fbf)
-**Description**: It is recommended to set "contained database authentication" database flag for Cloud SQL on the SQL Server instance is set to "off".
+**Description**: It's recommended to set "contained database authentication" database flag for Cloud SQL on the SQL Server instance is set to "off."
A contained database includes all database settings and metadata required to define the database and has no configuration dependencies on the instance of the Database Engine where the database is installed. Users can connect to the database without authenticating a login at the Database Engine level. Isolating the database from the Database Engine makes it possible to easily move the database to another instance of SQL Server.
### [Ensure that the 'cross db ownership chaining' database flag for Cloud SQL SQL Server instance is set to 'off'](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/26973a34-79a6-46a0-874f-358c8c00af05)
-**Description**: It is recommended to set "cross db ownership chaining" database flag for Cloud SQL SQL Server instance to "off".
+**Description**: It's recommended to set "cross db ownership chaining" database flag for Cloud SQL SQL Server instance to "off."
Use the "cross db ownership" for chaining option to configure cross-database ownership chaining for an instance of Microsoft SQL Server. This server option allows you to control cross-database ownership chaining at the database level or to allow cross-database ownership chaining for all databases.
- Enabling "cross db ownership" is not recommended unless all of the databases hosted by the instance of SQL Server must participate in cross-database ownership chaining and you are aware of the security implications of this setting.
+ Enabling "cross db ownership" isn't recommended unless all of the databases hosted by the instance of SQL Server must participate in cross-database ownership chaining and you're aware of the security implications of this setting.
This recommendation is applicable to SQL Server database instances. **Severity**: Medium ### [Ensure that the 'local_infile' database flag for a Cloud SQL Mysql instance is set to 'off'](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/633a87f4-bd71-45ce-9eca-c6bb8cbe8b21)
-**Description**: It is recommended to set the local_infile database flag for a Cloud SQL MySQL instance to off.
+**Description**: It's recommended to set the local_infile database flag for a Cloud SQL MySQL instance to off.
The local_infile flag controls the server-side LOCAL capability for LOAD DATA statements. Depending on the local_infile setting, the server refuses or permits local data loading by clients that have LOCAL enabled on the client side. To explicitly cause the server to refuse LOAD DATA LOCAL statements (regardless of how client programs and libraries are configured at build time or runtime), start mysqld with local_infile disabled. local_infile can also be set at runtime.
-Due to security issues associated with the local_infile flag, it is recommended to disable it. This recommendation is applicable to MySQL database instances.
+Due to security issues associated with the local_infile flag, it's recommended to disable it. This recommendation is applicable to MySQL database instances.
**Severity**: Medium ### [Ensure that the log metric filter and alerts exist for Cloud Storage IAM permission changes](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/2e14266c-76ea-4479-915e-4edaae7d78ec)
-**Description**: It is recommended that a metric filter and alarm be established for Cloud Storage Bucket IAM changes.
-Monitoring changes to cloud storage bucket permissions may reduce the time needed to detect and correct permissions on sensitive cloud storage buckets and objects inside the bucket.
+**Description**: It's recommended that a metric filter and alarm be established for Cloud Storage Bucket IAM changes.
+Monitoring changes to cloud storage bucket permissions might reduce the time needed to detect and correct permissions on sensitive cloud storage buckets and objects inside the bucket.
**Severity**: Low ### [Ensure that the log metric filter and alerts exist for SQL instance configuration changes](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/9dce022e-f7f9-4725-8a63-c0d4a868b4d3)
-**Description**: It is recommended that a metric filter and alarm be established for SQL instance configuration changes.
-Monitoring changes to SQL instance configuration changes may reduce the time needed to detect and correct misconfigurations done on the SQL server.
-Below are a few of the configurable options which may the impact security posture of an SQL instance:
+**Description**: It's recommended that a metric filter and alarm be established for SQL instance configuration changes.
+Monitoring changes to SQL instance configuration changes might reduce the time needed to detect and correct misconfigurations done on the SQL server.
+Below are a few of the configurable options that might impact the security posture of an SQL instance:
-- Enable auto backups and high availability: Misconfiguration may adversely impact business continuity, disaster recovery, and high availability-- Authorize networks: Misconfiguration may increase exposure to untrusted networks
+- Enable auto backups and high availability: Misconfiguration might adversely impact business continuity, disaster recovery, and high availability
+- Authorize networks: Misconfiguration might increase exposure to untrusted networks
**Severity**: Low ### [Ensure that there are only GCP-managed service account keys for each service account](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/6991b2e9-ae9e-4e99-acb6-037c4b575215)
-**Description**: User managed service accounts should not have user-managed keys.
- Anyone who has access to the keys will be able to access resources through the service account. GCP-managed keys are used by Cloud Platform services such as App Engine and Compute Engine. These keys cannot be downloaded. Google will keep the keys and automatically rotate them on an approximately weekly basis.
+**Description**: User managed service accounts shouldn't have user-managed keys.
+ Anyone who has access to the keys will be able to access resources through the service account. GCP-managed keys are used by Cloud Platform services such as App Engine and Compute Engine. These keys can't be downloaded. Google will keep the keys and automatically rotate them on an approximately weekly basis.
User-managed keys are created, downloadable, and managed by users. They expire 10 years from creation.
-For user-managed keys, the user has to take ownership of key management activities which include:
+For user-managed keys, the user has to take ownership of key management activities, which include:
- Key storage
- Key distribution
- Protecting the keys from unauthorized users
- Key recovery
-Even with key owner precautions, keys can be easily leaked by common development malpractices like checking keys into the source code or leaving them in the Downloads directory, or accidentally leaving them on support blogs/channels. It is recommended to prevent user-managed service account keys.
+Even with key owner precautions, keys can be easily leaked by common development malpractices like checking keys into the source code or leaving them in the Downloads directory, or accidentally leaving them on support blogs/channels. It's recommended to prevent user-managed service account keys.
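As a quick check, a minimal sketch that lists only user-managed keys (the service account address and project are hypothetical):

```bash
# System-managed (GCP-managed) keys are excluded by --managed-by=user.
gcloud iam service-accounts keys list \
    --iam-account=my-sa@my-project.iam.gserviceaccount.com \
    --managed-by=user
```

An empty result means the service account has only GCP-managed keys.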
**Severity**: Low

### [Ensure 'user connections' database flag for Cloud SQL SQL Server instance is set as appropriate](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/91f55b07-083c-4ec5-a2be-4b52bbc2e2df)
-**Description**: It is recommended to set "user connections" database flag for Cloud SQL SQL Server instance according organization-defined value.
+**Description**: It's recommended to set the "user connections" database flag for Cloud SQL SQL Server instances according to an organization-defined value.
The "user connections" option specifies the maximum number of simultaneous user connections that are allowed on an instance of SQL Server.
- The actual number of user connections allowed also depends on the version of SQL Server that you are using, and also the limits of your application or applications and hardware.
+ The actual number of user connections allowed also depends on the version of SQL Server that you're using, and also the limits of your application or applications and hardware.
SQL Server allows a maximum of 32,767 user connections. Because user connections is a dynamic (self-configuring) option, SQL Server adjusts the maximum number of user connections automatically as needed, up to the maximum value allowable. For example, if only 10 users are logged in, 10 user connection objects are allocated.
- In most cases, you do not have to change the value for this option.
+ In most cases, you don't have to change the value for this option.
The default is 0, which means that the maximum (32,767) user connections are allowed. This recommendation is applicable to SQL Server database instances.
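As a sketch, assuming a hypothetical instance name and an organization-chosen limit of 500, the flag could be set like this:

```bash
# Note: --database-flags replaces the full set of flags, so include any
# flags that are already configured on the instance.
gcloud sql instances patch my-sqlserver-instance \
    --database-flags="user connections=500"
```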
### [Ensure 'user options' database flag for Cloud SQL SQL Server instance is not configured](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/fab1e680-86f0-4616-bee9-1b7394e49ade)
-**Description**: It is recommended that, "user options" database flag for Cloud SQL SQL Server instance should not be configured.
+**Description**: It's recommended that the "user options" database flag for Cloud SQL SQL Server instances shouldn't be configured.
The "user options" option specifies global defaults for all users. A list of default query processing options is established for the duration of a user's work session.
- The user options option allows you to change the default values of the SET options (if the server's default settings are not appropriate).
+ The user options option allows you to change the default values of the SET options (if the server's default settings aren't appropriate).
A user can override these defaults by using the SET statement. You can configure user options dynamically for new logins.
- After you change the setting of user options, new login sessions use the new setting; current login sessions are not affected.
+ After you change the setting of user options, new login sessions use the new setting; current login sessions aren't affected.
This recommendation is applicable to SQL Server database instances.

**Severity**: Low
### [Over-provisioned identities in projects should be investigated to reduce the Permission Creep Index (PCI)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/a6cd9b98-3b29-4213-b880-43f0b0897b83)
-**Description**: Over-provisioned identities in projects should be investigated to reduce the Permission Creep Index (PCI) and to safeguard your infrastructure. Reduce the PCI by removing the unused high risk permission assignments. High PCI reflects risk associated with the identities with permissions that exceed their normal or required usage
+**Description**: Over-provisioned identities in projects should be investigated to reduce the Permission Creep Index (PCI) and to safeguard your infrastructure. Reduce the PCI by removing the unused high risk permission assignments. High PCI reflects risk associated with the identities with permissions that exceed their normal or required usage.
**Severity**: Medium
### [Cryptographic keys should not have more than three users](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/24eb0365-d63d-43c0-b11f-8b0a1a0842f7)
-**Description**: This recommendation evaluates IAM policies for key rings, projects, and organizations, and retrieves principals with roles that allow them to encrypt, decrypt or sign data using Cloud KMS keys: roles/owner, roles/cloudkms.cryptoKeyEncrypterDecrypter, roles/cloudkms.cryptoKeyEncrypter, roles/cloudkms.cryptoKeyDecrypter, roles/cloudkms.signer, and roles/cloudkms.signerVerifier.
+**Description**: This recommendation evaluates IAM policies for key rings, projects, and organizations, and retrieves principals with roles that allow them to encrypt, decrypt, or sign data using Cloud KMS keys: roles/owner, roles/cloudkms.cryptoKeyEncrypterDecrypter, roles/cloudkms.cryptoKeyEncrypter, roles/cloudkms.cryptoKeyDecrypter, roles/cloudkms.signer, and roles/cloudkms.signerVerifier.
**Severity**: Medium

### [Ensure API keys are not created for a project](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/29ed3416-2035-4d44-986e-0bcbb7de172e)
-**Description**: Keys are insecure because they can be viewed publicly, such as from within a browser, or they can be accessed on a device where the key resides. It is recommended to use standard authentication flow instead.
+**Description**: Keys are insecure because they can be viewed publicly, such as from within a browser, or they can be accessed on a device where the key resides. It's recommended to use standard authentication flow instead.
Security risks involved in using API-Keys appear below:
 1. API keys are simple encrypted strings
- 2. API keys do not identify the user or the application making the API request
+ 2. API keys don't identify the user or the application making the API request
3. API keys are typically accessible to clients, making it easy to discover and steal an API key
- To avoid the security risk in using API keys, it is recommended to use standard authentication flow instead.
+ To avoid the security risk in using API keys, it's recommended to use standard authentication flow instead.
**Severity**: High

### [Ensure API keys are restricted to only APIs that application needs access](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/54d3b0ae-67b3-4fee-9ac4-f6c784b9d16b)
-**Description**: API keys are insecure because they can be viewed publicly, such as from within a browser, or they can be accessed on a device where the key resides. It is recommended to restrict API keys to use (call) only APIs required by an application.
+**Description**: API keys are insecure because they can be viewed publicly, such as from within a browser, or they can be accessed on a device where the key resides. It's recommended to restrict API keys to use (call) only APIs required by an application.
Security risks involved in using API-Keys are below:
 1. API keys are simple encrypted strings
- 2. API keys do not identify the user or the application making the API request
+ 2. API keys don't identify the user or the application making the API request
3. API keys are typically accessible to clients, making it easy to discover and steal an API key
-In light of these potential risks, Google recommends using the standard authentication flow instead of API-Keys. However, there are limited cases where API keys are more appropriate. For example, if there is a mobile application that needs to use the Google Cloud Translation API, but doesn't otherwise need a backend server, API keys are the simplest way to authenticate to that API.
+In light of these potential risks, Google recommends using the standard authentication flow instead of API-Keys. However, there are limited cases where API keys are more appropriate. For example, if there's a mobile application that needs to use the Google Cloud Translation API, but doesn't otherwise need a backend server, API keys are the simplest way to authenticate to that API.
In order to reduce attack surfaces by providing least privileges, API-Keys can be restricted to use (call) only APIs required by an application.
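For example, a minimal sketch that restricts a key to a single API; the key resource name and target service are hypothetical:

```bash
# Restrict the key so it can call only the Cloud Translation API.
gcloud services api-keys update \
    projects/my-project/locations/global/keys/my-key-id \
    --api-target=service=translate.googleapis.com
```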
### [Ensure API keys are restricted to use by only specified Hosts and Apps](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/63e0e2db-70c3-4edc-becf-93961d3156ed)
-**Description**: Unrestricted keys are insecure because they can be viewed publicly, such as from within a browser, or they can be accessed on a device where the key resides. It is recommended to restrict API key usage to trusted hosts, HTTP referrers and apps.
+**Description**: Unrestricted keys are insecure because they can be viewed publicly, such as from within a browser, or they can be accessed on a device where the key resides. It's recommended to restrict API key usage to trusted hosts, HTTP referrers, and apps.
Security risks involved in using API-Keys appear below:
 1. API keys are simple encrypted strings
- 2. API keys do not identify the user or the application making the API request
+ 2. API keys don't identify the user or the application making the API request
 3. API keys are typically accessible to clients, making it easy to discover and steal an API key
In light of these potential risks, Google recommends using the standard authentication flow instead of API keys. However, there are limited cases where API keys are more appropriate.
-For example, if there is a mobile application that needs to use the Google Cloud Translation API, but doesn't otherwise need a backend server, API keys are the simplest way to authenticate to that API.
+For example, if there's a mobile application that needs to use the Google Cloud Translation API, but doesn't otherwise need a backend server, API keys are the simplest way to authenticate to that API.
- In order to reduce attack vectors, API-Keys can be restricted only to trusted hosts, HTTP referrers and applications.
+ In order to reduce attack vectors, API-Keys can be restricted only to trusted hosts, HTTP referrers, and applications.
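A sketch of a referrer restriction, with a hypothetical key resource name and site:

```bash
# Only requests whose HTTP referrer matches the pattern can use the key.
gcloud services api-keys update \
    projects/my-project/locations/global/keys/my-key-id \
    --allowed-referrers="https://www.example.com/*"
```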
**Severity**: High

### [Ensure API keys are rotated every 90 days](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/fbc1ef5d-989e-4b64-8e9d-221b422f9c43)
-**Description**: It is recommended to rotate API keys every 90 days.
+**Description**: It's recommended to rotate API keys every 90 days.
Security risks involved in using API-Keys are listed below:
 1. API keys are simple encrypted strings
- 2. API keys do not identify the user or the application making the API request
+ 2. API keys don't identify the user or the application making the API request
3. API keys are typically accessible to clients, making it easy to discover and steal an API key
-Because of these potential risks, Google recommends using the standard authentication flow instead of API Keys. However, there are limited cases where API keys are more appropriate. For example, if there is a mobile application that needs to use the Google Cloud Translation API, but doesn't otherwise need a backend server, API keys are the simplest way to authenticate to that API.
+Because of these potential risks, Google recommends using the standard authentication flow instead of API Keys. However, there are limited cases where API keys are more appropriate. For example, if there's a mobile application that needs to use the Google Cloud Translation API, but doesn't otherwise need a backend server, API keys are the simplest way to authenticate to that API.
- Once a key is stolen, it has no expiration, meaning it may be used indefinitely unless the project owner revokes or regenerates the key. Rotating API keys will reduce the window of opportunity for an access key that is associated with a compromised or terminated account to be used.
+ Once a key is stolen, it has no expiration, meaning it might be used indefinitely unless the project owner revokes or regenerates the key. Rotating API keys will reduce the window of opportunity for an access key that is associated with a compromised or terminated account to be used.
- API keys should be rotated to ensure that data cannot be accessed with an old key that might have been lost, cracked, or stolen.
+ API keys should be rotated to ensure that data can't be accessed with an old key that might have been lost, cracked, or stolen.
**Severity**: High
**Description**: Google Cloud Key Management Service stores cryptographic keys in a hierarchical structure designed for useful and elegant access control management. The format for the rotation schedule depends on the client library that is used.
- For the gcloud command-line tool, the next rotation time must be in "ISO" or "RFC3339" format, and the rotation period must be in the form "INTEGER[UNIT]", where units can be one of seconds (s), minutes (m), hours (h) or days (d).
- Set a key rotation period and starting time. A key can be created with a specified "rotation period", which is the time between when new key versions are generated automatically.
+ For the gcloud command-line tool, the next rotation time must be in "ISO" or "RFC3339" format, and the rotation period must be in the form "INTEGER[UNIT]", where units can be one of seconds (s), minutes (m), hours (h), or days (d).
+ Set a key rotation period and starting time. A key can be created with a specified "rotation period," which is the time between when new key versions are generated automatically.
A key can also be created with a specified next rotation time. A key is a named object representing a "cryptographic key" used for a specific purpose.
- The key material, the actual bits used for "encryption", can change over time as new key versions are created.
- A key is used to protect some "corpus of data". A collection of files could be encrypted with the same key and people with "decrypt" permissions on that key would be able to decrypt those files.
+ The key material, the actual bits used for "encryption," can change over time as new key versions are created.
+ A key is used to protect some "corpus of data." A collection of files could be encrypted with the same key and people with "decrypt" permissions on that key would be able to decrypt those files.
Therefore, it's necessary to make sure the "rotation period" is set to a specific time.

**Severity**: Medium
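As a sketch of the rotation settings described in the preceding recommendation (the key, key ring, and next rotation time are hypothetical):

```bash
# 90-day rotation period in the INTEGER[UNIT] form; next rotation in RFC3339 format.
gcloud kms keys update my-key \
    --keyring=my-keyring \
    --location=global \
    --rotation-period=90d \
    --next-rotation-time=2025-01-01T00:00:00Z
```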
- Permissions for actions that modify the state of all GCP services within the project
- Manage roles and permissions for a project and all resources within the project
- Set up billing for a project
- Granting the owner role to a member (user/Service-Account) will allow that member to modify the Identity and Access Management (IAM) policy. Therefore, grant the owner role only if the member has a legitimate purpose to manage the IAM policy. This is because the project IAM policy contains sensitive access control data. Having a minimal set of users allowed to manage IAM policy will simplify any auditing that may be necessary.
+ Granting the owner role to a member (user/Service-Account) will allow that member to modify the Identity and Access Management (IAM) policy. Therefore, grant the owner role only if the member has a legitimate purpose to manage the IAM policy. This is because the project IAM policy contains sensitive access control data. Having a minimal set of users allowed to manage IAM policy will simplify any auditing that might be necessary.
Project ownership has the highest level of privileges on a project. To avoid misuse of project resources, the project ownership assignment/change actions mentioned above should be monitored and alerted to concerned recipients.
- Sending project ownership invites
- Acceptance/Rejection of project ownership invite by user
**Description**: Enabling OS login binds SSH certificates to IAM users and facilitates effective SSH certificate management. Enabling osLogin ensures that SSH keys used to connect to instances are mapped with IAM users. Revoking access to an IAM user will revoke all the SSH keys associated with that particular user.
-It facilitates centralized and automated SSH key pair management which is useful in handling cases like response to compromised SSH key pairs and/or revocation of external/third-party/Vendor users.
-To find out which instance causes the project to be unhealthy see recommendation "Ensure oslogin is enabled for all instances".
+It facilitates centralized and automated SSH key pair management, which is useful in handling cases like response to compromised SSH key pairs and/or revocation of external/third-party/Vendor users.
+To find out which instance causes the project to be unhealthy, see the recommendation "Ensure oslogin is enabled for all instances."
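A minimal sketch that enables OS Login for every VM in a project (instance-level metadata can still override this unless that's separately prevented):

```bash
# Project-wide metadata; applies to all instances that don't override it.
gcloud compute project-info add-metadata \
    --metadata enable-oslogin=TRUE
```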
**Severity**: Medium
**Description**: Enabling OS login binds SSH certificates to IAM users and facilitates effective SSH certificate management. Enabling osLogin ensures that SSH keys used to connect to instances are mapped with IAM users. Revoking access to an IAM user will revoke all the SSH keys associated with that particular user.
-It facilitates centralized and automated SSH key pair management which is useful in handling cases like response to compromised SSH key pairs and/or revocation of external/third-party/Vendor users.
+It facilitates centralized and automated SSH key pair management, which is useful in handling cases like response to compromised SSH key pairs and/or revocation of external/third-party/Vendor users.
**Severity**: Medium

### [Ensure that Cloud Audit Logging is configured properly across all services and all users from a project](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/0b9173aa-68d9-4581-814f-fab4a91aa9af)
-**Description**: It is recommended that Cloud Audit Logging is configured to track all admin activities and read, write access to user data.
+**Description**: It's recommended that Cloud Audit Logging is configured to track all admin activities and read, write access to user data.
Cloud Audit Logging maintains two audit logs for each project, folder, and organization: Admin Activity and Data Access.
1. Admin Activity logs contain log entries for API calls or other administrative actions that modify the configuration or metadata of resources.
- Admin Activity audit logs are enabled for all services and cannot be configured.
+ Admin Activity audit logs are enabled for all services and can't be configured.
1. Data Access audit logs record API calls that create, modify, or read user-provided data. These are disabled by default and should be enabled. There are three kinds of Data Access audit log information:
-- Admin read: Records operations that read metadata or configuration information. Admin Activity audit logs record writes of metadata and configuration information that cannot be disabled.
+- Admin read: Records operations that read metadata or configuration information. Admin Activity audit logs record writes of metadata and configuration information that can't be disabled.
- Data read: Records operations that read user-provided data.
- Data write: Records operations that write user-provided data.
- It is recommended to have an effective default audit config configured in such a way that:
+ It's recommended to have an effective default audit config configured in such a way that:
1. logtype is set to DATA_READ (to log user activity tracking) and DATA_WRITES (to log changes/tampering to user data).
1. audit config is enabled for all the services supported by the Data Access audit logs feature.
- 1. Logs should be captured for all users, i.e., there are no exempted users in any of the audit config sections. This will ensure overriding the audit config will not contradict the requirement.
+ 1. Logs should be captured for all users, that is, there are no exempted users in any of the audit config sections. This ensures that overriding the audit config won't contradict the requirement.
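One way to review and apply such a default audit configuration, sketched with a hypothetical project ID (the auditConfigs section itself is edited in the exported JSON):

```bash
# Export the current IAM policy, including any auditConfigs section.
gcloud projects get-iam-policy my-project --format=json > policy.json
# Edit policy.json so auditConfigs covers allServices with DATA_READ and
# DATA_WRITE log types and no exemptedMembers, then apply it back.
gcloud projects set-iam-policy my-project policy.json
```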
**Severity**: Medium

### [Ensure that Cloud KMS cryptokeys are not anonymously or publicly accessible](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/fcbcaef9-4bb0-49db-a932-afd64ed221d4)
-**Description**: It is recommended that the IAM policy on Cloud KMS "cryptokeys" should restrict anonymous and/or public access.
+**Description**: It's recommended that the IAM policy on Cloud KMS "cryptokeys" restrict anonymous and/or public access.
Granting permissions to "allUsers" or "allAuthenticatedUsers" allows anyone to access the dataset. Such access might not be desirable if sensitive data is stored at the location.
- In this case, ensure that anonymous and/or public access to a Cloud KMS "cryptokey" is not allowed.
+ In this case, ensure that anonymous and/or public access to a Cloud KMS "cryptokey" isn't allowed.
**Severity**: High

### [Ensure that corporate login credentials are used](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/67ebdf6b-6197-4e42-bbbf-eaf4e6c20b4c)

**Description**: Use corporate login credentials instead of personal accounts, such as Gmail accounts.
- It is recommended fully managed corporate Google accounts be used for increased visibility, auditing, and controlling access to Cloud Platform resources.
- Gmail accounts based outside of the user's organization, such as personal accounts, should not be used for business purposes.
+ It's recommended that fully managed corporate Google accounts be used for increased visibility, auditing, and controlling access to Cloud Platform resources.
+ Gmail accounts based outside of the user's organization, such as personal accounts, shouldn't be used for business purposes.
**Severity**: High

### [Ensure that IAM users are not assigned the Service Account User or Service Account Token Creator roles at project level](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/54c381fe-a80a-4038-8a9d-c166d2922ea9)
-**Description**: It is recommended to assign the "Service Account User (iam.serviceAccountUser)" and "Service Account Token Creator (iam.serviceAccountTokenCreator)" roles to a user for a specific service account rather than assigning the role to a user at project level.
+**Description**: It's recommended to assign the "Service Account User (iam.serviceAccountUser)" and "Service Account Token Creator (iam.serviceAccountTokenCreator)" roles to a user for a specific service account rather than assigning the role to a user at project level.
A service account is a special Google account that belongs to an application or a virtual machine (VM), instead of to an individual end-user. Application/VM-Instance uses the service account to call the service's Google API so that users aren't directly involved. In addition to being an identity, a service account is a resource that has IAM policies attached to it. These policies determine who can use the service account. Users with IAM roles to update the App Engine and Compute Engine instances (such as App Engine Deployer or Compute Instance Admin) can effectively run code as the service accounts used to run these instances, and indirectly gain access to all the resources for which the service accounts have access.
- Similarly, SSH access to a Compute Engine instance may also provide the ability to execute code as that instance/Service account.
+ Similarly, SSH access to a Compute Engine instance might also provide the ability to execute code as that instance/Service account.
Based on business needs, there could be multiple user-managed service accounts configured for a project.
- Granting the "iam.serviceAccountUser" or "iam.serviceAserviceAccountTokenCreatorccountUser" roles to a user for a project gives the user access to all service accounts in the project, including service accounts that may be created in the future.
- This can result in elevation of privileges by using service accounts and corresponding "Compute Engine instances".
- In order to implement "least privileges" best practices, IAM users should not be assigned the "Service Account User" or "Service Account Token Creator" roles at the project level. Instead, these roles should be assigned to a user for a specific service account, giving that user access to the service account. The "Service Account User" allows a user to bind a service account to a long-running job service, whereas the "Service Account Token Creator" role allows a user to directly impersonate (or assert) the identity of a service account.
+ Granting the "iam.serviceAccountUser" or "iam.serviceAccountTokenCreator" roles to a user for a project gives the user access to all service accounts in the project, including service accounts that might be created in the future.
+ This can result in elevation of privileges by using service accounts and corresponding "Compute Engine instances."
+ In order to implement "least privileges" best practices, IAM users shouldn't be assigned the "Service Account User" or "Service Account Token Creator" roles at the project level. Instead, these roles should be assigned to a user for a specific service account, giving that user access to the service account. The "Service Account User" allows a user to bind a service account to a long-running job service, whereas the "Service Account Token Creator" role allows a user to directly impersonate (or assert) the identity of a service account.
**Severity**: Medium

### [Ensure that Separation of duties is enforced while assigning KMS related roles to users](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/14007242-eadd-4d15-ad54-97201351c0ec)
-**Description**: It is recommended that the principle of 'Separation of Duties' is enforced while assigning KMS related roles to users.
+**Description**: It's recommended that the principle of 'Separation of Duties' is enforced while assigning KMS related roles to users.
The built-in/predefined IAM role "Cloud KMS Admin" allows the user/identity to create, delete, and manage service account(s). The built-in/predefined IAM role "Cloud KMS CryptoKey Encrypter/Decrypter" allows the user/identity (with adequate privileges on concerned resources) to encrypt and decrypt data at rest using an encryption key(s). The built-in/predefined IAM role Cloud KMS CryptoKey Encrypter allows the user/identity (with adequate privileges on concerned resources) to encrypt data at rest using an encryption key(s). The built-in/predefined IAM role "Cloud KMS CryptoKey Decrypter" allows the user/identity (with adequate privileges on concerned resources) to decrypt data at rest using an encryption key(s).
- Separation of duties is the concept of ensuring that one individual does not have all necessary permissions to be able to complete a malicious action.
- In Cloud KMS, this could be an action such as using a key to access and decrypt data a user should not normally have access to.
+ Separation of duties is the concept of ensuring that one individual doesn't have all necessary permissions to be able to complete a malicious action.
+ In Cloud KMS, this could be an action such as using a key to access and decrypt data a user shouldn't normally have access to.
Separation of duties is a business control typically used in larger organizations, meant to help avoid security or privacy incidents and errors.
- It is considered best practice. No user(s) should have Cloud KMS Admin and any of the "Cloud KMS CryptoKey Encrypter/Decrypter", "Cloud KMS CryptoKey Encrypter", "Cloud KMS CryptoKey Decrypter" roles assigned at the same time.
+ It's considered best practice. No user(s) should have Cloud KMS Admin and any of the "Cloud KMS CryptoKey Encrypter/Decrypter", "Cloud KMS CryptoKey Encrypter", or "Cloud KMS CryptoKey Decrypter" roles assigned at the same time.
**Severity**: High

### [Ensure that Separation of duties is enforced while assigning service account related roles to users](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/9e8cb9ac-87ee-424b-a9d2-0d41e411d18f)
-**Description**: It is recommended that the principle of 'Separation of Duties' is enforced while assigning service-account related roles to users.
+**Description**: It's recommended that the principle of 'Separation of Duties' is enforced while assigning service-account related roles to users.
The built-in/predefined IAM role "Service Account admin" allows the user/identity to create, delete, and manage service account(s). The built-in/predefined IAM role "Service Account User" allows the user/identity (with adequate privileges on Compute and App Engine) to assign service account(s) to Apps/Compute Instances.
- Separation of duties is the concept of ensuring that one individual does not have all necessary permissions to be able to complete a malicious action.
- In Cloud IAM - service accounts, this could be an action such as using a service account to access resources that user should not normally have access to.
- Separation of duties is a business control typically used in larger organizations, meant to help avoid security or privacy incidents and errors. It is considered best practice.
+ Separation of duties is the concept of ensuring that one individual doesn't have all necessary permissions to be able to complete a malicious action.
+ In Cloud IAM - service accounts, this could be an action such as using a service account to access resources that the user shouldn't normally have access to.
+ Separation of duties is a business control typically used in larger organizations, meant to help avoid security or privacy incidents and errors. It's considered best practice.
No user should have "Service Account Admin" and "Service Account User" roles assigned at the same time.

**Severity**: Medium
### [Ensure that sinks are configured for all log entries](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/194b473e-7c5a-4754-b1ae-76591fe11b5c)
-**Description**: It is recommended to create a sink that will export copies of all the log entries. This can help aggregate logs from multiple projects and export them to a Security Information and Event Management (SIEM).
- Log entries are held in Stackdriver Logging. To aggregate logs, export them to a SIEM. To keep them longer, it is recommended to set up a log sink. Exporting involves writing a filter that selects the log entries to export, and choosing a destination in Cloud Storage, BigQuery, or Cloud Pub/Sub.
- The filter and destination are held in an object called a sink. To ensure all log entries are exported to sinks, ensure that there is no filter configured for a sink. Sinks can be created in projects, organizations, folders, and billing accounts.
+**Description**: It's recommended to create a sink that will export copies of all the log entries. This can help aggregate logs from multiple projects and export them to a Security Information and Event Management (SIEM).
+ Log entries are held in Stackdriver Logging. To aggregate logs, export them to a SIEM. To keep them longer, it's recommended to set up a log sink. Exporting involves writing a filter that selects the log entries to export, and choosing a destination in Cloud Storage, BigQuery, or Cloud Pub/Sub.
+ The filter and destination are held in an object called a sink. To ensure all log entries are exported to sinks, ensure that there's no filter configured for a sink. Sinks can be created in projects, organizations, folders, and billing accounts.
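A minimal sketch of a project-level sink with no filter, so every log entry is exported (the sink and bucket names are hypothetical):

```bash
# Omitting --log-filter exports all log entries to the destination.
gcloud logging sinks create all-logs-sink \
    storage.googleapis.com/my-log-bucket
```

The command output includes the sink's writer identity, which must be granted write access on the destination.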
**Severity**: Low
### [Ensure that the log metric filter and alerts exist for Custom Role changes](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ba27e90d-311d-409d-8c69-7dfac0a1351c)
-**Description**: It is recommended that a metric filter and alarm be established for changes to Identity and Access Management (IAM) role creation, deletion and updating activities.
+**Description**: It's recommended that a metric filter and alarm be established for changes to Identity and Access Management (IAM) role creation, deletion, and updating activities.
Google Cloud IAM provides predefined roles that give granular access to specific Google Cloud Platform resources and prevent unwanted access to other resources. However, to cater to organization-specific needs, Cloud IAM also provides the ability to create custom roles. Project owners and administrators with the Organization Role Administrator role or the IAM Role Administrator role can create custom roles. Monitoring role creation, deletion, and updating activities will help in identifying any over-privileged role at early stages.

**Severity**: Low
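One possible filter for this metric, patterned on common CIS guidance (the metric name and project are hypothetical, and the method list can be tuned):

```bash
gcloud logging metrics create iam-custom-role-changes \
    --project=my-project \
    --description="Counts IAM custom role create/delete/update events" \
    --log-filter='resource.type="iam_role" AND (protoPayload.methodName="google.iam.admin.v1.CreateRole" OR protoPayload.methodName="google.iam.admin.v1.DeleteRole" OR protoPayload.methodName="google.iam.admin.v1.UpdateRole")'
```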
### [Ensure user-managed/external keys for service accounts are rotated every 90 days or less](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/0007dd31-9e95-460d-82bd-ae3e9e623161)

**Description**: Service Account keys consist of a key ID (Private_key_Id) and Private key, which are used to sign programmatic requests users make to Google cloud services accessible to that particular service account.
- It is recommended that all Service Account keys are regularly rotated.
- Rotating Service Account keys will reduce the window of opportunity for an access key that is associated with a compromised or terminated account to be used. Service Account keys should be rotated to ensure that data cannot be accessed with an old key that might have been lost, cracked, or stolen.
- Each service account is associated with a key pair managed by Google Cloud Platform (GCP). It is used for service-to-service authentication within GCP. Google rotates the keys daily.
- GCP provides the option to create one or more user-managed (also called external key pairs) key pairs for use from outside GCP (for example, for use with Application Default Credentials). When a new key pair is created, the user is required to download the private key (which is not retained by Google). </br> With external keys, users are responsible for keeping the private key secure and other management operations such as key rotation. External keys can be managed by the IAM API, gcloud command-line tool, or the Service Accounts page in the Google Cloud Platform Console.</br> GCP facilitates up to 10 external service account keys per service account to facilitate key rotation.
+ It's recommended that all Service Account keys are regularly rotated.
+ Rotating Service Account keys will reduce the window of opportunity for an access key that is associated with a compromised or terminated account to be used. Service Account keys should be rotated to ensure that data can't be accessed with an old key that might have been lost, cracked, or stolen.
+ Each service account is associated with a key pair managed by Google Cloud Platform (GCP). It's used for service-to-service authentication within GCP. Google rotates the keys daily.
+ GCP provides the option to create one or more user-managed key pairs (also called external key pairs) for use from outside GCP (for example, for use with Application Default Credentials). When a new key pair is created, the user is required to download the private key (which isn't retained by Google).
+
+With external keys, users are responsible for keeping the private key secure and other management operations such as key rotation. External keys can be managed by the IAM API, gcloud command-line tool, or the Service Accounts page in the Google Cloud Platform Console.
+
+GCP facilitates up to 10 external service account keys per service account to facilitate key rotation.
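To surface keys that are due for rotation, a sketch listing user-managed keys with their creation times (the service account address is hypothetical):

```bash
# validAfterTime is effectively the key creation time; old keys stand out.
gcloud iam service-accounts keys list \
    --iam-account=my-sa@my-project.iam.gserviceaccount.com \
    --managed-by=user \
    --format="table(name, validAfterTime)"
```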
**Severity**: Medium
### [Unused identities in your GCP environment should be removed (Preview)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/257e9506-fd47-4123-a8ef-92017f845906)
-**Description**: It is imperative to identify unused identities as they pose significant security risks. These identities often involve bad practices, such as excessive permissions and mismanaged keys that leaves organizations open to credential misuse or exploitation and increases your resource`s attack surface. Inactive identities are human and non-human entities that have not performed any action on any resource in the last 90 days. Service account keys can become a security risk if not managed carefully.
+**Description**: It's imperative to identify unused identities as they pose significant security risks. These identities often involve bad practices, such as excessive permissions and mismanaged keys that leave organizations open to credential misuse or exploitation and increase your resource's attack surface. Inactive identities are human and nonhuman entities that haven't performed any action on any resource in the last 90 days. Service account keys can become a security risk if not managed carefully.
**Severity**: Medium

### [GCP overprovisioned identities should have only the necessary permissions (Preview)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/fa210cff-18da-474a-ac60-8f93f7c6f4c9)
-**Description**: An over-provisioned active identity is an identity that has access to privileges that they have not used. Over-provisioned active identities, especially for non-human accounts that have very defined actions and responsibilities, can increase the blast radius in the event of a user, key, or resource compromise The principle of least privilege states that a resource should only have access to the exact resources it needs in order to function. This principle was developed to address the risk of compromised identities granting an attacker access to a wide range of resources.
+**Description**: An over-provisioned active identity is an identity with access to privileges it hasn't used. Over-provisioned active identities, especially for nonhuman accounts that have well-defined actions and responsibilities, can increase the blast radius in the event of a user, key, or resource compromise. The principle of least privilege states that a resource should only have access to the exact resources it needs in order to function. This principle was developed to address the risk of compromised identities granting an attacker access to a wide range of resources.
**Severity**: Medium
### [Egress deny rule should be set on a firewall to block unwanted outbound traffic](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/2acc6ce9-c9a7-4d91-b7c8-f2314ecbf8af)
-**Description**: This recommendation evaluates whether the destinationRanges property in the firewall is set to 0.0.0.0/0 and the denied property contains the key-value pair, 'IPProtocol': 'all'.
+**Description**: This recommendation evaluates whether the destinationRanges property in the firewall is set to 0.0.0.0/0 and the denied property contains the key-value pair, 'IPProtocol': 'all'.
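For example, a sketch of a rule with exactly that shape (the rule and network names are hypothetical):

```bash
# Low-priority deny-all egress; higher-priority allow rules can carve out exceptions.
gcloud compute firewall-rules create deny-all-egress \
    --network=my-vpc \
    --direction=EGRESS \
    --action=DENY \
    --rules=all \
    --destination-ranges=0.0.0.0/0 \
    --priority=65534
```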
**Severity**: Low
**Description**: Access to VMs should be restricted by firewall rules that allow only IAP traffic by ensuring only connections proxied by the IAP are allowed. To ensure that load balancing works correctly, health checks should also be allowed. IAP ensures that access to VMs is controlled by authenticating incoming requests.
- However if the VM is still accessible from IP addresses other than the IAP it may still be possible to send unauthenticated requests to the instance.
- Care must be taken to ensure that loadblancer health checks are not blocked as this would stop the loadbalancer from correctly knowing the health of the VM and loadbalancing correctly.
+ However, if the VM is still accessible from IP addresses other than the IAP, it might still be possible to send unauthenticated requests to the instance.
+ Care must be taken to ensure that load balancer health checks aren't blocked, as this would stop the load balancer from correctly knowing the health of the VM and load balancing correctly.
**Severity**: Medium

### [Ensure legacy networks do not exist for a project](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/44995f9b-5963-4a92-8e99-6d68acbc187c)
-**Description**: In order to prevent use of legacy networks, a project should not have a legacy network configured.
+**Description**: In order to prevent use of legacy networks, a project shouldn't have a legacy network configured.
Legacy networks have a single network IPv4 prefix range and a single gateway IP address for the whole network. The network is global in scope and spans all cloud regions.
- Subnetworks cannot be created in a legacy network and are unable to switch from legacy to auto or custom subnet networks. Legacy networks can have an impact for high network traffic projects and are subject to a single point of contention or failure.
+ Subnetworks can't be created in a legacy network, and a legacy network can't be switched to an auto or custom subnet network. Legacy networks can have an impact on projects with high network traffic and are subject to a single point of contention or failure.
**Severity**: Medium
The performance hit is dependent on the configuration of the environment and the host name resolution setup. This parameter can only be set in the "postgresql.conf" file or on the server command line. Logging hostnames can incur overhead on server performance because, for each statement logged, DNS resolution is required to convert the IP address to a hostname.
- Depending on the setup, this may be non-negligible.
+ Depending on the setup, this might be non-negligible.
Additionally, the IP addresses that are logged can be resolved to their DNS names later when reviewing the logs, excluding cases where dynamic hostnames are used. This recommendation is applicable to PostgreSQL database instances.
**Description**: Secure Sockets Layer (SSL) policies determine which Transport Layer Security (TLS) features clients are permitted to use when connecting to load balancers. To prevent usage of insecure features, SSL policies should use (a) at least TLS 1.2 with the MODERN profile; or (b) the RESTRICTED profile, because it effectively requires clients to use TLS 1.2 regardless of the chosen minimum TLS version;
- or (3) a CUSTOM profile that does not support any of the following features:
+ or (c) a CUSTOM profile that doesn't support any of the following features:
TLS_RSA_WITH_AES_128_GCM_SHA256
TLS_RSA_WITH_AES_256_GCM_SHA384
TLS_RSA_WITH_AES_128_CBC_SHA
GCP customers can configure load balancer SSL policies with a minimum TLS version (1.0, 1.1, or 1.2) that clients can use to establish a connection, along with a profile (Compatible, Modern, Restricted, or Custom) that specifies permissible cipher suites. To accommodate users of outdated protocols, GCP load balancers can be configured to permit insecure cipher suites. In fact, the GCP default SSL policy uses a minimum TLS version of 1.0 and a Compatible profile, which allows the widest range of insecure cipher suites.
- As a result, it is easy for customers to configure a load balancer without even knowing that they are permitting outdated cipher suites.
+ As a result, it's easy for customers to configure a load balancer without even knowing that they're permitting outdated cipher suites.
**Severity**: Medium
**Description**: Cloud DNS logging records the queries from the name servers within your VPC to Stackdriver. Logged queries can come from Compute Engine VMs, GKE containers, or other GCP resources provisioned within the VPC.
-Security monitoring and forensics cannot depend solely on IP addresses from VPC flow logs, especially when considering the dynamic IP usage of cloud resources, HTTP virtual host routing,
+Security monitoring and forensics can't depend solely on IP addresses from VPC flow logs, especially when considering the dynamic IP usage of cloud resources, HTTP virtual host routing,
and other technology that can obscure the DNS name used by a client from the IP address. Monitoring of Cloud DNS logs provides visibility to DNS names requested by the clients within the VPC. These logs can be monitored for anomalous domain names and evaluated against threat intelligence.
### [Ensure that DNSSEC is enabled for Cloud DNS](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/33509176-9e4d-4238-84ec-984ba67019fa)
-**Description**: Cloud Domain Name System (DNS) is a fast, reliable and cost-effective domain name system that powers millions of domains on the internet.
+**Description**: Cloud Domain Name System (DNS) is a fast, reliable, and cost-effective domain name system that powers millions of domains on the internet.
Domain Name System Security Extensions (DNSSEC) in Cloud DNS enables domain owners to take easy steps to protect their domains against DNS hijacking and man-in-the-middle and other attacks. Domain Name System Security Extensions (DNSSEC) adds security to the DNS protocol by enabling DNS responses to be validated. Having a trustworthy DNS that translates a domain name like `www.example.com` into its associated IP address is an increasingly important building block of today's web-based applications. Attackers can hijack this process of domain/IP lookup and redirect users to a malicious site through DNS hijacking and man-in-the-middle attacks. DNSSEC helps mitigate the risk of such attacks by cryptographically signing DNS records.
- As a result, it prevents attackers from issuing fake DNS responses that may misdirect browsers to nefarious websites.
+ As a result, it prevents attackers from issuing fake DNS responses that might misdirect browsers to nefarious websites.
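A minimal sketch for an existing public zone (the zone name is hypothetical):

```bash
# Turns on DNSSEC signing for the managed zone.
gcloud dns managed-zones update my-zone --dnssec-state=on
```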
**Severity**: Medium

### [Ensure that RDP access is restricted from the Internet](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/8bc8464f-f32a-4b3c-954e-48f9db2d9bcf)

**Description**: GCP Firewall Rules are specific to a VPC Network. Each rule either allows or denies traffic when its conditions are met. Its conditions allow users to specify the type of traffic, such as ports and protocols, and the source or destination of the traffic, including IP addresses, subnets, and instances.
-Firewall rules are defined at the VPC network level and are specific to the network in which they are defined. The rules themselves cannot be shared among networks. Firewall rules only support IPv4 traffic.
+Firewall rules are defined at the VPC network level and are specific to the network in which they're defined. The rules themselves can't be shared among networks. Firewall rules only support IPv4 traffic.
When specifying a source for an ingress rule or a destination for an egress rule by address, an IPv4 address or IPv4 block in CIDR notation can be used. Generic (0.0.0.0/0) incoming traffic from the Internet to a VPC or VM instance using RDP on Port 3389 can be avoided. GCP Firewall Rules within a VPC Network apply to outgoing (egress) traffic from instances and incoming (ingress) traffic to instances in the network. Egress and ingress traffic flows are controlled even if the traffic stays within the network (for example, instance-to-instance communication). For an instance to have outgoing Internet access, the network must have a valid Internet gateway route or custom route whose destination IP is specified.
### [Ensure that RSASHA1 is not used for the key-signing key in Cloud DNS DNSSEC](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/87356ecc-b718-442d-af22-677bceaeae06)
-**Description**: DNSSEC algorithm numbers in this registry may be used in CERT RRs.
+**Description**: DNSSEC algorithm numbers in this registry might be used in CERT RRs.
Zone signing (DNSSEC) and transaction security mechanisms (SIG(0) and TSIG) make use of particular subsets of these algorithms. The algorithm used for key signing should be a recommended one and it should be strong.
- Domain Name System Security Extensions (DNSSEC) algorithm numbers in this registry may be used in CERT RRs.
+ Domain Name System Security Extensions (DNSSEC) algorithm numbers in this registry might be used in CERT RRs.
Zone signing (DNSSEC) and transaction security mechanisms (SIG(0) and TSIG) make use of particular subsets of these algorithms. The algorithm used for key signing should be a recommended one and it should be strong. When enabling DNSSEC for a managed zone, or creating a managed zone with DNSSEC, the user can select the DNSSEC signing algorithms and the denial-of-existence type.
- Changing the DNSSEC settings is only effective for a managed zone if DNSSEC is not already enabled.
- If there is a need to change the settings for a managed zone where it has been enabled, turn DNSSEC off and then re-enable it with different settings.
+ Changing the DNSSEC settings is only effective for a managed zone if DNSSEC isn't already enabled.
+ If there's a need to change the settings for a managed zone where it has been enabled, turn off DNSSEC and then re-enable it with different settings.
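Sketching that off-then-on sequence with a stronger key-signing algorithm; the zone name and algorithm choice are illustrative, and flag support should be verified against the installed gcloud version:

```bash
# DNSSEC must be off before the signing algorithms can be changed.
gcloud dns managed-zones update my-zone --dnssec-state=off
gcloud dns managed-zones update my-zone --dnssec-state=on \
    --ksk-algorithm=rsasha256 --ksk-key-length=2048
```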
**Severity**: Medium

### [Ensure that RSASHA1 is not used for the zone-signing key in Cloud DNS DNSSEC](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/117ad72e-fed7-4dc8-995d-39919b9ba2d9)
-**Description**: DNSSEC algorithm numbers in this registry may be used in CERT RRs.
+**Description**: DNSSEC algorithm numbers in this registry might be used in CERT RRs.
Zone signing (DNSSEC) and transaction security mechanisms (SIG(0) and TSIG) make use of particular subsets of these algorithms. The algorithm used for key signing should be a recommended one and it should be strong.
- DNSSEC algorithm numbers in this registry may be used in CERT RRs.
+ DNSSEC algorithm numbers in this registry might be used in CERT RRs.
Zone signing (DNSSEC) and transaction security mechanisms (SIG(0) and TSIG) make use of particular subsets of these algorithms. The algorithm used for key signing should be a recommended one and it should be strong.
- When enabling DNSSEC for a managed zone, or creating a managed zone with DNSSEC, the DNSSEC signing algorithms and the denial-of-existence type can be selected.
- Changing the DNSSEC settings is only effective for a managed zone if DNSSEC is not already enabled.
- If the need exists to change the settings for a managed zone where it has been enabled, turn DNSSEC off and then re-enable it with different settings.
+ When enabling DNSSEC for a managed zone, or creating a managed zone with DNSSEC, the DNSSEC signing algorithms and the denial-of-existence type can be selected.
+ Changing the DNSSEC settings is only effective for a managed zone if DNSSEC isn't already enabled.
+ If the need exists to change the settings for a managed zone where it has been enabled, turn off DNSSEC and then re-enable it with different settings.
**Severity**: Medium

### [Ensure that SSH access is restricted from the internet](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/9f88a5b8-2853-4b3f-a4c7-33f225cae99a)

**Description**: GCP Firewall Rules are specific to a VPC Network. Each rule either allows or denies traffic when its conditions are met. Its conditions allow the user to specify the type of traffic, such as ports and protocols, and the source or destination of the traffic, including IP addresses, subnets, and instances.
-Firewall rules are defined at the VPC network level and are specific to the network in which they are defined. The rules themselves cannot be shared among networks. Firewall rules only support IPv4 traffic.
+Firewall rules are defined at the VPC network level and are specific to the network in which they're defined. The rules themselves can't be shared among networks. Firewall rules only support IPv4 traffic.
When specifying a source for an ingress rule or a destination for an egress rule by address, only an IPv4 address or IPv4 block in CIDR notation can be used. Generic (0.0.0.0/0) incoming traffic from the internet to VPC or VM instance using SSH on Port 22 can be avoided. GCP Firewall Rules within a VPC Network apply to outgoing (egress) traffic from instances and incoming (ingress) traffic to instances in the network. Egress and ingress traffic flows are controlled even if the traffic stays within the network (for example, instance-to-instance communication). For an instance to have outgoing Internet access, the network must have a valid Internet gateway route or custom route whose destination IP is specified.
-This route simply defines the path to the Internet, to avoid the most general (0.0.0.0/0) destination IP Range specified from the Internet through SSH with the default Port '22'.
+This route simply defines the path to the Internet, to avoid the most general (0.0.0.0/0) destination IP Range specified from the Internet through SSH with the default Port '22'.
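For example, a sketch that allows SSH only from a trusted range instead of 0.0.0.0/0 (the rule name, network, and source range are hypothetical):

```bash
# 203.0.113.0/24 is a documentation range standing in for a corporate CIDR.
gcloud compute firewall-rules create allow-ssh-from-corp \
    --network=my-vpc \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:22 \
    --source-ranges=203.0.113.0/24
```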
Generic access from the Internet to a specific IP Range needs to be restricted.

**Severity**: High

### [Ensure that the default network does not exist in a project](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ea1989f3-de6c-4389-8b6c-c8b9a3df1595)
-**Description**: To prevent use of "default" network, a project should not have a "default" network.
+**Description**: To prevent use of "default" network, a project shouldn't have a "default" network.
The default network has a preconfigured network configuration and automatically generates the following insecure firewall rules:
- default-allow-internal: Allows ingress connections for all protocols and ports among instances in the network.
- default-allow-ssh: Allows ingress connections on TCP port 22 (SSH) from any source to any instance in the network.
- default-allow-rdp: Allows ingress connections on TCP port 3389 (RDP) from any source to any instance in the network.
- default-allow-icmp: Allows ingress ICMP traffic from any source to any instance in the network.
-These automatically created firewall rules do not get audit logged and cannot be configured to enable firewall rule logging.
+These automatically created firewall rules don't get audit logged and can't be configured to enable firewall rule logging.
Furthermore, the default network is an auto mode network, which means that its subnets use the same predefined range of IP addresses, and as a result, it's not possible to use Cloud VPN or VPC Network Peering with the default network. Based on organization security and networking requirements, the organization should create a new network and delete the default network.
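Once workloads run on a purpose-built network, the cleanup step is a single command (assuming nothing still references the network):

```bash
# Fails safely if resources still use the network.
gcloud compute networks delete default
```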
### [Ensure that the log metric filter and alerts exist for VPC network changes](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/59aef38a-19c2-4663-97a7-4c82a98dbab5)
-**Description**: It is recommended that a metric filter and alarm be established for Virtual Private Cloud (VPC) network changes.
-It is possible to have more than one VPC within a project. In addition, it is also possible to create a peer connection between two VPCs enabling network traffic to route between VPCs.
-Monitoring changes to a VPC will help ensure VPC traffic flow is not getting impacted.
+**Description**: It's recommended that a metric filter and alarm be established for Virtual Private Cloud (VPC) network changes.
+It's possible to have more than one VPC within a project. In addition, it's also possible to create a peer connection between two VPCs enabling network traffic to route between VPCs.
+Monitoring changes to a VPC will help ensure VPC traffic flow isn't getting impacted.
**Severity**: Low
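As a hedged illustration of the metric-filter half of this recommendation, the sketch below creates a logs-based metric with the `google-cloud-logging` client library. The filter string is illustrative only and should be aligned with the audit-log method names you need to monitor; the alert itself would then be defined on the metric in Cloud Monitoring.

```python
# Sketch: create a logs-based metric for VPC network changes.
from google.cloud import logging

# Matches admin activity on VPC networks (illustrative filter, verify before use).
VPC_CHANGE_FILTER = (
    'resource.type="gce_network" AND '
    'protoPayload.methodName:"compute.networks."'
)

client = logging.Client(project="my-project")          # placeholder project ID
metric = client.metric(
    "vpc-network-changes",
    filter_=VPC_CHANGE_FILTER,
    description="Counts VPC network create/update/delete events",
)
if not metric.exists():
    metric.create()
# An alert policy on this metric is then configured in Cloud Monitoring.
```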
### [Ensure that the log metric filter and alerts exist for VPC Network Firewall rule changes](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/4a7723f9-ee51-4a2b-a4e5-2497a20c1964)

-**Description**: It is recommended that a metric filter and alarm be established for Virtual Private Cloud (VPC) Network Firewall rule changes.
-Monitoring for Create or Update Firewall rule events gives insight to network access changes and may reduce the time it takes to detect suspicious activity.
+**Description**: It's recommended that a metric filter and alarm be established for Virtual Private Cloud (VPC) Network Firewall rule changes.
+Monitoring for Create or Update Firewall rule events gives insight to network access changes and might reduce the time it takes to detect suspicious activity.
**Severity**: Low

### [Ensure that the log metric filter and alerts exist for VPC network route changes](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/b5c8e32b-a400-4d4b-8d2d-c5afbd4a6997)
-**Description**: It is recommended that a metric filter and alarm be established for Virtual Private Cloud (VPC) network route changes.
+**Description**: It's recommended that a metric filter and alarm be established for Virtual Private Cloud (VPC) network route changes.
Google Cloud Platform (GCP) routes define the paths network traffic takes from a VM instance to another destination. The other destination can be inside the organization VPC network (such as another VM) or outside of it. Every route consists of a destination and a next hop. Traffic whose destination IP is within the destination range is sent to the next hop for delivery. Monitoring changes to route tables will help ensure that all VPC traffic flows through an expected path.
### [Ensure that the 'log_connections' database flag for Cloud SQL PostgreSQL instance is set to 'on'](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/4016e27f-a451-4e24-9222-39d7d107ad74)
-**Description**: Enabling the log_connections setting causes each attempted connection to the server to be logged, along with successful completion of client authentication. This parameter cannot be changed after the session starts.
-PostgreSQL does not log attempted connections by default. Enabling the log_connections setting will create log entries for each attempted connection as well as successful completion of client authentication which can be useful in troubleshooting issues and to determine any unusual connection attempts to the server.
+**Description**: Enabling the log_connections setting causes each attempted connection to the server to be logged, along with successful completion of client authentication. This parameter can't be changed after the session starts.
+PostgreSQL doesn't log attempted connections by default. Enabling the log_connections setting will create log entries for each attempted connection as well as successful completion of client authentication, which can be useful in troubleshooting issues and to determine any unusual connection attempts to the server.
This recommendation is applicable to PostgreSQL database instances.

**Severity**: Medium
### [Ensure that the 'log_disconnections' database flag for Cloud SQL PostgreSQL instance is set to 'on'](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/a86f62be-7402-4797-91dc-8ba2b976cb74)

**Description**: Enabling the log_disconnections setting logs the end of each session, including the session duration.
-PostgreSQL does not log session details such as duration and session end by default. Enabling the log_disconnections setting will create log entries at the end of each session which can be useful in troubleshooting issues and determine any unusual activity across a time period.
+PostgreSQL doesn't log session details such as duration and session end by default. Enabling the log_disconnections setting will create log entries at the end of each session, which can be useful in troubleshooting issues and determine any unusual activity across a time period.
The log_disconnections and log_connections settings work hand in hand and, generally, the pair would be enabled or disabled together. This recommendation is applicable to PostgreSQL database instances.

**Severity**: Medium
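The following sketch checks the log_connections and log_disconnections flags covered by the two recommendations above on a Cloud SQL instance; the instance name is a placeholder, and the JSON shape follows `gcloud sql instances describe`.

```python
# Sketch: verify a Cloud SQL PostgreSQL instance logs connections/disconnections.
import json
import subprocess

INSTANCE = "my-postgres-instance"  # placeholder

out = subprocess.run(
    ["gcloud", "sql", "instances", "describe", INSTANCE, "--format=json"],
    capture_output=True, text=True, check=True,
).stdout
flags = {
    f["name"]: f["value"]
    for f in json.loads(out).get("settings", {}).get("databaseFlags", [])
}

for flag in ("log_connections", "log_disconnections"):
    state = flags.get(flag, "not set")
    print(f"{flag}: {state}" + ("" if state == "on" else "  <-- should be 'on'"))
```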
### [Ensure that VPC Flow Logs is enabled for every subnet in a VPC Network](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/25631aaa-3866-43ac-860f-22c12bff1a4b)

**Description**: Flow Logs is a feature that enables users to capture information about the IP traffic going to and from network interfaces in the organization's VPC Subnets. Once a flow log is created, the user can view and retrieve its data in Stackdriver Logging.
- It is recommended that Flow Logs be enabled for every business-critical VPC subnet.
+ It's recommended that Flow Logs be enabled for every business-critical VPC subnet.
VPC networks and subnetworks provide logically isolated and secure network partitions where GCP resources can be launched. When Flow Logs is enabled for a subnet, VMs within that subnet start reporting on all Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) flows. Each VM samples the TCP and UDP flows it sees, inbound and outbound, whether the flow is to or from another VM, a host in the on-premises datacenter, a Google service, or a host on the Internet. If two GCP VMs are communicating, and both are in subnets that have VPC Flow Logs enabled, both VMs report the flows. Flow Logs supports the following use cases:
1. Network monitoring.
2. Understanding network usage and optimizing network traffic expenses.
3. Network forensics.
4. Real-time security analysis.
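A short sketch of how one might list subnets that still have Flow Logs disabled follows; the field names follow the JSON emitted by `gcloud compute networks subnets list --format=json` and should be treated as assumptions to verify against your gcloud version.

```python
# Sketch: report subnets whose Flow Logs are not enabled (field names assumed).
import json

with open("subnets.json") as f:          # pre-exported subnet list (placeholder)
    subnets = json.load(f)

for subnet in subnets:
    log_config = subnet.get("logConfig", {})
    enabled = log_config.get("enable", subnet.get("enableFlowLogs", False))
    if not enabled:
        print(f"Flow Logs disabled: {subnet['name']} ({subnet.get('region', '?')})")
```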
- The sourceRanges property contains a combination of IP ranges that includes any non-private IP address and the allowed property contains a combination of rules that permit either all tcp ports or all udp ports.
+ The sourceRanges property contains a combination of IP ranges that includes any nonprivate IP address and the allowed property contains a combination of rules that permit either all tcp ports or all udp ports.
**Severity**: High
defender-for-cloud Recommendations Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference.md
To learn about actions that you can take in response to these recommendations, see [Remediate recommendations in Defender for Cloud](implement-security-recommendations.md).
-Your secure score is based on the number of security recommendations you've completed. To decide which recommendations to resolve first, look at the severity of each recommendation and its potential impact on your secure score.
+Your secure score is based on the number of security recommendations you completed. To decide which recommendations to resolve first, look at the severity of each recommendation and its potential impact on your secure score.
> [!TIP]
> If a recommendation's description says *No related policy*, usually it's because that recommendation is dependent on a different recommendation and *its* policy.
### [API App should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/bf82a334-13b6-ca57-ea75-096fc2ffce50)

**Description**: Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks.
-(Related policy: [API App should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fb7ddfbdc-1260-477d-91fd-98bd9be789a6))
+(Related policy: [API App should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fb7ddfbdc-1260-477d-91fd-98bd9be789a6)).
**Severity**: Medium
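For the App Service HTTPS recommendations in this article (API apps, function apps, and web apps all surface as sites), a minimal sketch with the Azure SDK for Python can report sites that aren't HTTPS-only; the subscription ID is a placeholder.

```python
# Sketch: list App Service sites and report any that allow plain HTTP.
from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient

client = WebSiteManagementClient(DefaultAzureCredential(), "<subscription-id>")

for site in client.web_apps.list():
    if not site.https_only:
        print(f"{site.name}: https_only is disabled")
```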
### [CORS should not allow every resource to access API Apps](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e40df93c-7a7c-1b0a-c787-9987ceb98e54)

**Description**: Cross-Origin Resource Sharing (CORS) should not allow all domains to access your API app. Allow only required domains to interact with your API app.
-(Related policy: [CORS should not allow every resource to access your API App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicydefinitions%2f358c20a6-3f9e-4f0e-97ff-c6ce485e2aac))
+(Related policy: [CORS should not allow every resource to access your API App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicydefinitions%2f358c20a6-3f9e-4f0e-97ff-c6ce485e2aac)).
**Severity**: Low

### [CORS should not allow every resource to access Function Apps](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/7b3d4796-9400-2904-692b-4a5ede7f0a1e)

**Description**: Cross-Origin Resource Sharing (CORS) should not allow all domains to access your Function app. Allow only required domains to interact with your Function app.
-(Related policy: [CORS should not allow every resource to access your Function Apps](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicydefinitions%2f0820b7b9-23aa-4725-a1ce-ae4558f718e5))
+(Related policy: [CORS should not allow every resource to access your Function Apps](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicydefinitions%2f0820b7b9-23aa-4725-a1ce-ae4558f718e5)).
**Severity**: Low

### [CORS should not allow every resource to access Web Applications](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/df4d1739-47f0-60c7-1706-3731fea6ab03)

**Description**: Cross-Origin Resource Sharing (CORS) should not allow all domains to access your web application. Allow only required domains to interact with your web app.
-(Related policy: [CORS should not allow every resource to access your Web Applications](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicydefinitions%2f5744710e-cc2f-4ee8-8809-3b11e89f4bc9))
+(Related policy: [CORS should not allow every resource to access your Web Applications](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicydefinitions%2f5744710e-cc2f-4ee8-8809-3b11e89f4bc9)).
**Severity**: Low
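For the three CORS recommendations above, here's a hedged sketch that flags sites whose CORS settings allow every origin (`*`); the `cors`/`allowed_origins` attribute names come from the azure-mgmt-web models, and the subscription ID is a placeholder.

```python
# Sketch: flag App Service sites whose CORS configuration allows any origin.
from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient

client = WebSiteManagementClient(DefaultAzureCredential(), "<subscription-id>")

for site in client.web_apps.list():
    config = client.web_apps.get_configuration(site.resource_group, site.name)
    origins = (config.cors.allowed_origins or []) if config.cors else []
    if "*" in origins:
        print(f"{site.name}: CORS allows every origin")
```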
**Description**: Audit enabling of diagnostic logs on the app. This enables you to recreate activity trails for investigation purposes if a security incident occurs or your network is compromised.
-(No related policy)
+(No related policy).
**Severity**: Medium

### [Ensure API app has Client Certificates Incoming client certificates set to On](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ce2768c3-a7c7-1bbf-22cd-f9db675a9807)

**Description**: Client certificates allow for the app to request a certificate for incoming requests. Only clients that have a valid certificate will be able to reach the app.
-(Related policy: [Ensure API app has 'Client Certificates (Incoming client certificates)' set to 'On'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f0c192fe8-9cbb-4516-85b3-0ade8bd03886))
+(Related policy: [Ensure API app has 'Client Certificates (Incoming client certificates)' set to 'On'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f0c192fe8-9cbb-4516-85b3-0ade8bd03886)).
**Severity**: Medium

### [FTPS should be required in API apps](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/67fc622b-4ce6-8c52-08ae-9f830036b757)

**Description**: Enable FTPS enforcement for enhanced security.
-(Related policy: [FTPS only should be required in your API App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f9a1b8c48-453a-4044-86c3-d8bfd823e4f5))
+(Related policy: [FTPS only should be required in your API App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f9a1b8c48-453a-4044-86c3-d8bfd823e4f5)).
**Severity**: High

### [FTPS should be required in function apps](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/972a6579-f38f-c0b9-1b4b-a5bbeba3ab5b)

**Description**: Enable FTPS enforcement for enhanced security.
-(Related policy: [FTPS only should be required in your Function App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f399b2637-a50f-4f95-96f8-3a145476eb15))
+(Related policy: [FTPS only should be required in your Function App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f399b2637-a50f-4f95-96f8-3a145476eb15)).
**Severity**: High

### [FTPS should be required in web apps](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/19beaa2a-a126-b4dd-6d35-617f6cc83fca)

**Description**: Enable FTPS enforcement for enhanced security.
-(Related policy: [FTPS should be required in your Web App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f4d24b6d4-5e53-4a4f-a7f4-618fa573ee4b))
+(Related policy: [FTPS should be required in your Web App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f4d24b6d4-5e53-4a4f-a7f4-618fa573ee4b)).
**Severity**: High

### [Function App should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/cb0acdc6-0846-fd48-debe-9905af151b6d)

**Description**: Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks.
-(Related policy: [Function App should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f6d555dd1-86f2-4f1c-8ed7-5abae7c6cbab))
+(Related policy: [Function App should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f6d555dd1-86f2-4f1c-8ed7-5abae7c6cbab)).
**Severity**: Medium

### [Function apps should have Client Certificates (Incoming client certificates) enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c2ab4bea-c663-3259-a4cd-03a8feb02825)

**Description**: Client certificates allow for the app to request a certificate for incoming requests. Only clients with valid certificates will be able to reach the app.
-(Related policy: [Function apps should have 'Client Certificates (Incoming client certificates)' enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2feaebaea7-8013-4ceb-9d14-7eb32271373c))
+(Related policy: [Function apps should have 'Client Certificates (Incoming client certificates)' enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2feaebaea7-8013-4ceb-9d14-7eb32271373c)).
**Severity**: Medium
**Description**: Periodically, newer versions are released for Java either due to security flaws or to include additional functionality. Using the latest Java version for API apps is recommended to benefit from security fixes, if any, and/or new functionalities of the latest version.
-(Related policy: [Ensure that 'Java version' is the latest, if used as a part of the API app](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f88999f4c-376a-45c8-bcb3-4058f713cf39))
+(Related policy: [Ensure that 'Java version' is the latest, if used as a part of the API app](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f88999f4c-376a-45c8-bcb3-4058f713cf39)).
**Severity**: Medium
**Description**: For enhanced authentication security, use a managed identity. On Azure, managed identities eliminate the need for developers to manage credentials by providing an identity for the Azure resource in Azure AD and using it to obtain Azure Active Directory (Azure AD) tokens.
-(Related policy: [Managed identity should be used in your API App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fc4d441f8-f9d9-4a9e-9cef-e82117cb3eef))
+(Related policy: [Managed identity should be used in your API App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fc4d441f8-f9d9-4a9e-9cef-e82117cb3eef)).
**Severity**: Medium
**Description**: For enhanced authentication security, use a managed identity. On Azure, managed identities eliminate the need for developers to manage credentials by providing an identity for the Azure resource in Azure AD and using it to obtain Azure Active Directory (Azure AD) tokens.
-(Related policy: [Managed identity should be used in your Function App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f0da106f2-4ca3-48e8-bc85-c638fe6aea8f))
+(Related policy: [Managed identity should be used in your Function App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f0da106f2-4ca3-48e8-bc85-c638fe6aea8f)).
**Severity**: Medium
**Description**: For enhanced authentication security, use a managed identity. On Azure, managed identities eliminate the need for developers to manage credentials by providing an identity for the Azure resource in Azure AD and using it to obtain Azure Active Directory (Azure AD) tokens.
-(Related policy: [Managed identity should be used in your Web App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f2b9ad585-36bc-4615-b300-fd4435808332))
+(Related policy: [Managed identity should be used in your Web App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f2b9ad585-36bc-4615-b300-fd4435808332)).
**Severity**: Medium
Important: Remediating this recommendation will result in charges for protecting your App Service plans. If you don't have any App Service plans in this subscription, no charges will be incurred. If you create any App Service plans on this subscription in the future, they will automatically be protected and charges will begin at that time. Learn more in [Protect your web apps and APIs](/azure/defender-for-cloud/defender-for-app-service-introduction).
-(Related policy: [Azure Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicyDefinitions%2f2913021d-f2fd-4f3d-b958-22354e2bdbcb))
+(Related policy: [Azure Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicyDefinitions%2f2913021d-f2fd-4f3d-b958-22354e2bdbcb)).
**Severity**: High
**Description**: Periodically, newer versions are released for PHP software either due to security flaws or to include additional functionality. Using the latest PHP version for API apps is recommended to benefit from security fixes, if any, and/or new functionalities of the latest version.
-(Related policy: [Ensure that 'PHP version' is the latest, if used as a part of the API app](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f1bc1795e-d44a-4d48-9b3b-6fff0fd5f9ba))
+(Related policy: [Ensure that 'PHP version' is the latest, if used as a part of the API app](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f1bc1795e-d44a-4d48-9b3b-6fff0fd5f9ba)).
**Severity**: Medium
**Description**: Periodically, newer versions are released for Python software either due to security flaws or to include additional functionality. Using the latest Python version for API apps is recommended to benefit from security fixes, if any, and/or new functionalities of the latest version.
-(Related policy: [Ensure that 'Python version' is the latest, if used as a part of the API app](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f74c3584d-afae-46f7-a20a-6f8adba71a16))
+(Related policy: [Ensure that 'Python version' is the latest, if used as a part of the API app](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f74c3584d-afae-46f7-a20a-6f8adba71a16)).
**Severity**: Medium

### [Remote debugging should be turned off for API App](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/9172da4e-9571-6e33-2b5b-d742847f3be7)

**Description**: Remote debugging requires inbound ports to be opened on an API app. Remote debugging should be turned off.
-(Related policy: [Remote debugging should be turned off for API Apps](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicydefinitions%2fe9c8d085-d9cc-4b17-9cdc-059f1f01f19e))
+(Related policy: [Remote debugging should be turned off for API Apps](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicydefinitions%2fe9c8d085-d9cc-4b17-9cdc-059f1f01f19e)).
**Severity**: Low

### [Remote debugging should be turned off for Function App](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/093c685b-56dd-13a3-8ed5-887a001837a2)

**Description**: Remote debugging requires inbound ports to be opened on an Azure Function app. Remote debugging should be turned off.
-(Related policy: [Remote debugging should be turned off for Function Apps](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicydefinitions%2f0e60b895-3786-45da-8377-9c6b4b6ac5f9))
+(Related policy: [Remote debugging should be turned off for Function Apps](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicydefinitions%2f0e60b895-3786-45da-8377-9c6b4b6ac5f9)).
**Severity**: Low

### [Remote debugging should be turned off for Web Applications](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/64b8637e-4e1d-76a9-0fc9-c1e487a97ed8)

**Description**: Remote debugging requires inbound ports to be opened on a web application. Remote debugging is currently enabled. If you no longer need to use remote debugging, it should be turned off.
-(Related policy: [Remote debugging should be turned off for Web Applications](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicydefinitions%2fcb510bfd-1cba-4d9f-a230-cb0976f4bb71))
+(Related policy: [Remote debugging should be turned off for Web Applications](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicydefinitions%2fcb510bfd-1cba-4d9f-a230-cb0976f4bb71)).
**Severity**: Low

### [TLS should be updated to the latest version for API apps](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/5a659d57-117d-bb18-65f6-54e51da1bb9b)

**Description**: Upgrade to the latest TLS version.
-(Related policy: [Latest TLS version should be used in your API App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f8cb6aa8b-9e41-4f4e-aa25-089a7ac2581e))
+(Related policy: [Latest TLS version should be used in your API App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f8cb6aa8b-9e41-4f4e-aa25-089a7ac2581e)).
**Severity**: High

### [TLS should be updated to the latest version for function apps](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/15be5f3c-e0a4-c0fa-fbff-8e50339b4b22)

**Description**: Upgrade to the latest TLS version.
-(Related policy: [Latest TLS version should be used in your Function App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2ff9d614c5-c173-4d56-95a7-b4437057d193))
+(Related policy: [Latest TLS version should be used in your Function App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2ff9d614c5-c173-4d56-95a7-b4437057d193)).
**Severity**: High

### [TLS should be updated to the latest version for web apps](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/2a54c352-7ca4-4bae-ad46-47ecd9595bd2)

**Description**: Upgrade to the latest TLS version.
-(Related policy: [Latest TLS version should be used in your Web App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2ff0e6e85b-9b9f-4a4b-b67b-f730d42f1b0b))
+(Related policy: [Latest TLS version should be used in your Web App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2ff0e6e85b-9b9f-4a4b-b67b-f730d42f1b0b)).
**Severity**: High
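The FTPS, remote debugging, and TLS recommendations above all live in a site's configuration, so one combined audit sketch can surface them together. The expected values (`FtpsOnly` or `Disabled`, TLS `1.2`) reflect the hardened settings these recommendations ask for; the attribute names are assumptions to verify against azure-mgmt-web.

```python
# Sketch: audit FTPS enforcement, remote debugging, and minimum TLS per site.
from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient

client = WebSiteManagementClient(DefaultAzureCredential(), "<subscription-id>")

for site in client.web_apps.list():
    cfg = client.web_apps.get_configuration(site.resource_group, site.name)
    findings = []
    # "Disabled" (no FTP at all) also satisfies the FTPS recommendation.
    if cfg.ftps_state not in ("FtpsOnly", "Disabled"):
        findings.append(f"ftps_state={cfg.ftps_state}")
    if cfg.remote_debugging_enabled:
        findings.append("remote debugging enabled")
    if cfg.min_tls_version != "1.2":
        findings.append(f"min_tls_version={cfg.min_tls_version}")
    if findings:
        print(f"{site.name}: " + ", ".join(findings))
```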
### [Web Application should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1b351b29-41ca-6df5-946c-c190a56be5fe)

**Description**: Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks.
-(Related policy: [Web Application should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fa4af4a39-4135-47fb-b175-47fbdf85311d))
+(Related policy: [Web Application should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fa4af4a39-4135-47fb-b175-47fbdf85311d)).
**Severity**: Medium
**Description**: Client certificates allow for the app to request a certificate for incoming requests. Only clients that have a valid certificate will be able to reach the app.
-(Related policy: [Ensure WEB app has 'Client Certificates (Incoming client certificates)' set to 'On'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f5bb220d9-2698-4ee4-8404-b9c30c9df609))
+(Related policy: [Ensure WEB app has 'Client Certificates (Incoming client certificates)' set to 'On'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f5bb220d9-2698-4ee4-8404-b9c30c9df609)).
**Severity**: Medium
### [Adaptive application controls for defining safe applications should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/35f45c95-27cf-4e52-891f-8390d1de5828)

**Description**: Enable application controls to define the list of known-safe applications running on your machines, and alert you when other applications run. This helps harden your machines against malware. To simplify the process of configuring and maintaining your rules, Defender for Cloud uses machine learning to analyze the applications running on each machine and suggest the list of known-safe applications.
-(Related policy: [Adaptive application controls for defining safe applications should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f47a6b606-51aa-4496-8bb7-64b11cf66adc))
+(Related policy: [Adaptive application controls for defining safe applications should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f47a6b606-51aa-4496-8bb7-64b11cf66adc)).
**Severity**: High

### [Allowlist rules in your adaptive application control policy should be updated](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1234abcd-1b53-4fd4-9835-2c2fa3935313)

**Description**: Monitor for changes in behavior on groups of machines configured for auditing by Defender for Cloud's adaptive application controls. Defender for Cloud uses machine learning to analyze the running processes on your machines and suggest a list of known-safe applications. These are presented as recommended apps to allow in adaptive application control policies.
-(Related policy: [Allowlist rules in your adaptive application control policy should be updated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f123a3936-f020-408a-ba0c-47873faf1534))
+(Related policy: [Allowlist rules in your adaptive application control policy should be updated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f123a3936-f020-408a-ba0c-47873faf1534)).
**Severity**: High

### [Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/22441184-2f7b-d4a0-e00b-4c5eaef4afc9)

**Description**: Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more in [Detailed steps: Create and manage SSH keys for authentication to a Linux VM in Azure](/azure/virtual-machines/linux/create-ssh-keys-detailed).
-(Related policy: [Audit Linux machines that are not using SSH key for authentication](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f630c64f9-8b6b-4c64-b511-6544ceff6fd6))
+(Related policy: [Audit Linux machines that are not using SSH key for authentication](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f630c64f9-8b6b-4c64-b511-6544ceff6fd6)).
**Severity**: Medium
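On the machine itself, a simple local sketch can warn when `sshd` still accepts passwords; note that OpenSSH honors the first occurrence of a directive and defaults to allowing password authentication when the directive is absent.

```python
# Sketch: warn if sshd_config still permits password-based SSH logins.
from pathlib import Path

def password_auth_enabled(config_path: str = "/etc/ssh/sshd_config") -> bool:
    for line in Path(config_path).read_text().splitlines():
        stripped = line.strip()
        if stripped.lower().startswith("passwordauthentication"):
            # sshd uses the first occurrence of a keyword.
            return stripped.split()[1].lower() == "yes"
    return True  # OpenSSH defaults to 'yes' when the directive is absent

if password_auth_enabled():
    print("PasswordAuthentication is enabled; prefer SSH key pairs only.")
```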
### [Automation account variables should be encrypted](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/b12bc79e-4f12-44db-acda-571820191ddc)

**Description**: It is important to enable encryption of Automation account variable assets when storing sensitive data.
-(Related policy: [Automation account variables should be encrypted](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicydefinitions%2f3657f5a0-770e-44a3-b44e-9431ba1e9735))
+(Related policy: [Automation account variables should be encrypted](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicydefinitions%2f3657f5a0-770e-44a3-b44e-9431ba1e9735)).
**Severity**: High
Azure Backup is an Azure-native, cost-effective, data protection solution. It creates recovery points that are stored in geo-redundant recovery vaults. When you restore from a recovery point, you can restore the whole VM or specific files.
-(Related policy: [Azure Backup should be enabled for Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f013e242c-8828-4970-87b3-ab247555486d))
+(Related policy: [Azure Backup should be enabled for Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f013e242c-8828-4970-87b3-ab247555486d)).
**Severity**: Low

### [Container hosts should be configured securely](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/0677209d-e675-2c6f-e91a-54cef2878663)

**Description**: Remediate vulnerabilities in security configuration on machines with Docker installed to protect them from attacks.
-(Related policy: [Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fe8cbc669-f12d-49eb-93e7-9273119e9933))
+(Related policy: [Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fe8cbc669-f12d-49eb-93e7-9273119e9933)).
**Severity**: High

### [Diagnostic logs in Azure Stream Analytics should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/f11b27f2-8c49-5bb4-eff5-e1e5384bf95e)

**Description**: Enable logs and retain them for up to a year. This enables you to recreate activity trails for investigation purposes when a security incident occurs or your network is compromised.
-(Related policy: [Diagnostic logs in Azure Stream Analytics should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2ff9be5368-9bf5-4b84-9e0a-7850da98bb46))
+(Related policy: [Diagnostic logs in Azure Stream Analytics should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2ff9be5368-9bf5-4b84-9e0a-7850da98bb46)).
**Severity**: Low

### [Diagnostic logs in Batch accounts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/32771b45-220c-1a8b-584e-fdd5a2584a66)

**Description**: Enable logs and retain them for up to a year. This enables you to recreate activity trails for investigation purposes when a security incident occurs or your network is compromised.
-(Related policy: [Diagnostic logs in Batch accounts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f428256e6-1fac-4f48-a757-df34c2b3336d))
+(Related policy: [Diagnostic logs in Batch accounts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f428256e6-1fac-4f48-a757-df34c2b3336d)).
**Severity**: Low

### [Diagnostic logs in Event Hubs should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1597605a-0faf-5860-eb74-462ae2e9fc21)

**Description**: Enable logs and retain them for up to a year. This enables you to recreate activity trails for investigation purposes when a security incident occurs or your network is compromised.
-(Related policy: [Diagnostic logs in Event Hubs should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f83a214f7-d01a-484b-91a9-ed54470c9a6a))
+(Related policy: [Diagnostic logs in Event Hubs should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f83a214f7-d01a-484b-91a9-ed54470c9a6a)).
**Severity**: Low

### [Diagnostic logs in Logic Apps should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/91387f44-7e43-4ecc-55f0-46f5adee3dd5)

**Description**: To ensure you can recreate activity trails for investigation purposes when a security incident occurs or your network is compromised, enable logging. If your diagnostic logs aren't being sent to a Log Analytics workspace, Azure Storage account, or Azure Event Hubs, ensure you've configured diagnostic settings to send platform metrics and platform logs to the relevant destinations. Learn more in Create diagnostic settings to send platform logs and metrics to different destinations.
-(Related policy: [Diagnostic logs in Logic Apps should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f34f95f76-5386-4de7-b824-0d8478470c9d))
+(Related policy: [Diagnostic logs in Logic Apps should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f34f95f76-5386-4de7-b824-0d8478470c9d)).
**Severity**: Low

### [Diagnostic logs in Search services should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dea5192e-1bb3-101b-b70c-4646546f5e1e)

**Description**: Enable logs and retain them for up to a year. This enables you to recreate activity trails for investigation purposes when a security incident occurs or your network is compromised.
-(Related policy: [Diagnostic logs in Search services should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fb4330a05-a843-4bc8-bf9a-cacce50c67f4))
+(Related policy: [Diagnostic logs in Search services should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fb4330a05-a843-4bc8-bf9a-cacce50c67f4)).
**Severity**: Low

### [Diagnostic logs in Service Bus should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/f19ab7d9-5ff2-f8fd-ab3b-0bf95dcb6889)

**Description**: Enable logs and retain them for up to a year. This enables you to recreate activity trails for investigation purposes when a security incident occurs or your network is compromised.
-(Related policy: [Diagnostic logs in Service Bus should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2ff8d36e2f-389b-4ee4-898d-21aeb69a0f45))
+(Related policy: [Diagnostic logs in Service Bus should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2ff8d36e2f-389b-4ee4-898d-21aeb69a0f45)).
**Severity**: Low

### [Diagnostic logs in Virtual Machine Scale Sets should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/961eb649-3ea9-f8c2-6595-88e9a3aeedeb)

**Description**: Enable logs and retain them for up to a year. This enables you to recreate activity trails for investigation purposes when a security incident occurs or your network is compromised.
-(Related policy: [Diagnostic logs in Virtual Machine Scale Sets should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f7c1b1214-f927-48bf-8882-84f0af6588b1))
+(Related policy: [Diagnostic logs in Virtual Machine Scale Sets should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f7c1b1214-f927-48bf-8882-84f0af6588b1)).
**Severity**: Low
### [Endpoint protection health issues on virtual machine scale sets should be resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e71020c2-860c-3235-cd39-04f3f8c936d2)

**Description**: Remediate endpoint protection health failures on your virtual machine scale sets to protect them from threats and vulnerabilities.
-(Related policy: [Endpoint protection solution should be installed on virtual machine scale sets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f26a828e1-e88f-464e-bbb3-c134a282b9de))
+(Related policy: [Endpoint protection solution should be installed on virtual machine scale sets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f26a828e1-e88f-464e-bbb3-c134a282b9de)).
**Severity**: Low
### [Endpoint protection should be installed on virtual machine scale sets](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/21300918-b2e3-0346-785f-c77ff57d243b)

**Description**: Install an endpoint protection solution on your virtual machine scale sets, to protect them from threats and vulnerabilities.
-(Related policy: [Endpoint protection solution should be installed on virtual machine scale sets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f26a828e1-e88f-464e-bbb3-c134a282b9de))
+(Related policy: [Endpoint protection solution should be installed on virtual machine scale sets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f26a828e1-e88f-464e-bbb3-c134a282b9de)).
**Severity**: High
### [Guest Configuration extension should be installed on machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/6c99f570-2ce7-46bc-8175-cde013df43bc)
-**Description**: To ensure secure configurations of in-guest settings of your machine, install the Guest Configuration extension. In-guest settings that the extension monitors include the configuration of the operating system, application configuration or presence, and environment settings. Once installed, in-guest policies will be available such as '[Windows Exploit guard should be enabled](https://aka.ms/gcpol)'.
-(Related policy: [Virtual machines should have the Guest Configuration extension](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicyDefinitions%2fae89ebca-1c92-4898-ac2c-9f63decb045c))
+**Description**: To ensure secure configurations of in-guest settings of your machine, install the Guest Configuration extension. In-guest settings that the extension monitors include the configuration of the operating system, application configuration or presence, and environment settings. Once installed, in-guest policies will be available such as [Windows Exploit guard should be enabled](https://aka.ms/gcpol).
+(Related policy: [Virtual machines should have the Guest Configuration extension](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicyDefinitions%2fae89ebca-1c92-4898-ac2c-9f63decb045c)).
**Severity**: Medium

### [Install endpoint protection solution on virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/83f577bd-a1b6-b7e1-0891-12ca19d1e6df)

**Description**: Install an endpoint protection solution on your virtual machines, to protect them from threats and vulnerabilities.
-(Related policy: [Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2faf6cd1bd-1635-48cb-bde7-5b15693900b9))
+(Related policy: [Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2faf6cd1bd-1635-48cb-bde7-5b15693900b9)).
**Severity**: High
### [Log Analytics agent should be installed on virtual machine scale sets](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/45cfe080-ceb1-a91e-9743-71551ed24e94)

**Description**: Defender for Cloud collects data from your Azure virtual machines (VMs) to monitor for security vulnerabilities and threats. Data is collected using the [Log Analytics agent](/azure/azure-monitor/platform/log-analytics-agent), formerly known as the Microsoft Monitoring Agent (MMA), which reads various security-related configurations and event logs from the machine and copies the data to your workspace for analysis. You'll also need to follow that procedure if your VMs are used by an Azure managed service such as Azure Kubernetes Service or Azure Service Fabric. You cannot configure auto-provisioning of the agent for Azure virtual machine scale sets. To deploy the agent on virtual machine scale sets (including those used by Azure managed services such as Azure Kubernetes Service and Azure Service Fabric), follow the procedure in the remediation steps.
-(Related policy: [Log Analytics agent should be installed on your virtual machine scale sets for Azure Security Center monitoring](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fa3a6ea0c-e018-4933-9ef0-5aaa1501449b))
+(Related policy: [Log Analytics agent should be installed on your virtual machine scale sets for Azure Security Center monitoring](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fa3a6ea0c-e018-4933-9ef0-5aaa1501449b)).
**Severity**: High

### [Log Analytics agent should be installed on virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/d1db3318-01ff-16de-29eb-28b344515626)

**Description**: Defender for Cloud collects data from your Azure virtual machines (VMs) to monitor for security vulnerabilities and threats. Data is collected using the [Log Analytics agent](/azure/azure-monitor/platform/log-analytics-agent), formerly known as the Microsoft Monitoring Agent (MMA), which reads various security-related configurations and event logs from the machine and copies the data to your Log Analytics workspace for analysis. This agent is also required if your VMs are used by an Azure managed service such as Azure Kubernetes Service or Azure Service Fabric. We recommend configuring [auto-provisioning](/azure/defender-for-cloud/enable-data-collection) to automatically deploy the agent. If you choose not to use auto-provisioning, manually deploy the agent to your VMs using the instructions in the remediation steps.
-(Related policy: [Log Analytics agent should be installed on your virtual machine for Azure Security Center monitoring](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fa4fe33eb-e377-4efb-ab31-0784311bc499))
+(Related policy: [Log Analytics agent should be installed on your virtual machine for Azure Security Center monitoring](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fa4fe33eb-e377-4efb-ab31-0784311bc499)).
**Severity**: High
### [Machines should be configured securely](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c476dc48-8110-4139-91af-c8d940896b98)

**Description**: Remediate vulnerabilities in security configuration on your machines to protect them from attacks.
-(Related policy: [Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15))
+(Related policy: [Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15)).
**Severity**: Low
### [Machines should have a vulnerability assessment solution](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ffff0522-1e88-47fc-8382-2a80ba848f5d)

**Description**: Defender for Cloud regularly checks your connected machines to ensure they're running vulnerability assessment tools. Use this recommendation to deploy a vulnerability assessment solution.
-(Related policy: [A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f501541f7-f7e7-4cd6-868c-4190fdad3ac9))
+(Related policy: [A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f501541f7-f7e7-4cd6-868c-4190fdad3ac9)).
**Severity**: Medium

### [Machines should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1195afff-c881-495e-9bc5-1486211ae03f)

**Description**: Resolve the findings from the vulnerability assessment solutions on your virtual machines.
-(Related policy: [A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f501541f7-f7e7-4cd6-868c-4190fdad3ac9))
+(Related policy: [A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f501541f7-f7e7-4cd6-868c-4190fdad3ac9)).
**Severity**: Low

### [Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/805651bc-6ecd-4c73-9b55-97a19d0582d0)

**Description**: Defender for Cloud has identified some overly permissive inbound rules for management ports in your Network Security Group. Enable just-in-time access control to protect your VM from internet-based brute-force attacks. Learn more in [Understanding just-in-time (JIT) VM access](/azure/defender-for-cloud/just-in-time-access-overview).
-(Related policy: [Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fb0f33259-77d7-4c9e-aac6-3aabcfae693c))
+(Related policy: [Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fb0f33259-77d7-4c9e-aac6-3aabcfae693c)).
**Severity**: High
Important: Remediating this recommendation will result in charges for protecting your servers. If you don't have any servers in this subscription, no charges will be incurred. If you create any servers on this subscription in the future, they will automatically be protected and charges will begin at that time. Learn more in [Introduction to Microsoft Defender for servers](/azure/defender-for-cloud/defender-for-servers-introduction).
-(Related policy: [Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicyDefinitions%2f4da35fc9-c9e7-4960-aec9-797fe7d9051d))
+(Related policy: [Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicyDefinitions%2f4da35fc9-c9e7-4960-aec9-797fe7d9051d)).
**Severity**: High
### [Secure Boot should be enabled on supported Windows virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/69ad830b-d98c-b1cf-2158-9d69d38c7093)
-**Description**: Enable Secure Boot on supported Windows virtual machines to mitigate against malicious and unauthorized changes to the boot chain. Once enabled, only trusted bootloaders, kernel and kernel drivers will be allowed to run. This assessment only applies to trusted launch enabled Windows virtual machines.
+**Description**: Enable Secure Boot on supported Windows virtual machines to mitigate against malicious and unauthorized changes to the boot chain. Once enabled, only trusted bootloaders, kernel, and kernel drivers will be allowed to run. This assessment only applies to trusted launch enabled Windows virtual machines.
Important: Trusted launch requires the creation of new virtual machines.
Learn more about [Trusted launch for Azure virtual machines](/azure/virtual-mach
### [Service Fabric clusters should have the ClusterProtectionLevel property set to EncryptAndSign](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/7f04fc0c-4a3d-5c7e-ce19-666cb871b510)
-**Description**: Service Fabric provides three levels of protection (None, Sign and EncryptAndSign) for node-to-node communication using a primary cluster certificate. Set the protection level to ensure that all node-to-node messages are encrypted and digitally signed.
-(Related policy: [Service Fabric clusters should have the ClusterProtectionLevel property set to EncryptAndSign](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f617c02be-7f02-4efd-8836-3180d47b6c68))
+**Description**: Service Fabric provides three levels of protection (None, Sign, and EncryptAndSign) for node-to-node communication using a primary cluster certificate. Set the protection level to ensure that all node-to-node messages are encrypted and digitally signed.
+(Related policy: [Service Fabric clusters should have the ClusterProtectionLevel property set to EncryptAndSign](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f617c02be-7f02-4efd-8836-3180d47b6c68)).
**Severity**: High
### [Service Fabric clusters should only use Azure Active Directory for client authentication](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/03afeb6f-7634-adb3-0a01-803b0b9cb611)
**Description**: Perform client authentication only via Azure Active Directory in Service Fabric.
-(Related policy: [Service Fabric clusters should only use Azure Active Directory for client authentication](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fb54ed75b-3e1a-44ac-a333-05ba39b99ff0))
+(Related policy: [Service Fabric clusters should only use Azure Active Directory for client authentication](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fb54ed75b-3e1a-44ac-a333-05ba39b99ff0)).
**Severity**: High
### [System updates on virtual machine scale sets should be installed](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/bd20bd91-aaf1-7f14-b6e4-866de2f43146)
**Description**: Install missing system security and critical updates to secure your Windows and Linux virtual machine scale sets.
-(Related policy: [System updates on virtual machine scale sets should be installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fc3f317a7-a95c-4547-b7e7-11017ebdf2fe))
+(Related policy: [System updates on virtual machine scale sets should be installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fc3f317a7-a95c-4547-b7e7-11017ebdf2fe)).
**Severity**: High
### [System updates should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/4ab6e3c5-74dd-8b35-9ab9-f61b30875b27)
**Description**: Install missing system security and critical updates to secure your Windows and Linux virtual machines and computers.
-(Related policy: [System updates should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicyDefinitions%2f86b3d65f-7626-441e-b690-81a8b71cff60))
+(Related policy: [System updates should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicyDefinitions%2f86b3d65f-7626-441e-b690-81a8b71cff60)).
**Severity**: High
### [Virtual machine scale sets should be configured securely](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/8941d121-f740-35f6-952c-6561d2b38d36)
**Description**: Remediate vulnerabilities in security configuration on your virtual machine scale sets to protect them from attacks.
-(Related policy: [Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4))
+(Related policy: [Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4)).
**Severity**: High
### [Virtual machines guest attestation status should be healthy](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/b7604066-ed76-45f9-a5c1-c97e4812dc55)
-**Description**: Guest attestation is performed by sending a trusted log (TCGLog) to an attestation server. The server uses these logs to determine whether boot components are trustworthy. This assessment is intended to detect compromises of the boot chain which might be the result of a bootkit or rootkit infection.
+**Description**: Guest attestation is performed by sending a trusted log (TCGLog) to an attestation server. The server uses these logs to determine whether boot components are trustworthy. This assessment is intended to detect compromises of the boot chain, which might be the result of a bootkit or rootkit infection.
This assessment only applies to Trusted Launch enabled virtual machines that have the Guest Attestation extension installed. (No related policy)
### [Virtual machines' Guest Configuration extension should be deployed with system-assigned managed identity](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/69133b6b-695a-43eb-a763-221e19556755)
**Description**: The Guest Configuration extension requires a system-assigned managed identity. Azure virtual machines in the scope of this policy will be non-compliant when they have the Guest Configuration extension installed but do not have a system-assigned managed identity. [Learn more](https://aka.ms/gcpol).
-(Related policy: [Guest Configuration extension should be deployed to Azure virtual machines with system assigned managed identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fd26f7642-7545-4e18-9b75-8c9bbdee3a9a))
+(Related policy: [Guest Configuration extension should be deployed to Azure virtual machines with system assigned managed identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fd26f7642-7545-4e18-9b75-8c9bbdee3a9a)).
**Severity**: Medium
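The compliance rule in that description translates directly into a small check: a machine running the extension without a system-assigned identity is non-compliant. A minimal sketch over a simplified, hypothetical VM model (the field names are assumptions for illustration):

```python
# Sketch of the rule above: a VM with the Guest Configuration extension
# but no system-assigned managed identity is non-compliant. The VM model
# here is a simplified, hypothetical stand-in, not an SDK type.

def guest_config_identity_compliant(vm: dict) -> bool:
    extensions = vm.get("extensions", [])
    has_guest_config = any(
        ext.get("publisher") == "Microsoft.GuestConfiguration"  # assumed publisher name
        for ext in extensions
    )
    identity_type = vm.get("identity", {}).get("type", "None")
    # Only machines that actually run the extension are in scope.
    return "SystemAssigned" in identity_type if has_guest_config else True

compliant_vm = {
    "extensions": [{"publisher": "Microsoft.GuestConfiguration"}],
    "identity": {"type": "SystemAssigned"},
}
assert guest_config_identity_compliant(compliant_vm)
```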
Available resources and information about this tool & migration:
[Overview of Virtual machines (classic) deprecation, step by step process for migration & available Microsoft resources.](/azure/virtual-machines/classic-vm-deprecation?toc=/azure/virtual-machines/windows/toc.json&bc=/azure/virtual-machines/windows/breadcrumb/toc.json) [Details about Migrate to Azure Resource Manager migration tool.](/azure/virtual-machines/migration-classic-resource-manager-deep-dive?toc=/azure/virtual-machines/windows/toc.json&bc=/azure/virtual-machines/windows/breadcrumb/toc.json) [Migrate to Azure Resource Manager migration tool using PowerShell.](/azure/virtual-machines/windows/migration-classic-resource-manager-ps)
-(Related policy: [Virtual machines should be migrated to new Azure Resource Manager resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f1d84d5fb-01f6-4d12-ba4f-4a26081d403d))
+(Related policy: [Virtual machines should be migrated to new Azure Resource Manager resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f1d84d5fb-01f6-4d12-ba4f-4a26081d403d)).
**Severity**: High
### [Vulnerabilities in security configuration on your Linux machines should be remediated (powered by Guest Configuration)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1f655fb7-63ca-4980-91a3-56dbc2b715c6)
**Description**: Remediate vulnerabilities in security configuration on your Linux machines to protect them from attacks.
-(Related policy: [Linux machines should meet requirements for the Azure security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2ffc9b3da7-8347-4380-8e70-0a0361d8dedd))
+(Related policy: [Linux machines should meet requirements for the Azure security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2ffc9b3da7-8347-4380-8e70-0a0361d8dedd)).
**Severity**: Low
### [Windows Defender Exploit Guard should be enabled on machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/22489c48-27d1-4e40-9420-4303ad9cffef)
**Description**: Windows Defender Exploit Guard uses the Azure Policy Guest Configuration agent. Exploit Guard has four components that are designed to lock down devices against a wide variety of attack vectors and block behaviors commonly used in malware attacks while enabling enterprises to balance their security risk and productivity requirements (Windows only).
-(Related policy: [Audit Windows machines on which Windows Defender Exploit Guard is not enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicyDefinitions%2fbed48b13-6647-468e-aa2f-1af1d3f4dd40))
+(Related policy: [Audit Windows machines on which Windows Defender Exploit Guard is not enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicyDefinitions%2fbed48b13-6647-468e-aa2f-1af1d3f4dd40)).
**Severity**: Medium
### [Windows web servers should be configured to use secure communication protocols](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/87448ec1-55f6-3746-3f79-0f35beee76b4)
**Description**: To protect the privacy of information communicated over the Internet, your web servers should use the latest version of the industry-standard cryptographic protocol, Transport Layer Security (TLS). TLS secures communications over a network by using security certificates to encrypt a connection between machines.
-(Related policy: [Audit Windows web servers that are not using secure communication protocols](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f5752e6d6-1206-46d8-8ab1-ecc2f71a8112))
+(Related policy: [Audit Windows web servers that are not using secure communication protocols](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f5752e6d6-1206-46d8-8ab1-ecc2f71a8112)).
**Severity**: High
### [[Preview]: Linux virtual machines should enable Azure Disk Encryption or EncryptionAtHost](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/a40cc620-e72c-fdf4-c554-c6ca2cd705c0)
**Description**: By default, a virtual machine's OS and data disks are encrypted-at-rest using platform-managed keys; temp disks and data caches aren't encrypted, and data isn't encrypted when flowing between compute and storage resources. Use Azure Disk Encryption or EncryptionAtHost to encrypt all this data. Visit [https://aka.ms/diskencryptioncomparison](https://aka.ms/diskencryptioncomparison) to compare encryption offerings. This policy requires two prerequisites to be deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol).
-(Related policy: [[Preview]: Linux virtual machines should enable Azure Disk Encryption or EncryptionAtHost](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicyDefinitions%2fca88aadc-6e2b-416c-9de2-5a0f01d1693f))
+(Related policy: [[Preview]: Linux virtual machines should enable Azure Disk Encryption or EncryptionAtHost](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicyDefinitions%2fca88aadc-6e2b-416c-9de2-5a0f01d1693f)).
**Severity**: High
### [[Preview]: Windows virtual machines should enable Azure Disk Encryption or EncryptionAtHost](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/0cb5f317-a94b-6b80-7212-13a9cc8826af)
**Description**: By default, a virtual machine's OS and data disks are encrypted-at-rest using platform-managed keys; temp disks and data caches aren't encrypted, and data isn't encrypted when flowing between compute and storage resources. Use Azure Disk Encryption or EncryptionAtHost to encrypt all this data. Visit [https://aka.ms/diskencryptioncomparison](https://aka.ms/diskencryptioncomparison) to compare encryption offerings. This policy requires two prerequisites to be deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol).
-(Related policy: [[Preview]: Windows virtual machines should enable Azure Disk Encryption or EncryptionAtHost](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f3dc5edcd-002d-444c-b216-e123bbfa37c0))
+(Related policy: [[Preview]: Windows virtual machines should enable Azure Disk Encryption or EncryptionAtHost](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f3dc5edcd-002d-444c-b216-e123bbfa37c0)).
**Severity**: High
### [Virtual machines and virtual machine scale sets should have encryption at host enabled](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/efbbd784-656d-473a-9863-ea7693bfcd2a)
-**Description**: Use encryption at host to get end-to-end encryption for your virtual machine and virtual machine scale set data. Encryption at host enables encryption at rest for your temporary disk and OS/data disk caches. Temporary and ephemeral OS disks are encrypted with platform-managed keys when encryption at host is enabled. OS/data disk caches are encrypted at rest with either customer-managed or platform-managed key, depending on the encryption type selected on the disk. Learn more at [Use the Azure portal to enable end-to-end encryption using encryption at host](/azure/virtual-machines/disks-enable-host-based-encryption-portal). (Related policy: [Virtual machines and virtual machine scale sets should have encryption at host enabled](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2ffc4d8e41-e223-45ea-9bf5-eada37891d87))
+**Description**: Use encryption at host to get end-to-end encryption for your virtual machine and virtual machine scale set data. Encryption at host enables encryption at rest for your temporary disk and OS/data disk caches. Temporary and ephemeral OS disks are encrypted with platform-managed keys when encryption at host is enabled. OS/data disk caches are encrypted at rest with either customer-managed or platform-managed key, depending on the encryption type selected on the disk. Learn more at [Use the Azure portal to enable end-to-end encryption using encryption at host](/azure/virtual-machines/disks-enable-host-based-encryption-portal). (Related policy: [Virtual machines and virtual machine scale sets should have encryption at host enabled](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2ffc4d8e41-e223-45ea-9bf5-eada37891d87)).
**Severity**: Medium
### [(Preview) Azure Stack HCI servers should meet Secured-core requirements](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f56c47221-b8b7-446e-9ab7-c7c9dc07f0ad)
-**Description**: Ensure that all Azure Stack HCI servers meet the Secured-core requirements. (Related policy: [Guest Configuration extension should be installed on machines - Microsoft Azure](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/6c99f570-2ce7-46bc-8175-cde013df43bc))
+**Description**: Ensure that all Azure Stack HCI servers meet the Secured-core requirements. (Related policy: [Guest Configuration extension should be installed on machines - Microsoft Azure](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/6c99f570-2ce7-46bc-8175-cde013df43bc)).
**Severity**: Low
### [(Preview) Azure Stack HCI servers should have consistently enforced application control policies](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f7384fde3-11b0-4047-acbd-b3cf3cc8ce07)
-**Description**: At a minimum, apply the Microsoft WDAC base policy in enforced mode on all Azure Stack HCI servers. Applied Windows Defender Application Control (WDAC) policies must be consistent across servers in the same cluster. (Related policy: [Guest Configuration extension should be installed on machines - Microsoft Azure](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/6c99f570-2ce7-46bc-8175-cde013df43bc))
+**Description**: At a minimum, apply the Microsoft WDAC base policy in enforced mode on all Azure Stack HCI servers. Applied Windows Defender Application Control (WDAC) policies must be consistent across servers in the same cluster. (Related policy: [Guest Configuration extension should be installed on machines - Microsoft Azure](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/6c99f570-2ce7-46bc-8175-cde013df43bc)).
**Severity**: High
### [(Preview) Azure Stack HCI systems should have encrypted volumes](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fae95f12a-b6fd-42e0-805c-6b94b86c9830)
-**Description**: Use BitLocker to encrypt the OS and data volumes on Azure Stack HCI systems. (Related policy: [Guest Configuration extension should be installed on machines - Microsoft Azure](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/6c99f570-2ce7-46bc-8175-cde013df43bc))
+**Description**: Use BitLocker to encrypt the OS and data volumes on Azure Stack HCI systems. (Related policy: [Guest Configuration extension should be installed on machines - Microsoft Azure](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/6c99f570-2ce7-46bc-8175-cde013df43bc)).
**Severity**: High
### [(Preview) Host and VM networking should be protected on Azure Stack HCI systems](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2faee306e7-80b0-46f3-814c-d3d3083ed034)
-**Description**: Protect data on the Azure Stack HCI host's network and on virtual machine network connections. (Related policy: [Guest Configuration extension should be installed on machines - Microsoft Azure](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/6c99f570-2ce7-46bc-8175-cde013df43bc))
+**Description**: Protect data on the Azure Stack HCI host's network and on virtual machine network connections. (Related policy: [Guest Configuration extension should be installed on machines - Microsoft Azure](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/6c99f570-2ce7-46bc-8175-cde013df43bc)).
**Severity**: Low
**Description**: Recommendations to use customer-managed keys for encryption of data at rest are not assessed by default, but are available to enable for applicable scenarios. Data is encrypted automatically using platform-managed keys, so the use of customer-managed keys should only be applied when obligated by compliance or restrictive policy requirements. To enable this recommendation, navigate to your Security Policy for the applicable scope, and update the *Effect* parameter for the corresponding policy to audit or enforce the use of customer-managed keys. Learn more in [Manage security policies](/azure/defender-for-cloud/tutorial-security-policy). Use customer-managed keys to manage the encryption at rest of the contents of your registries. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys (CMK) are commonly required to meet regulatory compliance standards. CMKs enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more about CMK encryption at <https://aka.ms/acr/CMK>.
-(Related policy: [Container registries should be encrypted with a customer-managed key (CMK)](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f5b9159ae-1701-4a6f-9a7a-aa9c8ddd0580))
+(Related policy: [Container registries should be encrypted with a customer-managed key (CMK)](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f5b9159ae-1701-4a6f-9a7a-aa9c8ddd0580)).
**Severity**: Low
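As a sketch of what this kind of CMK assessment inspects, the check below walks a simplified registry resource and requires customer-managed key encryption to be configured. The property names mirror commonly documented ARM shapes but should be treated as assumptions:

```python
# Sketch of a customer-managed key (CMK) check over a simplified,
# ARM-style container registry resource. Property names are assumptions.

def registry_uses_cmk(registry: dict) -> bool:
    encryption = registry.get("properties", {}).get("encryption", {})
    key_vault = encryption.get("keyVaultProperties", {})
    # Compliant only when encryption is on and backed by a Key Vault key.
    return encryption.get("status") == "enabled" and bool(key_vault.get("keyIdentifier"))

registry = {
    "properties": {
        "encryption": {
            "status": "enabled",
            "keyVaultProperties": {
                # Hypothetical Key Vault key that you create and own.
                "keyIdentifier": "https://myvault.vault.azure.net/keys/acr-cmk",
            },
        }
    }
}
assert registry_uses_cmk(registry)
```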
**Description**: Azure Policy add-on for Kubernetes extends [Gatekeeper](https://github.com/open-policy-agent/gatekeeper) v3, an admission controller webhook for [Open Policy Agent](https://www.openpolicyagent.org/) (OPA), to apply at-scale enforcements and safeguards on your clusters in a centralized, consistent manner. Defender for Cloud requires the Add-on to audit and enforce security capabilities and compliance inside your clusters. [Learn more](/azure/governance/policy/concepts/policy-for-kubernetes). Requires Kubernetes v1.14.0 or later.
-(Related policy: [Azure Policy Add-on for Kubernetes service (AKS) should be installed and enabled on your clusters](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f0a15ec92-a229-4763-bb14-0ea34a568f8d))
+(Related policy: [Azure Policy Add-on for Kubernetes service (AKS) should be installed and enabled on your clusters](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f0a15ec92-a229-4763-bb14-0ea34a568f8d)).
**Severity**: High
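A minimal sketch of the add-on check, assuming an AKS-style resource model in which the add-on shows up under `addonProfiles` (an assumption made for illustration):

```python
# Sketch: the Azure Policy add-on check over a simplified AKS cluster
# model. The 'azurepolicy' profile key is an assumption for illustration.

def azure_policy_addon_enabled(cluster: dict) -> bool:
    profiles = cluster.get("properties", {}).get("addonProfiles", {})
    return bool(profiles.get("azurepolicy", {}).get("enabled"))

cluster = {"properties": {"addonProfiles": {"azurepolicy": {"enabled": True}}}}
assert azure_policy_addon_enabled(cluster)
```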
### [Container registries should not allow unrestricted network access](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/9b828565-a0ed-61c2-6bf3-1afc99a9b2ca)
**Description**: Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific public IP addresses or address ranges. If your registry doesn't have an IP/firewall rule or a configured virtual network, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: <https://aka.ms/acr/portal/public-network> and here <https://aka.ms/acr/vnet>.
-(Related policy: [Container registries should not allow unrestricted network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fd0793b48-0edc-4296-a390-4c75d1bdfd71))
+(Related policy: [Container registries should not allow unrestricted network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fd0793b48-0edc-4296-a390-4c75d1bdfd71)).
**Severity**: Medium
### [Container registries should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/13e7d036-6903-821c-6018-962938929bf0)
**Description**: Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your container registries instead of the entire service, you'll also be protected against data leakage risks. Learn more at: <https://aka.ms/acr/private-link>.
-(Related policy: [Container registries should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fe8eef0a8-67cf-4eb4-9386-14b0e78733d4))
+(Related policy: [Container registries should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fe8eef0a8-67cf-4eb4-9386-14b0e78733d4)).
**Severity**: Medium
### [Kubernetes API server should be configured with restricted access](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1a2b5b4c-f80d-46e7-ac81-b51a9fb363de)
**Description**: To ensure that only applications from allowed networks, machines, or subnets can access your cluster, restrict access to your Kubernetes API server. You can restrict access by defining authorized IP ranges, or by setting up your API servers as private clusters as explained in [Create a private Azure Kubernetes Service cluster](/azure/aks/private-clusters).
-(Related policy: [Authorized IP ranges should be defined on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f0e246bcf-5f6f-4f87-bc6f-775d4712c7ea))
+(Related policy: [Authorized IP ranges should be defined on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f0e246bcf-5f6f-4f87-bc6f-775d4712c7ea)).
**Severity**: High
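The two compliant shapes the description names (authorized IP ranges, or a private cluster) can be folded into one check. The `apiServerAccessProfile` field names below are assumptions modeled on the AKS resource shape:

```python
# Sketch: a cluster passes if it is private or restricts the API server
# to explicit IP ranges. Field names are assumptions for illustration.

def api_server_access_restricted(cluster: dict) -> bool:
    profile = cluster.get("properties", {}).get("apiServerAccessProfile", {})
    return bool(profile.get("enablePrivateCluster")) or bool(profile.get("authorizedIPRanges"))

restricted_cluster = {
    "properties": {
        "apiServerAccessProfile": {"authorizedIPRanges": ["203.0.113.0/24"]}
    }
}
assert api_server_access_restricted(restricted_cluster)
```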
### [Role-Based Access Control should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/b0fdc63a-38e7-4bab-a7c4-2c2665abbaa9)
**Description**: To provide granular filtering on the actions that users can perform, use [Role-Based Access Control (RBAC)](/azure/aks/concepts-identity#role-based-access-controls-rbac) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies.
-(Related policy: [Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fac4a19c2-fa67-49b4-8ae5-0b2e78c49457))
+(Related policy: [Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fac4a19c2-fa67-49b4-8ae5-0b2e78c49457)).
**Severity**: High
We recommend setting limits for containers to ensure the runtime prevents the container from using more than the configured resource limit.
-(Related policy: [Ensure container CPU and memory resource limits do not exceed the specified limits in Kubernetes cluster](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fe345eecc-fa47-480f-9e88-67dcc122b164))
+(Related policy: [Ensure container CPU and memory resource limits do not exceed the specified limits in Kubernetes cluster](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fe345eecc-fa47-480f-9e88-67dcc122b164)).
**Severity**: Medium
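To make the limit-setting advice concrete, here is an illustrative container spec (expressed as a Python dict) with explicit CPU and memory limits, followed by the check such a policy effectively performs. The image name and limit values are assumptions:

```python
# Illustrative container spec with explicit CPU and memory limits, plus
# the check a limits policy effectively performs. Values are examples.

container = {
    "name": "app",
    "image": "myregistry.azurecr.io/app:1.0",  # hypothetical image
    "resources": {
        "requests": {"cpu": "250m", "memory": "128Mi"},
        "limits": {"cpu": "500m", "memory": "256Mi"},  # runtime enforces these
    },
}

def has_resource_limits(container: dict) -> bool:
    limits = container.get("resources", {}).get("limits", {})
    return "cpu" in limits and "memory" in limits

assert has_resource_limits(container)
```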
### [Container images should be deployed from trusted registries only](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/8d244d29-fa00-4332-b935-c3a51d525417)
**Description**:
-Images running on your Kubernetes cluster should come from known and monitored container image registries. Trusted registries reduce your cluster's exposure risk by limiting the potential for the introduction of unknown vulnerabilities, security issues and malicious images.
+Images running on your Kubernetes cluster should come from known and monitored container image registries. Trusted registries reduce your cluster's exposure risk by limiting the potential for the introduction of unknown vulnerabilities, security issues, and malicious images.
-(Related policy: [Ensure only allowed container images in Kubernetes cluster](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2ffebd0533-8e55-448f-b837-bd0e06f16469))
+(Related policy: [Ensure only allowed container images in Kubernetes cluster](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2ffebd0533-8e55-448f-b837-bd0e06f16469)).
**Severity**: High
**Description**: Containers shouldn't run with privilege escalation to root in your Kubernetes cluster. The AllowPrivilegeEscalation attribute controls whether a process can gain more privileges than its parent process.
-(Related policy: [Kubernetes clusters should not allow container privilege escalation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f1c6e92c9-99f0-4e55-9cf2-0c234dc48f99))
+(Related policy: [Kubernetes clusters should not allow container privilege escalation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f1c6e92c9-99f0-4e55-9cf2-0c234dc48f99)).
**Severity**: Medium
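For reference, the attribute in question sits in the container's security context. A minimal sketch with an illustrative container spec:

```python
# Sketch: disable privilege escalation explicitly in the security context.

container = {
    "name": "app",
    "image": "myregistry.azurecr.io/app:1.0",  # hypothetical image
    "securityContext": {
        # The process cannot gain more privileges than its parent.
        "allowPrivilegeEscalation": False,
    },
}

def privilege_escalation_blocked(container: dict) -> bool:
    return container.get("securityContext", {}).get("allowPrivilegeEscalation") is False

assert privilege_escalation_blocked(container)
```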
### [Containers sharing sensitive host namespaces should be avoided](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/802c0637-5a8c-4c98-abd7-7c96d89d6010)
**Description**: To protect against privilege escalation outside the container, avoid pod access to sensitive host namespaces (host process ID and host IPC) in a Kubernetes cluster.
-(Related policy: [Kubernetes cluster containers should not share host process ID or host IPC namespace](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f47a1ee2f-2a2a-4576-bf2a-e0e36709c2b8))
+(Related policy: [Kubernetes cluster containers should not share host process ID or host IPC namespace](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f47a1ee2f-2a2a-4576-bf2a-e0e36709c2b8)).
**Severity**: Medium
**Description**: Containers running on Kubernetes clusters should be limited to allowed AppArmor profiles only. AppArmor (Application Armor) is a Linux security module that protects an operating system and its applications from security threats. To use it, a system administrator associates an AppArmor security profile with each program.
-(Related policy: [Kubernetes cluster containers should only use allowed AppArmor profiles](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f511f5417-5d12-434d-ab2e-816901e72a5e))
+(Related policy: [Kubernetes cluster containers should only use allowed AppArmor profiles](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f511f5417-5d12-434d-ab2e-816901e72a5e)).
**Severity**: High
### [Immutable (read-only) root filesystem should be enforced for containers](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/27d6f0e9-b4d5-468b-ae7e-03d5473fd864)
**Description**: Containers should run with a read-only root file system in your Kubernetes cluster. An immutable file system protects containers from run-time changes such as malicious binaries being added to PATH.
-(Related policy: [Kubernetes cluster containers should run with a read only root file system](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fdf49d893-a74c-421d-bc95-c663042e5b80))
+(Related policy: [Kubernetes cluster containers should run with a read only root file system](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fdf49d893-a74c-421d-bc95-c663042e5b80)).
**Severity**: Medium
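A minimal sketch of a pod spec that satisfies this recommendation: the root filesystem is read-only, and any scratch space the workload needs comes from an explicit, writable emptyDir mount. Names and paths are illustrative:

```python
# Sketch: immutable root filesystem with an explicit writable /tmp volume.

pod_spec = {
    "containers": [
        {
            "name": "app",
            "image": "myregistry.azurecr.io/app:1.0",  # hypothetical image
            "securityContext": {"readOnlyRootFilesystem": True},
            "volumeMounts": [{"name": "tmp", "mountPath": "/tmp"}],
        }
    ],
    "volumes": [{"name": "tmp", "emptyDir": {}}],
}
```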
### [Kubernetes clusters should be accessible only over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c6d87087-9ebe-b31f-b452-0bf3bbbaccd2)
**Description**: Use of HTTPS ensures authentication and protects data in transit from network layer eavesdropping attacks. This capability is currently generally available for Kubernetes Service (AKS), and in preview for AKS Engine and Azure Arc-enabled Kubernetes. For more info, visit <https://aka.ms/kubepolicydoc>.
-(Related policy: [Enforce HTTPS ingress in Kubernetes cluster](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f1a5b4dca-0b6f-4cf5-907c-56316bc1bf3d))
+(Related policy: [Enforce HTTPS ingress in Kubernetes cluster](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f1a5b4dca-0b6f-4cf5-907c-56316bc1bf3d)).
**Severity**: High
### [Kubernetes clusters should disable automounting API credentials](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/32060ac3-f17f-4848-db8e-e7cf2c9a53eb)
**Description**: Disable automounting API credentials to prevent a potentially compromised Pod resource from running API commands against Kubernetes clusters. For more information, see <https://aka.ms/kubepolicydoc>.
-(Related policy: [Kubernetes clusters should disable automounting API credentials](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f423dd1ba-798e-40e4-9c4d-b6902674b423))
+(Related policy: [Kubernetes clusters should disable automounting API credentials](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f423dd1ba-798e-40e4-9c4d-b6902674b423)).
**Severity**: High
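A minimal sketch of the opt-out, at the pod level; workloads that genuinely need the Kubernetes API can re-enable the mount individually. The container details are illustrative:

```python
# Sketch: stop mounting service-account API credentials into the pod.

pod_spec = {
    "automountServiceAccountToken": False,  # no API token on disk by default
    "containers": [
        {"name": "app", "image": "myregistry.azurecr.io/app:1.0"}  # hypothetical image
    ],
}
```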
### [Kubernetes clusters should not use the default namespace](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ff87e0b4-17df-d338-5b19-80e71e0dcc9d)
**Description**: Prevent usage of the default namespace in Kubernetes clusters to protect against unauthorized access for ConfigMap, Pod, Secret, Service, and ServiceAccount resource types. For more information, see <https://aka.ms/kubepolicydoc>.
-(Related policy: [Kubernetes clusters should not use the default namespace](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f9f061a12-e40d-4183-a00e-171812443373))
+(Related policy: [Kubernetes clusters should not use the default namespace](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f9f061a12-e40d-4183-a00e-171812443373)).
**Severity**: Low
### [Least privileged Linux capabilities should be enforced for containers](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/11c95609-3553-430d-b788-fd41cde8b2db)
**Description**: To reduce the attack surface of your container, restrict Linux capabilities and grant specific privileges to containers without granting all the privileges of the root user. We recommend dropping all capabilities, then adding those that are required.
-(Related policy: [Kubernetes cluster containers should only use allowed capabilities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fc26596ff-4d70-4e6a-9a30-c2506bd2f80c))
+(Related policy: [Kubernetes cluster containers should only use allowed capabilities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fc26596ff-4d70-4e6a-9a30-c2506bd2f80c)).
**Severity**: Medium
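The drop-all-then-add-back pattern the description recommends looks like this in a container's security context; the added capability is an illustrative example, not a requirement:

```python
# Sketch: start from zero Linux capabilities, then add back only what
# the workload demonstrably needs.

container = {
    "name": "app",
    "image": "myregistry.azurecr.io/app:1.0",  # hypothetical image
    "securityContext": {
        "capabilities": {
            "drop": ["ALL"],
            "add": ["NET_BIND_SERVICE"],  # example: bind to ports below 1024
        }
    },
}
```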
**Description**: To prevent unrestricted host access, avoid privileged containers whenever possible.
-Privileged containers have all of the root capabilities of a host machine. They can be used as entry points for attacks and to spread malicious code or malware to compromised applications, hosts and networks.
-(Related policy: [Do not allow privileged containers in Kubernetes cluster](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f95edb821-ddaf-4404-9732-666045e056b4))
+Privileged containers have all of the root capabilities of a host machine. They can be used as entry points for attacks and to spread malicious code or malware to compromised applications, hosts, and networks.
+(Related policy: [Do not allow privileged containers in Kubernetes cluster](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f95edb821-ddaf-4404-9732-666045e056b4)).
**Severity**: Medium
### [Running containers as root user should be avoided](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/9b795646-9130-41a4-90b7-df9eae2437c8)
**Description**: Containers shouldn't run as root users in your Kubernetes cluster. Running a process as the root user inside a container runs it as root on the host. If there's a compromise, an attacker has root in the container, and any misconfigurations become easier to exploit.
-(Related policy: [Kubernetes cluster pods and containers should only run with approved user and group IDs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2ff06ddb64-5fa3-4b77-b166-acb36f7f6042))
+(Related policy: [Kubernetes cluster pods and containers should only run with approved user and group IDs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2ff06ddb64-5fa3-4b77-b166-acb36f7f6042)).
**Severity**: High
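A minimal sketch of a container that satisfies this recommendation: it declares a fixed non-root user, and `runAsNonRoot` makes the guarantee enforceable at start-up. The UID and GID values are illustrative:

```python
# Sketch: run the workload as a fixed non-root user and enforce it.

container = {
    "name": "app",
    "image": "myregistry.azurecr.io/app:1.0",  # hypothetical image
    "securityContext": {
        "runAsNonRoot": True,  # kubelet refuses to start a root process
        "runAsUser": 1000,     # illustrative non-root UID
        "runAsGroup": 3000,    # illustrative GID
    },
}
```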
### [Services should listen on allowed ports only](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/add45209-73f6-4fa5-a5a5-74a451b07fbe)
**Description**: To reduce the attack surface of your Kubernetes cluster, restrict access to the cluster by limiting services access to the configured ports.
-(Related policy: [Ensure services listen only on allowed ports in Kubernetes cluster](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f233a2a17-77ca-4fb1-9b6b-69223d272a44))
+(Related policy: [Ensure services listen only on allowed ports in Kubernetes cluster](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f233a2a17-77ca-4fb1-9b6b-69223d272a44)).
**Severity**: Medium
### [Usage of host networking and ports should be restricted](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ebc68898-5c0f-4353-a426-4a5f1e737b12)
**Description**: Restrict pod access to the host network and the allowable host port range in a Kubernetes cluster. Pods created with the hostNetwork attribute enabled will share the node's network space. To avoid a compromised container sniffing network traffic, we recommend not putting your pods on the host network. If you need to expose a container port on the node's network, and using a Kubernetes Service node port does not meet your needs, another possibility is to specify a hostPort for the container in the pod spec.
-(Related policy: [Kubernetes cluster pods should only use approved host network and port range](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f82985f06-dc18-4a48-bc1c-b9f4f0098cfe))
+(Related policy: [Kubernetes cluster pods should only use approved host network and port range](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f82985f06-dc18-4a48-bc1c-b9f4f0098cfe)).
**Severity**: Medium
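The restriction described above can be sketched as a single check: no host networking, and any requested hostPort must fall inside an allowed range. The range below is an illustrative assumption:

```python
# Sketch: reject pods on the host network or with hostPorts outside an
# allowed range. The range here is illustrative only.

ALLOWED_HOST_PORTS = range(30000, 32768)

def host_networking_restricted(pod_spec: dict) -> bool:
    if pod_spec.get("hostNetwork"):
        return False
    for container in pod_spec.get("containers", []):
        for port in container.get("ports", []):
            host_port = port.get("hostPort")
            if host_port is not None and host_port not in ALLOWED_HOST_PORTS:
                return False
    return True

pod = {"containers": [{"name": "app", "ports": [{"containerPort": 8080}]}]}
assert host_networking_restricted(pod)
```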
### [Usage of pod HostPath volume mounts should be restricted to a known list to restrict node access from compromised containers](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/f0debc84-981c-4a0d-924d-aa4bd7d55fef)
**Description**: We recommend limiting pod HostPath volume mounts in your Kubernetes cluster to the configured allowed host paths. If there's a compromise, node access from the containers should be restricted.
-(Related policy: [Kubernetes cluster pod hostPath volumes should only use allowed host paths](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f098fc59e-46c7-4d99-9b16-64990e543d75))
+(Related policy: [Kubernetes cluster pod hostPath volumes should only use allowed host paths](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f098fc59e-46c7-4d99-9b16-64990e543d75)).
**Severity**: Medium
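A small sketch of the allow-list idea: every hostPath volume in the pod must start with one of the configured paths. The allow-list itself is an illustrative assumption:

```python
# Sketch: limit hostPath volume mounts to a configured allow-list.

ALLOWED_HOST_PATHS = ("/var/log",)  # illustrative allow-list

def host_paths_allowed(pod_spec: dict) -> bool:
    for volume in pod_spec.get("volumes", []):
        path = volume.get("hostPath", {}).get("path")
        if path is not None and not path.startswith(ALLOWED_HOST_PATHS):
            return False
    return True

pod = {"volumes": [{"name": "logs", "hostPath": {"path": "/var/log"}}]}
assert host_paths_allowed(pod)
```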
### [Azure registry container images should have vulnerabilities resolved (powered by Qualys)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dbd0cb49-b563-45e7-9724-889e799fa648)
**Description**: Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks.
-(Related policy: [Vulnerabilities in Azure Container Registry images should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f5f0f936f-2f01-4bf5-b6be-d423792fa562))
+(Related policy: [Vulnerabilities in Azure Container Registry images should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f5f0f936f-2f01-4bf5-b6be-d423792fa562)).
**Severity**: High
### [Azure registry container images should have vulnerabilities resolved (powered by Microsoft Defender Vulnerability Management)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c0b7cfc6-3172-465a-b378-53c7ff2cc0d5)
**Description**: Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. Resolving vulnerabilities can greatly improve your security posture, ensuring images are safe to use prior to deployment.
-(Related policy: [Vulnerabilities in Azure Container Registry images should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f5f0f936f-2f01-4bf5-b6be-d423792fa562))
+(Related policy: [Vulnerabilities in Azure Container Registry images should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f5f0f936f-2f01-4bf5-b6be-d423792fa562)).
**Severity**: High
**Description**: Recommendations to use customer-managed keys for encryption of data at rest are not assessed by default, but are available to enable for applicable scenarios. Data is encrypted automatically using platform-managed keys, so the use of customer-managed keys should only be applied when obligated by compliance or restrictive policy requirements. To enable this recommendation, navigate to your Security Policy for the applicable scope, and update the *Effect* parameter for the corresponding policy to audit or enforce the use of customer-managed keys. Learn more in [Manage security policies](/azure/defender-for-cloud/tutorial-security-policy). Use customer-managed keys to manage the encryption at rest of your Azure Cosmos DB. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys (CMK) are commonly required to meet regulatory compliance standards. CMKs enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more about CMK encryption at <https://aka.ms/cosmosdb-cmk>.
-(Related policy: [Azure Cosmos DB accounts should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f1f905d99-2ab7-462c-a6b0-f709acca6c8f))
+(Related policy: [Azure Cosmos DB accounts should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f1f905d99-2ab7-462c-a6b0-f709acca6c8f)).
**Severity**: Low
**Description**: Recommendations to use customer-managed keys for encryption of data at rest are not assessed by default, but are available to enable for applicable scenarios. Data is encrypted automatically using platform-managed keys, so the use of customer-managed keys should only be applied when obligated by compliance or restrictive policy requirements. To enable this recommendation, navigate to your Security Policy for the applicable scope, and update the *Effect* parameter for the corresponding policy to audit or enforce the use of customer-managed keys. Learn more in [Manage security policies](/azure/defender-for-cloud/tutorial-security-policy). Manage encryption at rest of your Azure Machine Learning workspace data with customer-managed keys (CMK). By default, customer data is encrypted with service-managed keys, but CMKs are commonly required to meet regulatory compliance standards. CMKs enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more about CMK encryption at <https://aka.ms/azureml-workspaces-cmk>.
-(Related policy: [Azure Machine Learning workspaces should be encrypted with a customer-managed key (CMK)](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fba769a63-b8cc-4b2d-abf6-ac33c7204be8))
+(Related policy: [Azure Machine Learning workspaces should be encrypted with a customer-managed key (CMK)](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fba769a63-b8cc-4b2d-abf6-ac33c7204be8)).
**Severity**: Low
**Description**: Recommendations to use customer-managed keys for encryption of data at rest are not assessed by default, but are available to enable for applicable scenarios. Data is encrypted automatically using platform-managed keys, so the use of customer-managed keys should only be applied when obligated by compliance or restrictive policy requirements. To enable this recommendation, navigate to your Security Policy for the applicable scope, and update the *Effect* parameter for the corresponding policy to audit or enforce the use of customer-managed keys. Learn more in [Manage security policies](/azure/defender-for-cloud/tutorial-security-policy). Use customer-managed keys to manage the encryption at rest of your MySQL servers. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys (CMK) are commonly required to meet regulatory compliance standards. CMKs enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management.
-(Related policy: [Bring your own key data protection should be enabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f83cef61d-dbd1-4b20-a4fc-5fbc7da10833))
+(Related policy: [Bring your own key data protection should be enabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f83cef61d-dbd1-4b20-a4fc-5fbc7da10833)).
**Severity**: Low
**Description**: Recommendations to use customer-managed keys for encryption of data at rest are not assessed by default, but are available to enable for applicable scenarios. Data is encrypted automatically using platform-managed keys, so the use of customer-managed keys should only be applied when obligated by compliance or restrictive policy requirements. To enable this recommendation, navigate to your Security Policy for the applicable scope, and update the *Effect* parameter for the corresponding policy to audit or enforce the use of customer-managed keys. Learn more in [Manage security policies](/azure/defender-for-cloud/tutorial-security-policy). Use customer-managed keys to manage the encryption at rest of your PostgreSQL servers. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys (CMK) are commonly required to meet regulatory compliance standards. CMKs enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management.
-(Related policy: [Bring your own key data protection should be enabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f18adea5e-f416-4d0f-8aa8-d24321e3e274))
+(Related policy: [Bring your own key data protection should be enabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f18adea5e-f416-4d0f-8aa8-d24321e3e274)).
**Severity**: Low
**Description**: Recommendations to use customer-managed keys for encryption of data at rest are not assessed by default, but are available to enable for applicable scenarios. Data is encrypted automatically using platform-managed keys, so the use of customer-managed keys should only be applied when obligated by compliance or restrictive policy requirements. To enable this recommendation, navigate to your Security Policy for the applicable scope, and update the *Effect* parameter for the corresponding policy to audit or enforce the use of customer-managed keys. Learn more in [Manage security policies](/azure/defender-for-cloud/tutorial-security-policy). Implementing Transparent Data Encryption (TDE) with your own key provides you with increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. This recommendation applies to organizations with a related compliance requirement.
-(Related policy: [SQL managed instances should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f048248b0-55cd-46da-b1ff-39efd52db260))
+(Related policy: [SQL managed instances should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f048248b0-55cd-46da-b1ff-39efd52db260)).
**Severity**: Low
**Description**: Recommendations to use customer-managed keys for encryption of data at rest are not assessed by default, but are available to enable for applicable scenarios. Data is encrypted automatically using platform-managed keys, so the use of customer-managed keys should only be applied when obligated by compliance or restrictive policy requirements. To enable this recommendation, navigate to your Security Policy for the applicable scope, and update the *Effect* parameter for the corresponding policy to audit or enforce the use of customer-managed keys. Learn more in [Manage security policies](/azure/defender-for-cloud/tutorial-security-policy). Implementing Transparent Data Encryption (TDE) with your own key provides increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. This recommendation applies to organizations with a related compliance requirement.
-(Related policy: [SQL servers should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f0d134df8-db83-46fb-ad72-fe0c9428c8dd))
+(Related policy: [SQL servers should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f0d134df8-db83-46fb-ad72-fe0c9428c8dd)).
**Severity**: Low
**Description**: Recommendations to use customer-managed keys for encryption of data at rest are not assessed by default, but are available to enable for applicable scenarios. Data is encrypted automatically using platform-managed keys, so the use of customer-managed keys should only be applied when obligated by compliance or restrictive policy requirements. To enable this recommendation, navigate to your Security Policy for the applicable scope, and update the *Effect* parameter for the corresponding policy to audit or enforce the use of customer-managed keys. Learn more in [Manage security policies](/azure/defender-for-cloud/tutorial-security-policy). Secure your storage account with greater flexibility using customer-managed keys (CMKs). When you specify a CMK, that key is used to protect and control access to the key that encrypts your data. Using CMKs provides additional capabilities to control rotation of the key encryption key or cryptographically erase data.
-(Related policy: [Storage accounts should use customer-managed key (CMK) for encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f6fac406b-40ca-413b-bf8e-0bf964659c25))
+(Related policy: [Storage accounts should use customer-managed key (CMK) for encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f6fac406b-40ca-413b-bf8e-0bf964659c25)).
**Severity**: Low
### [API Management services should use a virtual network](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/74e7dcff-317f-9635-41d2-ead5019acc99)
-**Description**: Azure Virtual Network deployment provides enhanced security, isolation and allows you to place your API Management service in a non-internet routable network that you control access to. These networks can then be connected to your on-premises networks using various VPN technologies, which enables access to your backend services within the network and/or on-premises. The developer portal and API gateway, can be configured to be accessible either from the Internet or only within the virtual network.
-(Related policy: [API Management services should use a virtual network](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fef619a2c-cc4d-4d03-b2ba-8c94a834d85b))
+**Description**: Azure Virtual Network deployment provides enhanced security, isolation, and allows you to place your API Management service in a non-internet routable network that you control access to. These networks can then be connected to your on-premises networks using various VPN technologies, which enables access to your backend services within the network and/or on-premises. The developer portal and API gateway can be configured to be accessible either from the Internet or only within the virtual network.
+(Related policy: [API Management services should use a virtual network](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fef619a2c-cc4d-4d03-b2ba-8c94a834d85b)).
**Severity**: Medium
### [App Configuration should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/8318c3a1-fcac-2e1d-9582-50912e5578e5)
**Description**: Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your app configuration instances instead of the entire service, you'll also be protected against data leakage risks. Learn more at: <https://aka.ms/appconfig/private-endpoint>.
-(Related policy: [App Configuration should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fca610c1d-041c-4332-9d88-7ed3094967c7))
+(Related policy: [App Configuration should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fca610c1d-041c-4332-9d88-7ed3094967c7)).
**Severity**: Medium
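To make the private-endpoint mapping concrete, here is a minimal sketch of creating one for an App Configuration store through the Resource Manager REST API. The resource IDs, region, names, and API version are placeholders; `configurationStores` is the Private Link group ID App Configuration uses.

```python
# Sketch: map a private endpoint to an App Configuration store so traffic
# flows over the Azure backbone instead of the public internet.
import requests

TOKEN = "<azure-ad-bearer-token>"                      # placeholder
ENDPOINT_ID = ("/subscriptions/<sub>/resourceGroups/<rg>/providers/"
               "Microsoft.Network/privateEndpoints/appconfig-pe")
STORE_ID = ("/subscriptions/<sub>/resourceGroups/<rg>/providers/"
            "Microsoft.AppConfiguration/configurationStores/<store>")
SUBNET_ID = "<subnet-resource-id>"                     # placeholder

url = f"https://management.azure.com{ENDPOINT_ID}?api-version=2023-04-01"
body = {
    "location": "eastus",                              # placeholder region
    "properties": {
        "subnet": {"id": SUBNET_ID},
        "privateLinkServiceConnections": [{
            "name": "appconfig-connection",
            "properties": {
                "privateLinkServiceId": STORE_ID,
                "groupIds": ["configurationStores"],   # App Configuration sub-resource
            },
        }],
    },
}
resp = requests.put(url, json=body,
                    headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()
```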
### [Auditing on SQL server should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/94208a8b-16e8-4e5b-abbd-4e81c9d02bee)
**Description**: Enable auditing on your SQL Server to track database activities across all databases on the server and save them in an audit log.
-(Related policy: [Auditing on SQL server should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fa6fb4358-5bf4-4ad7-ba82-2cd2f41ce5e9))
+(Related policy: [Auditing on SQL server should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fa6fb4358-5bf4-4ad7-ba82-2cd2f41ce5e9)).
**Severity**: Low
### [Auto provisioning of the Log Analytics agent should be enabled on subscriptions](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/af849052-4299-0692-acc0-bffcbe9e440c)
**Description**: To monitor for security vulnerabilities and threats, Microsoft Defender for Cloud collects data from your Azure virtual machines. Data is collected by the Log Analytics agent, formerly known as the Microsoft Monitoring Agent (MMA), which reads various security-related configurations and event logs from the machine and copies the data to your Log Analytics workspace for analysis. We recommend enabling auto provisioning to automatically deploy the agent to all supported Azure VMs and any new ones that are created.
-(Related policy: [Auto provisioning of the Log Analytics agent should be enabled on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f475aae12-b88a-4572-8b36-9b712b2b3a17))
+(Related policy: [Auto provisioning of the Log Analytics agent should be enabled on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f475aae12-b88a-4572-8b36-9b712b2b3a17)).
**Severity**: Low
### [Azure Cache for Redis should reside within a virtual network](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/be264018-593c-1162-bd5e-b74a39396652)
**Description**: Azure Virtual Network (VNet) deployment provides enhanced security and isolation for your Azure Cache for Redis, as well as subnets, access control policies, and other features to further restrict access. When an Azure Cache for Redis instance is configured with a VNet, it is not publicly addressable and can only be accessed from virtual machines and applications within the VNet.
-(Related policy: [Azure Cache for Redis should reside within a virtual network](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f7d092e0a-7acd-40d2-a975-dca21cae48c4))
+(Related policy: [Azure Cache for Redis should reside within a virtual network](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f7d092e0a-7acd-40d2-a975-dca21cae48c4)).
**Severity**: Medium
### [Azure Database for MySQL should have an Azure Active Directory administrator provisioned](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/8af8a87b-7aa6-4c83-b22b-36801896177b/)
**Description**: Provision an Azure AD administrator for your Azure Database for MySQL to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services.
-(Related policy: [An Azure Active Directory administrator should be provisioned for MySQL servers](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f146412e9-005c-472b-9e48-c87b72ac229e))
+(Related policy: [An Azure Active Directory administrator should be provisioned for MySQL servers](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f146412e9-005c-472b-9e48-c87b72ac229e)).
**Severity**: Medium
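As an illustration of the remediation, here is a minimal sketch that provisions an Azure AD administrator on an Azure Database for MySQL single server through the REST API. The login, object ID, tenant ID, and API version are placeholders or assumptions; the PostgreSQL and SQL server recommendations below take the same shape under their respective resource providers.

```python
# Sketch: provision an Azure AD administrator on an Azure Database for MySQL
# (single server) instance, enabling Azure AD authentication.
import requests

TOKEN = "<azure-ad-bearer-token>"   # placeholder
SERVER_ID = ("/subscriptions/<sub>/resourceGroups/<rg>/providers/"
             "Microsoft.DBforMySQL/servers/<server>")

url = (f"https://management.azure.com{SERVER_ID}"
       "/administrators/activeDirectory?api-version=2017-12-01")
body = {
    "properties": {
        "administratorType": "ActiveDirectory",
        "login": "dba-group@contoso.com",   # placeholder UPN or group name
        "sid": "<aad-object-id>",           # placeholder Azure AD object ID
        "tenantId": "<tenant-id>",          # placeholder
    }
}
resp = requests.put(url, json=body,
                    headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()
```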
-### [Azure Database for PostgreSQL should have an Azure Active Directory administrator provisioned](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/b20d1b00-11a8-4ce7-b477-4ea6e147c345/subscriptionIds~/%5B%220cd6095b-b140-41ec-ad1d-32f2f7493386%22%2C%220ee78edb-a0ad-456c-a0a2-901bf542c102%22%2C%2284ca48fe-c942-42e5-b492-d56681d058fa%22%2C%22b2a328a7-ffff-4c09-b643-a4758cf170bc%22%2C%22eef8b6d5-94da-4b36-9327-a662f2674efb%22%2C%228d5565a3-dec1-4ee2-86d6-8aabb315eec4%22%2C%22e0fd569c-e34a-4249-8c24-e8d723c7f054%22%2C%22dad45786-32e5-4ef3-b90e-8e0838fbadb6%22%2C%22a5f9f0d3-a937-4de5-8cf3-387fce51e80c%22%2C%220368444d-756e-4ca6-9ecd-e964248c227a%22%2C%22e686ef8c-d35d-4e9b-92f8-caaaa7948c0a%22%2C%222145a411-d149-4010-84d4-40fe8a55db44%22%2C%22212f9889-769e-45ae-ab43-6da33674bd26%22%2C%22615f5f56-4ba9-45cf-b644-0c09d7d325c8%22%2C%22487bb485-b5b0-471e-9c0d-10717612f869%22%2C%22cb9eb375-570a-4e75-b83a-77dd942bee9f%22%2C%224bbecc02-f2c3-402a-8e01-1dfb1ffef499%22%2C%22432a7068-99ae-4975-ad38-d96b71172cdf%22%2C%22c0620f27-ac38-468c-a26b-264009fe7c41%22%2C%22a1920ebd-59b7-4f19-af9f-5e80599e88e4%22%2C%22b43a6159-1bea-4fa2-9407-e875fdc0ff55%22%2C%22d07c0080-170c-4c24-861d-9c817742986a%22%2C%22ae71ef11-a03f-4b4f-a0e6-ef144727c711%22%2C%2255a24be0-d9c3-4ecd-86b6-566c7aac2512%22%2C%227afc2d66-d5b4-4e84-970b-a782e3e4cc46%22%2C%2252a442a2-31e9-42f9-8e3e-4b27dbf82673%22%2C%228c4b5b03-3b24-4ed0-91f5-a703cd91b412%22%2C%22e01de573-132a-42ac-9ee2-f9dea9dd2717%22%2C%22b5c0b80f-5932-4d47-ae25-cd617dac90ce%22%2C%22e4e06275-58d1-4081-8f1b-be12462eb701%22%2C%229b4236fe-df75-4289-bf00-40628ed41fd9%22%2C%2221d8f407-c4c4-452e-87a4-e609bfb86248%22%2C%227d411d23-59e5-4e2e-8566-4f59de4544f2%22%2C%22b74d5345-100f-408a-a7ca-47abb52ba60d%22%2C%22f30787b9-82a8-4e74-bb0f-f12d64ecc496%22%2C%22482e1993-01d4-4b16-bff4-1866929176a1%22%2C%2226596251-f2f3-4e31-8a1b-f0754e32ad73%22%2C%224628298e-882d-4f12-abf4-a9f9654960bb%22%2C%224115b323-4aac-47f4-bb13-22af265ed58b%22%2C%22911e3904-5112-4232-a7ee-0d1811363c28%22%2C%22cd0fa82d-b6b6-4361-b002-050c32f71353%22%2C%22dd4c2dac-db51-4cd0-b734-684c6cc360c1%22%2C%22d2c9544f-4329-4642-b73d-020e7fef844f%22%2C%22bac420ed-c6fc-4a05-8ac1-8c0c52da1d6e%22%2C%2250ff7bc0-cd15-49d5-abb2-e975184c2f65%22%2C%223cd95ff9-ac62-4b5c-8240-0cd046687ea0%22%2C%2213723929-6644-4060-a50a-cc38ebc5e8b1%22%2C%2209fa8e83-d677-474f-8f73-2a954a0b0ea4%22%2C%22ca38bc19-cf50-48e2-bbe6-8c35b40212d8%22%2C%22bf163a87-8506-4eb3-8d14-c2dc95908830%22%2C%221278a874-89fc-418c-b6b9-ac763b000415%22%2C%223b2fda06-3ef6-454a-9dd5-994a548243e9%22%2C%226560575d-fa06-4e7d-95fb-f962e74efd7a%22%2C%22c3547baf-332f-4d8f-96bd-0659b39c7a59%22%2C%222f96ae42-240b-4228-bafa-26d8b7b03bf3%22%2C%2229de2cfc-f00a-43bb-bdc8-3108795bd282%22%2C%22a1ffc958-d2c7-493e-9f1e-125a0477f536%22%2C%2254b875cc-a81a-4914-8bfd-1a36bc7ddf4d%22%2C%22407ff5d7-0113-4c5c-8534-f5cfb09298f5%22%2C%22365a62ee-6166-4d37-a936-03585106dd50%22%2C%226d17b59e-06c4-4203-89d2-de793ebf5452%22%2C%229372b318-ed3a-4504-95a6-941201300f78%22%2C%223c1bb38c-82e3-4f8d-a115-a7110ba70d05%22%2C%22c6dcd830-359f-44d0-b4d4-c1ba95e86f48%22%2C%2209e8ad18-7bdb-43b8-80c4-43ee53460e0b%22%2C%22dcbdac96-1896-478d-89fc-c95ed43f4596%22%2C%22d23422cf-c0f2-4edc-a306-6e32b181a341%22%2C%228c2c7b23-848d-40fe-b817-690d79ad9dfd%22%2C%221163fbbe-27e7-4b0f-8466-195fe5417043%22%2C%223905431d-c062-4c17-8fd9-c51f89f334c4%22%2C%227ea26ded-0260-4e78-9336-285d4d9e33d2%22%2C%225ccdbd03-f1b1-4b59-a609-300685e17ce3%22%2C%22bcdc6eb0-74cd-40b6-b3a9-584b33cea7b6%22%2C%22d557e825-27b1-4819-8af5-dc2429af91c9%22%2C%222bb50811-92b6-43a1-9d80-745962d9c759%22%2C%22409111bf-3097-421c-ad68-a44e716edf58%22%2C%2249e3f635-484a-43d1-b953-b29e1871ba88%22%2C%22b77ec8a9-04ed-48d2-a87a-e5887b978ba6%22%2C%22075423e9-7d33-4166-8bdf-3920b04e3735%22%2C%22ef143bbb-6a7e-4a3f-b64f-2f23330e0116%22%2C%2224afc59a-f969-4f83-95c9-3b70f52d833d%22%2C%22a8783cc5-1171-4c34-924f-6f71a20b21ec%22%2C%220079a9bb-e218-496a-9880-d27ad6192f52%22%2C%226f53185c-ea09-4fc3-9075-318dec805303%22%2C%22588845a8-a4a7-4ab1-83a1-1388452e8c0c%22%2C%22b68b2f37-1d37-4c2f-80f6-c23de402792e%22%2C%22eec2de82-6ab2-4a84-ae5f-57e9a10bf661%22%2C%22227531a4-d775-435b-a878-963ed8d0d18f%22%2C%228cff5d56-95fb-4a74-ab9d-079edb45313e%22%2C%22e72e5254-f265-4e95-9bd2-9ee8e7329051%22%2C%228ae1955e-f748-4273-a507-10159ba940f9%22%2C%22f6869ac6-2a40-404f-acd3-d07461be771a%22%2C%2285b3dbca-5974-4067-9669-67a141095a76%22%2C%228168a4f2-74d6-4663-9951-8e3a454937b7%22%2C%229ec1d932-0f3f-486c-acc6-e7d78b358f9b%22%2C%2279f57c16-00fe-48da-87d4-5192e86cd047%22%2C%22bac044cf-49e1-4843-8dda-1ce9662606c8%22%2C%22009d0e9f-a42a-470e-b315-82496a88cf0f%22%2C%2268f3658f-0090-4277-a500-f02227aaee97%22%5D/showSecurityCenterCommandBar~/false/assessmentOwners~/null)
+### [Azure Database for PostgreSQL should have an Azure Active Directory administrator provisioned](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/b20d1b00-11a8-4ce7-b477-4ea6e147c345)
**Description**: Provision an Azure AD administrator for your Azure Database for PostgreSQL to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services.
-(Related policy: [An Azure Active Directory administrator should be provisioned for PostgreSQL servers](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fb4dec045-250a-48c2-b5cc-e0c4eec8b5b4))
+(Related policy: [An Azure Active Directory administrator should be provisioned for PostgreSQL servers](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fb4dec045-250a-48c2-b5cc-e0c4eec8b5b4)).
**Severity**: Medium
### [Azure Cosmos DB accounts should have firewall rules](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/276b1952-c364-852b-11e5-657f0fa34dc6)
**Description**: Firewall rules should be defined on your Azure Cosmos DB accounts to prevent traffic from unauthorized sources. Accounts that have at least one IP rule defined with the virtual network filter enabled are deemed compliant. Accounts disabling public access are also deemed compliant.
-(Related policy: [Azure Cosmos DB accounts should have firewall rules](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f862e97cf-49fc-4a5c-9de4-40d4e2e7c8eb))
+(Related policy: [Azure Cosmos DB accounts should have firewall rules](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f862e97cf-49fc-4a5c-9de4-40d4e2e7c8eb)).
**Severity**: Medium
### [Azure Event Grid domains should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/bef092f5-bea7-3df3-1ee8-4376dd9c111e)
**Description**: Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Event Grid domains instead of the entire service, you'll also be protected against data leakage risks. Learn more at: <https://aka.ms/privateendpoints>.
-(Related policy: [Azure Event Grid domains should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f9830b652-8523-49cc-b1b3-e17dce1127ca))
+(Related policy: [Azure Event Grid domains should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f9830b652-8523-49cc-b1b3-e17dce1127ca)).
**Severity**: Medium
### [Azure Event Grid topics should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/bdac9c7b-b9b8-f572-0450-f161c430861c)
**Description**: Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your topics instead of the entire service, you'll also be protected against data leakage risks. Learn more at: <https://aka.ms/privateendpoints>.
-(Related policy: [Azure Event Grid topics should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f4b90e17e-8448-49db-875e-bd83fb6f804f))
+(Related policy: [Azure Event Grid topics should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f4b90e17e-8448-49db-875e-bd83fb6f804f)).
**Severity**: Medium
### [Azure Machine Learning workspaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/692343df-7e70-b082-7b0e-67f97146cea3)
**Description**: Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure Machine Learning workspaces instead of the entire service, you'll also be protected against data leakage risks. Learn more at: <https://aka.ms/azureml-workspaces-privatelink>.
-(Related policy: [Azure Machine Learning workspaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f40cec1dd-a100-4920-b15b-3024fe8901ab))
+(Related policy: [Azure Machine Learning workspaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f40cec1dd-a100-4920-b15b-3024fe8901ab)).
**Severity**: Medium
### [Azure SignalR Service should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/b6f84d18-0137-3176-6aa1-f4d9ac95155c)
**Description**: Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your SignalR resources instead of the entire service, you'll also be protected against data leakage risks. Learn more at: <https://aka.ms/asrs/privatelink>.
-(Related policy: [Azure SignalR Service should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f53503636-bcc9-4748-9663-5348217f160f))
+(Related policy: [Azure SignalR Service should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f53503636-bcc9-4748-9663-5348217f160f)).
**Severity**: Medium
### [Azure Spring Cloud should use network injection](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/4c768356-5ad2-e3cc-c799-252b27d3865a)
**Description**: Azure Spring Cloud instances should use virtual network injection for the following purposes: 1. Isolate Azure Spring Cloud from the internet. 2. Enable Azure Spring Cloud to interact with systems in either on-premises data centers or Azure services in other virtual networks. 3. Empower customers to control inbound and outbound network communications for Azure Spring Cloud.
-(Related policy: [Azure Spring Cloud should use network injection](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2faf35e2a4-ef96-44e7-a9ae-853dd97032c4))
+(Related policy: [Azure Spring Cloud should use network injection](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2faf35e2a4-ef96-44e7-a9ae-853dd97032c4)).
**Severity**: Medium
-### [Azure SQL Managed Instance authentication mode should be Azure Active Directory Only](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/e2750e59-9a37-4ad5-b584-013932d9682d/subscriptionIds~/%5B%220cd6095b-b140-41ec-ad1d-32f2f7493386%22%2C%220ee78edb-a0ad-456c-a0a2-901bf542c102%22%2C%2284ca48fe-c942-42e5-b492-d56681d058fa%22%2C%22b2a328a7-ffff-4c09-b643-a4758cf170bc%22%2C%22eef8b6d5-94da-4b36-9327-a662f2674efb%22%2C%228d5565a3-dec1-4ee2-86d6-8aabb315eec4%22%2C%22e0fd569c-e34a-4249-8c24-e8d723c7f054%22%2C%22dad45786-32e5-4ef3-b90e-8e0838fbadb6%22%2C%22a5f9f0d3-a937-4de5-8cf3-387fce51e80c%22%2C%220368444d-756e-4ca6-9ecd-e964248c227a%22%2C%22e686ef8c-d35d-4e9b-92f8-caaaa7948c0a%22%2C%222145a411-d149-4010-84d4-40fe8a55db44%22%2C%22212f9889-769e-45ae-ab43-6da33674bd26%22%2C%22615f5f56-4ba9-45cf-b644-0c09d7d325c8%22%2C%22487bb485-b5b0-471e-9c0d-10717612f869%22%2C%22cb9eb375-570a-4e75-b83a-77dd942bee9f%22%2C%224bbecc02-f2c3-402a-8e01-1dfb1ffef499%22%2C%22432a7068-99ae-4975-ad38-d96b71172cdf%22%2C%22c0620f27-ac38-468c-a26b-264009fe7c41%22%2C%22a1920ebd-59b7-4f19-af9f-5e80599e88e4%22%2C%22b43a6159-1bea-4fa2-9407-e875fdc0ff55%22%2C%22d07c0080-170c-4c24-861d-9c817742986a%22%2C%22ae71ef11-a03f-4b4f-a0e6-ef144727c711%22%2C%2255a24be0-d9c3-4ecd-86b6-566c7aac2512%22%2C%227afc2d66-d5b4-4e84-970b-a782e3e4cc46%22%2C%2252a442a2-31e9-42f9-8e3e-4b27dbf82673%22%2C%228c4b5b03-3b24-4ed0-91f5-a703cd91b412%22%2C%22e01de573-132a-42ac-9ee2-f9dea9dd2717%22%2C%22b5c0b80f-5932-4d47-ae25-cd617dac90ce%22%2C%22e4e06275-58d1-4081-8f1b-be12462eb701%22%2C%229b4236fe-df75-4289-bf00-40628ed41fd9%22%2C%2221d8f407-c4c4-452e-87a4-e609bfb86248%22%2C%227d411d23-59e5-4e2e-8566-4f59de4544f2%22%2C%22b74d5345-100f-408a-a7ca-47abb52ba60d%22%2C%22f30787b9-82a8-4e74-bb0f-f12d64ecc496%22%2C%22482e1993-01d4-4b16-bff4-1866929176a1%22%2C%2226596251-f2f3-4e31-8a1b-f0754e32ad73%22%2C%224628298e-882d-4f12-abf4-a9f9654960bb%22%2C%224115b323-4aac-47f4-bb13-22af265ed58b%22%2C%22911e3904-5112-4232-a7ee-0d1811363c28%22%2C%22cd0fa82d-b6b6-4361-b002-050c32f71353%22%2C%22dd4c2dac-db51-4cd0-b734-684c6cc360c1%22%2C%22d2c9544f-4329-4642-b73d-020e7fef844f%22%2C%22bac420ed-c6fc-4a05-8ac1-8c0c52da1d6e%22%2C%2250ff7bc0-cd15-49d5-abb2-e975184c2f65%22%2C%223cd95ff9-ac62-4b5c-8240-0cd046687ea0%22%2C%2213723929-6644-4060-a50a-cc38ebc5e8b1%22%2C%2209fa8e83-d677-474f-8f73-2a954a0b0ea4%22%2C%22ca38bc19-cf50-48e2-bbe6-8c35b40212d8%22%2C%22bf163a87-8506-4eb3-8d14-c2dc95908830%22%2C%221278a874-89fc-418c-b6b9-ac763b000415%22%2C%223b2fda06-3ef6-454a-9dd5-994a548243e9%22%2C%226560575d-fa06-4e7d-95fb-f962e74efd7a%22%2C%22c3547baf-332f-4d8f-96bd-0659b39c7a59%22%2C%222f96ae42-240b-4228-bafa-26d8b7b03bf3%22%2C%2229de2cfc-f00a-43bb-bdc8-3108795bd282%22%2C%22a1ffc958-d2c7-493e-9f1e-125a0477f536%22%2C%2254b875cc-a81a-4914-8bfd-1a36bc7ddf4d%22%2C%22407ff5d7-0113-4c5c-8534-f5cfb09298f5%22%2C%22365a62ee-6166-4d37-a936-03585106dd50%22%2C%226d17b59e-06c4-4203-89d2-de793ebf5452%22%2C%229372b318-ed3a-4504-95a6-941201300f78%22%2C%223c1bb38c-82e3-4f8d-a115-a7110ba70d05%22%2C%22c6dcd830-359f-44d0-b4d4-c1ba95e86f48%22%2C%2209e8ad18-7bdb-43b8-80c4-43ee53460e0b%22%2C%22dcbdac96-1896-478d-89fc-c95ed43f4596%22%2C%22d23422cf-c0f2-4edc-a306-6e32b181a341%22%2C%228c2c7b23-848d-40fe-b817-690d79ad9dfd%22%2C%221163fbbe-27e7-4b0f-8466-195fe5417043%22%2C%223905431d-c062-4c17-8fd9-c51f89f334c4%22%2C%227ea26ded-0260-4e78-9336-285d4d9e33d2%22%2C%225ccdbd03-f1b1-4b59-a609-300685e17ce3%22%2C%22bcdc6eb0-74cd-40b6-b3a9-584b33cea7b6%22%2C%22d557e825-27b1-4819-8af5-dc2429af91c9%22%2C%222bb50811-92b6-43a1-9d80-745962d9c759%22%2C%22409111bf-3097-421c-ad68-a44e716edf58%22%2C%2249e3f635-484a-43d1-b953-b29e1871ba88%22%2C%22b77ec8a9-04ed-48d2-a87a-e5887b978ba6%22%2C%22075423e9-7d33-4166-8bdf-3920b04e3735%22%2C%22ef143bbb-6a7e-4a3f-b64f-2f23330e0116%22%2C%2224afc59a-f969-4f83-95c9-3b70f52d833d%22%2C%22a8783cc5-1171-4c34-924f-6f71a20b21ec%22%2C%220079a9bb-e218-496a-9880-d27ad6192f52%22%2C%226f53185c-ea09-4fc3-9075-318dec805303%22%2C%22588845a8-a4a7-4ab1-83a1-1388452e8c0c%22%2C%22b68b2f37-1d37-4c2f-80f6-c23de402792e%22%2C%22eec2de82-6ab2-4a84-ae5f-57e9a10bf661%22%2C%22227531a4-d775-435b-a878-963ed8d0d18f%22%2C%228cff5d56-95fb-4a74-ab9d-079edb45313e%22%2C%22e72e5254-f265-4e95-9bd2-9ee8e7329051%22%2C%228ae1955e-f748-4273-a507-10159ba940f9%22%2C%22f6869ac6-2a40-404f-acd3-d07461be771a%22%2C%2285b3dbca-5974-4067-9669-67a141095a76%22%2C%228168a4f2-74d6-4663-9951-8e3a454937b7%22%2C%229ec1d932-0f3f-486c-acc6-e7d78b358f9b%22%2C%2279f57c16-00fe-48da-87d4-5192e86cd047%22%2C%22bac044cf-49e1-4843-8dda-1ce9662606c8%22%2C%22009d0e9f-a42a-470e-b315-82496a88cf0f%22%2C%2268f3658f-0090-4277-a500-f02227aaee97%22%5D/showSecurityCenterCommandBar~/false/assessmentOwners~/null)
+### [SQL servers should have an Azure Active Directory administrator provisioned](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/f0553104-cfdb-65e6-759c-002812e38500)
-**Description**: Disabling local authentication methods and allowing only Azure Active Directory Authentication improves security by ensuring that Azure SQL Managed Instances can exclusively be accessed by Azure Active Directory identities.
-(Related policy: [Azure SQL Managed Instance should have Azure Active Directory Only Authentication enabled](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f78215662-041e-49ed-a9dd-5385911b3a1f))
+**Description**: Provision an Azure AD administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services.
+(Related policy: [An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f1f314764-cb73-4fc9-b863-8eca98ac36e9)).
-**Severity**: Medium
+**Severity**: High
-### [Azure Synapse Workspace authentication mode should be Azure Active Directory Only](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/3320d1ac-0ebe-41ab-b96c-96fb91214c5c/subscriptionIds~/%5B%220cd6095b-b140-41ec-ad1d-32f2f7493386%22%2C%220ee78edb-a0ad-456c-a0a2-901bf542c102%22%2C%2284ca48fe-c942-42e5-b492-d56681d058fa%22%2C%22b2a328a7-ffff-4c09-b643-a4758cf170bc%22%2C%22eef8b6d5-94da-4b36-9327-a662f2674efb%22%2C%228d5565a3-dec1-4ee2-86d6-8aabb315eec4%22%2C%22e0fd569c-e34a-4249-8c24-e8d723c7f054%22%2C%22dad45786-32e5-4ef3-b90e-8e0838fbadb6%22%2C%22a5f9f0d3-a937-4de5-8cf3-387fce51e80c%22%2C%220368444d-756e-4ca6-9ecd-e964248c227a%22%2C%22e686ef8c-d35d-4e9b-92f8-caaaa7948c0a%22%2C%222145a411-d149-4010-84d4-40fe8a55db44%22%2C%22212f9889-769e-45ae-ab43-6da33674bd26%22%2C%22615f5f56-4ba9-45cf-b644-0c09d7d325c8%22%2C%22487bb485-b5b0-471e-9c0d-10717612f869%22%2C%22cb9eb375-570a-4e75-b83a-77dd942bee9f%22%2C%224bbecc02-f2c3-402a-8e01-1dfb1ffef499%22%2C%22432a7068-99ae-4975-ad38-d96b71172cdf%22%2C%22c0620f27-ac38-468c-a26b-264009fe7c41%22%2C%22a1920ebd-59b7-4f19-af9f-5e80599e88e4%22%2C%22b43a6159-1bea-4fa2-9407-e875fdc0ff55%22%2C%22d07c0080-170c-4c24-861d-9c817742986a%22%2C%22ae71ef11-a03f-4b4f-a0e6-ef144727c711%22%2C%2255a24be0-d9c3-4ecd-86b6-566c7aac2512%22%2C%227afc2d66-d5b4-4e84-970b-a782e3e4cc46%22%2C%2252a442a2-31e9-42f9-8e3e-4b27dbf82673%22%2C%228c4b5b03-3b24-4ed0-91f5-a703cd91b412%22%2C%22e01de573-132a-42ac-9ee2-f9dea9dd2717%22%2C%22b5c0b80f-5932-4d47-ae25-cd617dac90ce%22%2C%22e4e06275-58d1-4081-8f1b-be12462eb701%22%2C%229b4236fe-df75-4289-bf00-40628ed41fd9%22%2C%2221d8f407-c4c4-452e-87a4-e609bfb86248%22%2C%227d411d23-59e5-4e2e-8566-4f59de4544f2%22%2C%22b74d5345-100f-408a-a7ca-47abb52ba60d%22%2C%22f30787b9-82a8-4e74-bb0f-f12d64ecc496%22%2C%22482e1993-01d4-4b16-bff4-1866929176a1%22%2C%2226596251-f2f3-4e31-8a1b-f0754e32ad73%22%2C%224628298e-882d-4f12-abf4-a9f9654960bb%22%2C%224115b323-4aac-47f4-bb13-22af265ed58b%22%2C%22911e3904-5112-4232-a7ee-0d1811363c28%22%2C%22cd0fa82d-b6b6-4361-b002-050c32f71353%22%2C%22dd4c2dac-db51-4cd0-b734-684c6cc360c1%22%2C%22d2c9544f-4329-4642-b73d-020e7fef844f%22%2C%22bac420ed-c6fc-4a05-8ac1-8c0c52da1d6e%22%2C%2250ff7bc0-cd15-49d5-abb2-e975184c2f65%22%2C%223cd95ff9-ac62-4b5c-8240-0cd046687ea0%22%2C%2213723929-6644-4060-a50a-cc38ebc5e8b1%22%2C%2209fa8e83-d677-474f-8f73-2a954a0b0ea4%22%2C%22ca38bc19-cf50-48e2-bbe6-8c35b40212d8%22%2C%22bf163a87-8506-4eb3-8d14-c2dc95908830%22%2C%221278a874-89fc-418c-b6b9-ac763b000415%22%2C%223b2fda06-3ef6-454a-9dd5-994a548243e9%22%2C%226560575d-fa06-4e7d-95fb-f962e74efd7a%22%2C%22c3547baf-332f-4d8f-96bd-0659b39c7a59%22%2C%222f96ae42-240b-4228-bafa-26d8b7b03bf3%22%2C%2229de2cfc-f00a-43bb-bdc8-3108795bd282%22%2C%22a1ffc958-d2c7-493e-9f1e-125a0477f536%22%2C%2254b875cc-a81a-4914-8bfd-1a36bc7ddf4d%22%2C%22407ff5d7-0113-4c5c-8534-f5cfb09298f5%22%2C%22365a62ee-6166-4d37-a936-03585106dd50%22%2C%226d17b59e-06c4-4203-89d2-de793ebf5452%22%2C%229372b318-ed3a-4504-95a6-941201300f78%22%2C%223c1bb38c-82e3-4f8d-a115-a7110ba70d05%22%2C%22c6dcd830-359f-44d0-b4d4-c1ba95e86f48%22%2C%2209e8ad18-7bdb-43b8-80c4-43ee53460e0b%22%2C%22dcbdac96-1896-478d-89fc-c95ed43f4596%22%2C%22d23422cf-c0f2-4edc-a306-6e32b181a341%22%2C%228c2c7b23-848d-40fe-b817-690d79ad9dfd%22%2C%221163fbbe-27e7-4b0f-8466-195fe5417043%22%2C%223905431d-c062-4c17-8fd9-c51f89f334c4%22%2C%227ea26ded-0260-4e78-9336-285d4d9e33d2%22%2C%225ccdbd03-f1b1-4b59-a609-300685e17ce3%22%2C%22bcdc6eb0-74cd-40b6-b3a9-584b33cea7b6%22%2C%22d557e825-27b1-4819-8af5-dc2429af91c9%22%2C%222bb50811-92b6-43a1-9d80-745962d9c759%22%2C%22409111bf-3097-421c-ad68-a44e716edf58%22%2C%2249e3f635-484a-43d1-b953-b29e1871ba88%22%2C%22b77ec8a9-04ed-48d2-a87a-e5887b978ba6%22%2C%22075423e9-7d33-4166-8bdf-3920b04e3735%22%2C%22ef143bbb-6a7e-4a3f-b64f-2f23330e0116%22%2C%2224afc59a-f969-4f83-95c9-3b70f52d833d%22%2C%22a8783cc5-1171-4c34-924f-6f71a20b21ec%22%2C%220079a9bb-e218-496a-9880-d27ad6192f52%22%2C%226f53185c-ea09-4fc3-9075-318dec805303%22%2C%22588845a8-a4a7-4ab1-83a1-1388452e8c0c%22%2C%22b68b2f37-1d37-4c2f-80f6-c23de402792e%22%2C%22eec2de82-6ab2-4a84-ae5f-57e9a10bf661%22%2C%22227531a4-d775-435b-a878-963ed8d0d18f%22%2C%228cff5d56-95fb-4a74-ab9d-079edb45313e%22%2C%22e72e5254-f265-4e95-9bd2-9ee8e7329051%22%2C%228ae1955e-f748-4273-a507-10159ba940f9%22%2C%22f6869ac6-2a40-404f-acd3-d07461be771a%22%2C%2285b3dbca-5974-4067-9669-67a141095a76%22%2C%228168a4f2-74d6-4663-9951-8e3a454937b7%22%2C%229ec1d932-0f3f-486c-acc6-e7d78b358f9b%22%2C%2279f57c16-00fe-48da-87d4-5192e86cd047%22%2C%22bac044cf-49e1-4843-8dda-1ce9662606c8%22%2C%22009d0e9f-a42a-470e-b315-82496a88cf0f%22%2C%2268f3658f-0090-4277-a500-f02227aaee97%22%5D/showSecurityCenterCommandBar~/false/assessmentOwners~/null)
+### [Azure Synapse Workspace authentication mode should be Azure Active Directory Only](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/3320d1ac-0ebe-41ab-b96c-96fb91214c5c)
**Description**: Azure Active Directory-only authentication improves security by ensuring that Synapse Workspaces exclusively require Azure AD identities for authentication. [Learn more](https://aka.ms/Synapse).
-(Related policy: [Synapse Workspaces should use only Azure Active Directory identities for authentication](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f2158ddbe-fefa-408e-b43f-d4faef8ff3b8))
+(Related policy: [Synapse Workspaces should use only Azure Active Directory identities for authentication](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f2158ddbe-fefa-408e-b43f-d4faef8ff3b8)).
**Severity**: Medium
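As a sketch of the remediation, the Azure AD-only setting on a Synapse workspace can be flipped through the `azureADOnlyAuthentications` child resource of the workspace. The workspace ID and API version below are placeholders; verify both against the Synapse REST reference before relying on them.

```python
# Sketch: turn on Azure AD-only authentication for a Synapse workspace,
# disabling local (SQL) authentication paths.
import requests

TOKEN = "<azure-ad-bearer-token>"   # placeholder
WORKSPACE_ID = ("/subscriptions/<sub>/resourceGroups/<rg>/providers/"
                "Microsoft.Synapse/workspaces/<workspace>")

url = (f"https://management.azure.com{WORKSPACE_ID}"
       "/azureADOnlyAuthentications/default?api-version=2021-06-01")
body = {"properties": {"azureADOnlyAuthentication": True}}
resp = requests.put(url, json=body,
                    headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()
```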
### [Code repositories should have secret scanning findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/4e07c7d0-e06c-47d7-a4a9-8c7b748d1b27)
-**Description**: Defender for DevOps has found a secret in code repositories. This should be remediated immediately to prevent a security breach. Secrets found in repositories can be leaked or discovered by adversaries, leading to compromise of an application or service. For Azure DevOps, the Microsoft Security DevOps CredScan tool only scans builds on which it has been configured to run. Therefore, results may not reflect the complete status of secrets in your repositories.
+**Description**: Defender for DevOps has found a secret in code repositories. This should be remediated immediately to prevent a security breach. Secrets found in repositories can be leaked or discovered by adversaries, leading to compromise of an application or service. For Azure DevOps, the Microsoft Security DevOps CredScan tool only scans builds on which it has been configured to run. Therefore, results might not reflect the complete status of secrets in your repositories.
(No related policy)
**Severity**: High
### [Cognitive Services accounts should enable data encryption](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/cdcf4f71-60d3-540b-91e3-aa19792da364)
**Description**: This policy audits any Cognitive Services account not using data encryption. Each Cognitive Services account with storage should enable data encryption with either a customer-managed or a Microsoft-managed key.
-(Related policy: [Cognitive Services accounts should enable data encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f2bdd0062-9d75-436e-89df-487dd8e4b3c7))
+(Related policy: [Cognitive Services accounts should enable data encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f2bdd0062-9d75-436e-89df-487dd8e4b3c7)).
**Severity**: Low
### [Cognitive Services accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/f738efb8-005f-680d-3d43-b3db762d6243)
**Description**: Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges.
-(Related policy: [Cognitive Services accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f037eea7a-bd0a-46c5-9a66-03aea78705d3))
+(Related policy: [Cognitive Services accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f037eea7a-bd0a-46c5-9a66-03aea78705d3)).
**Severity**: Medium
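The network rules described above live under the account's `networkAcls` property. Here is a minimal sketch that denies traffic by default and allow-lists a single IP range; the account ID, range, and API version are placeholders, and the rules shown are illustrative rather than a recommended baseline.

```python
# Sketch: restrict a Cognitive Services account to an explicit allow-list,
# denying all other network traffic by default.
import requests

TOKEN = "<azure-ad-bearer-token>"   # placeholder
ACCOUNT_ID = ("/subscriptions/<sub>/resourceGroups/<rg>/providers/"
              "Microsoft.CognitiveServices/accounts/<account>")

url = f"https://management.azure.com{ACCOUNT_ID}?api-version=2023-05-01"
body = {
    "properties": {
        "networkAcls": {
            "defaultAction": "Deny",                   # block unless allow-listed
            "ipRules": [{"value": "203.0.113.0/24"}],  # placeholder range
            "virtualNetworkRules": [],                 # add subnet IDs as needed
        }
    }
}
resp = requests.patch(url, json=body,
                      headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()
```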
### [Diagnostic logs in Azure Data Lake Store should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ad5bbaeb-7632-5edf-f1c2-752075831ce8)
**Description**: Enable logs and retain them for up to a year. This enables you to recreate activity trails for investigation purposes when a security incident occurs or your network is compromised.
-(Related policy: [Diagnostic logs in Azure Data Lake Store should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f057ef27e-665e-4328-8ea3-04b3122bd9fb))
+(Related policy: [Diagnostic logs in Azure Data Lake Store should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f057ef27e-665e-4328-8ea3-04b3122bd9fb)).
**Severity**: Low
### [Diagnostic logs in Data Lake Analytics should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c6dad669-efd7-cd72-61c5-289935607791)
**Description**: Enable logs and retain them for up to a year. This enables you to recreate activity trails for investigation purposes when a security incident occurs or your network is compromised.
-(Related policy: [Diagnostic logs in Data Lake Analytics should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fc95c74d9-38fe-4f0d-af86-0c7d626a315c))
+(Related policy: [Diagnostic logs in Data Lake Analytics should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fc95c74d9-38fe-4f0d-af86-0c7d626a315c)).
**Severity**: Low
### [Email notification for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/3869fbd7-5d90-84e4-37bd-d9a7f4ce9a24)
**Description**: To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, enable email notifications for high severity alerts in Defender for Cloud.
-(Related policy: [Email notification for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f6e2593d9-add6-4083-9c9b-4b7d2188c899))
+(Related policy: [Email notification for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f6e2593d9-add6-4083-9c9b-4b7d2188c899)).
**Severity**: Low
### [Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/9f97e78d-88ee-a48d-abe2-5ef12954e7ea)
**Description**: To ensure your subscription owners are notified when there is a potential security breach in their subscription, set email notifications to subscription owners for high severity alerts in Defender for Cloud.
-(Related policy: [Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f0b15565f-aa9e-48ba-8619-45960f2c314d))
+(Related policy: [Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f0b15565f-aa9e-48ba-8619-45960f2c314d)).
**Severity**: Medium
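Both email-notification recommendations above map to the subscription's Defender for Cloud security contact. Below is a minimal sketch that sets a notification mailbox, mails subscription owners, and limits notifications to high-severity alerts; the address and API version are placeholders to verify against the Microsoft.Security REST reference.

```python
# Sketch: configure Defender for Cloud email notifications so that owners and
# a security mailbox receive high-severity alert mail.
import requests

TOKEN = "<azure-ad-bearer-token>"   # placeholder
SUBSCRIPTION = "<subscription-id>"  # placeholder

url = (f"https://management.azure.com/subscriptions/{SUBSCRIPTION}"
       "/providers/Microsoft.Security/securityContacts/default"
       "?api-version=2020-01-01-preview")
body = {
    "properties": {
        "emails": "secops@contoso.com",   # placeholder mailbox
        "alertNotifications": {"state": "On", "minimalSeverity": "High"},
        "notificationsByRole": {"state": "On", "roles": ["Owner"]},
    }
}
resp = requests.put(url, json=body,
                    headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()
```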
**Description**: Azure Database for MySQL supports connecting your Azure Database for MySQL server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and your application. This configuration enforces that SSL is always enabled for accessing your database server.
-(Related policy: [Enforce SSL connection should be enabled for MySQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fe802a67a-daf5-4436-9ea6-f6d821dd0c5d))
+(Related policy: [Enforce SSL connection should be enabled for MySQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fe802a67a-daf5-4436-9ea6-f6d821dd0c5d)).
**Severity**: Medium
**Description**: Azure Database for PostgreSQL supports connecting your Azure Database for PostgreSQL server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and your application. This configuration enforces that SSL is always enabled for accessing your database server.
-(Related policy: [Enforce SSL connection should be enabled for PostgreSQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fd158790f-bfb0-486c-8631-2dc6b4e8e6af))
+(Related policy: [Enforce SSL connection should be enabled for PostgreSQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fd158790f-bfb0-486c-8631-2dc6b4e8e6af)).
**Severity**: Medium
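Enforcement happens on the server side, but clients must then open TLS connections. Below is a minimal client-side sketch for Azure Database for PostgreSQL using `psycopg2`; the server name and credentials are placeholders, and MySQL connectors take analogous SSL options.

```python
# Sketch: connect to Azure Database for PostgreSQL with TLS required, which
# is what the "enforce SSL connection" setting expects of clients.
import psycopg2

conn = psycopg2.connect(
    host="<server>.postgres.database.azure.com",  # placeholder server name
    dbname="postgres",
    user="<user>",                                # placeholder credentials
    password="<password>",
    sslmode="require",  # "verify-full" additionally validates the server cert
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone())
```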
**Description**: Azure Database for MariaDB allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery options in case of a region failure. Configuring geo-redundant storage for backup is only allowed when creating a server.
-(Related policy: [Geo-redundant backup should be enabled for Azure Database for MariaDB](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f0ec47710-77ff-4a3d-9181-6aa50af424d0))
+(Related policy: [Geo-redundant backup should be enabled for Azure Database for MariaDB](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f0ec47710-77ff-4a3d-9181-6aa50af424d0)).
**Severity**: Low
**Description**: Azure Database for MySQL allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery options in case of a region failure. Configuring geo-redundant storage for backup is only allowed when creating a server.
-(Related policy: [Geo-redundant backup should be enabled for Azure Database for MySQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f82339799-d096-41ae-8538-b108becf0970))
+(Related policy: [Geo-redundant backup should be enabled for Azure Database for MySQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f82339799-d096-41ae-8538-b108becf0970)).
**Severity**: Low
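Because geo-redundant backup is a create-time-only choice, the setting rides along in the server create request. The sketch below targets MySQL Flexible Server purely as an illustration; every name, SKU, and the API version are placeholders, and the MariaDB and PostgreSQL recommendations work the same way under their own resource providers.

```python
# Sketch: create an Azure Database for MySQL flexible server with
# geo-redundant backup enabled (cannot be changed after creation).
import requests

TOKEN = "<azure-ad-bearer-token>"   # placeholder
SERVER_ID = ("/subscriptions/<sub>/resourceGroups/<rg>/providers/"
             "Microsoft.DBforMySQL/flexibleServers/<server>")

url = f"https://management.azure.com{SERVER_ID}?api-version=2021-05-01"
body = {
    "location": "eastus",                             # placeholder region
    "sku": {"name": "Standard_B1ms", "tier": "Burstable"},
    "properties": {
        "administratorLogin": "<admin>",              # placeholder
        "administratorLoginPassword": "<password>",   # placeholder
        "version": "8.0.21",
        "storage": {"storageSizeGB": 32},
        "backup": {
            "backupRetentionDays": 7,
            "geoRedundantBackup": "Enabled",  # the setting this recommendation checks
        },
    },
}
resp = requests.put(url, json=body,
                    headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()
```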
**Description**: Azure Database for PostgreSQL allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery options in case of a region failure. Configuring geo-redundant storage for backup is only allowed when creating a server.
-(Related policy: [Geo-redundant backup should be enabled for Azure Database for PostgreSQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f48af4db5-9b8b-401c-8e74-076be876a430))
+(Related policy: [Geo-redundant backup should be enabled for Azure Database for PostgreSQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f48af4db5-9b8b-401c-8e74-076be876a430)).
**Severity**: Low
### [GitHub repositories should have Code scanning enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/6672df26-ff2e-4282-83c3-e2f20571bd11)
-**Description**: GitHub uses code scanning to analyze code in order to find security vulnerabilities and errors in code. Code scanning can be used to find, triage, and prioritize fixes for existing problems in your code. Code scanning can also prevent developers from introducing new problems. Scans can be scheduled for specific days and times, or scans can be triggered when a specific event occurs in the repository, such as a push. If code scanning finds a potential vulnerability or error in code, GitHub displays an alert in the repository. A vulnerability is a problem in a project's code that could be exploited to damage the confidentiality, integrity, or availability of the project.
+**Description**: GitHub uses code scanning to analyze code in order to find security vulnerabilities and errors in code. Code scanning can be used to find, triage, and prioritize fixes for existing problems in your code. Code scanning can also prevent developers from introducing new problems. Scans can be scheduled for specific days and times, or scans can be triggered when a specific event occurs in the repository, such as a push. If code scanning finds a potential vulnerability or error in code, GitHub displays an alert in the repository. A vulnerability is a problem in a project's code that could be exploited to damage the confidentiality, integrity, or availability of the project.
(No related policy)
**Severity**: Medium
### [GitHub repositories should have Dependabot scanning enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/92643c1f-1a95-4b68-bbd2-5117f92d6e35)
-**Description**: GitHub sends Dependabot alerts when it detects vulnerabilities in code dependencies that affect repositories. A vulnerability is a problem in a project's code that could be exploited to damage the confidentiality, integrity, or availability of the project or other projects that use its code. Vulnerabilities vary in type, severity, and method of attack. When code depends on a package that has a security vulnerability, this vulnerable dependency can cause a range of problems.
+**Description**: GitHub sends Dependabot alerts when it detects vulnerabilities in code dependencies that affect repositories. A vulnerability is a problem in a project's code that could be exploited to damage the confidentiality, integrity, or availability of the project or other projects that use its code. Vulnerabilities vary in type, severity, and method of attack. When code depends on a package that has a security vulnerability, this vulnerable dependency can cause a range of problems.
(No related policy)
**Severity**: Medium
### [GitHub repositories should have Secret scanning enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1a600c61-6443-4ab4-bd28-7a6b6fb4691d)
-**Description**: GitHub scans repositories for known types of secrets, to prevent fraudulent use of secrets that were accidentally committed to repositories. Secret scanning will scan the entire Git history on all branches present in the GitHub repository for any secrets. Examples of secrets are tokens and private keys that a service provider can issue for authentication. If a secret is checked into a repository, anyone who has read access to the repository can use the secret to access the external service with those privileges. Secrets should be stored in a dedicated, secure location outside the repository for the project.
+**Description**: GitHub scans repositories for known types of secrets, to prevent fraudulent use of secrets that were accidentally committed to repositories. Secret scanning will scan the entire Git history on all branches present in the GitHub repository for any secrets. Examples of secrets are tokens and private keys that a service provider can issue for authentication. If a secret is checked into a repository, anyone who has read access to the repository can use the secret to access the external service with those privileges. Secrets should be stored in a dedicated, secure location outside the repository for the project.
(No related policy)
**Severity**: High
It includes functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate a threat to your database, and discovering and classifying sensitive data. Important: Protections from this plan are charged as shown on the [Defender plans](https://aka.ms/pricing-security-center) page. If you don't have any Azure SQL Database servers in this subscription, you won't be charged. If you later create Azure SQL Database servers on this subscription, they'll automatically be protected and charges will begin. Learn about the [pricing details per region](https://aka.ms/pricing-security-center). Learn more in [Introduction to Microsoft Defender for SQL](/azure/defender-for-cloud/defender-for-sql-introduction).
-(Related policy: [Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicyDefinitions%2f7fe3b40f-802b-4cdd-8bd4-fd799c948cc2))
+(Related policy: [Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicyDefinitions%2f7fe3b40f-802b-4cdd-8bd4-fd799c948cc2)).
**Severity**: High
Important: Remediating this recommendation will result in charges for protecting your SQL servers on machines. If you don't have any SQL servers on machines in this subscription, no charges will be incurred. If you create any SQL servers on machines on this subscription in the future, they will automatically be protected and charges will begin at that time. [Learn more about Microsoft Defender for SQL servers on machines.](/azure/azure-sql/database/advanced-data-security)
-(Related policy: [Azure Defender for SQL servers on machines should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicyDefinitions%2f6581d072-105e-4418-827f-bd446d56421b))
+(Related policy: [Azure Defender for SQL servers on machines should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicyDefinitions%2f6581d072-105e-4418-827f-bd446d56421b)).
**Severity**: High
### [Microsoft Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/400a6682-992c-4726-9549-629fbc3b988f)
**Description**: Microsoft Defender for SQL is a unified package that provides advanced SQL security capabilities. It surfaces and mitigates potential database vulnerabilities, and detects anomalous activities that could indicate a threat to your database. Microsoft Defender for SQL is billed as shown on [pricing details per region](https://aka.ms/pricing-security-center).
-(Related policy: [Advanced data security should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicydefinitions%2fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9))
+(Related policy: [Advanced data security should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicydefinitions%2fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9)).
**Severity**: High
### [Microsoft Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ff6dbca8-d93c-49fc-92af-dc25da7faccd)
**Description**: Microsoft Defender for SQL is a unified package that provides advanced SQL security capabilities. It surfaces and mitigates potential database vulnerabilities, and detects anomalous activities that could indicate a threat to your database. Microsoft Defender for SQL is billed as shown on [pricing details per region](https://aka.ms/pricing-security-center).
-(Related policy: [Advanced data security should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9))
+(Related policy: [Advanced data security should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9)).
**Severity**: High
**Description**: Microsoft Defender for storage detects unusual and potentially harmful attempts to access or exploit storage accounts. Important: Protections from this plan are charged as shown on the **Defender plans** page. If you don't have any Azure Storage accounts in this subscription, you won't be charged. If you later create Azure Storage accounts on this subscription, they'll automatically be protected and charges will begin. Learn about the [pricing details per region](https://aka.ms/pricing-security-center). Learn more in [Introduction to Microsoft Defender for Storage](/azure/defender-for-cloud/defender-for-storage-introduction).
-(Related policy: [Azure Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicyDefinitions%2f308fbb08-4ab8-4e67-9b29-592e93fb94fa))
+(Related policy: [Azure Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicyDefinitions%2f308fbb08-4ab8-4e67-9b29-592e93fb94fa)).
**Severity**: High
### [Network Watcher should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/f1f2f7dc-7bd5-18bf-c403-cbbdb7ec3d68)
**Description**: Network Watcher is a regional service that enables you to monitor and diagnose conditions at a network scenario level in, to, and from Azure. Scenario-level monitoring enables you to diagnose problems at an end-to-end network level view. Network diagnostic and visualization tools available with Network Watcher help you understand, diagnose, and gain insights into your network in Azure.
-(Related policy: [Network Watcher should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fb6e2945c-0b7b-40f5-9233-7a5323b5cdc6))
+(Related policy: [Network Watcher should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fb6e2945c-0b7b-40f5-9233-7a5323b5cdc6)).
**Severity**: Low
### [Over-provisioned identities in subscriptions should be investigated to reduce the Permission Creep Index (PCI)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/d103537b-9f3d-4658-a568-31dd66eb05cb)
**Description**: Over-provisioned identities in subscriptions should be investigated to reduce the Permission Creep Index (PCI) and to safeguard your infrastructure. Reduce the PCI by removing unused high-risk permission assignments. A high PCI reflects the risk associated with identities whose permissions exceed their normal or required usage.
-(No related policy)
+(No related policy).
**Severity**: Medium
### [Private endpoint connections on Azure SQL Database should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/75396512-3323-9be4-059d-32ecb113c3de)
**Description**: Private endpoint connections enforce secure communication by enabling private connectivity to Azure SQL Database.
-(Related policy: [Private endpoint connections on Azure SQL Database should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f7698e800-9299-47a6-b3b6-5a0fee576eed))
+(Related policy: [Private endpoint connections on Azure SQL Database should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f7698e800-9299-47a6-b3b6-5a0fee576eed)).
**Severity**: Medium
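As a rough illustration of how this state could be audited outside the portal, the sketch below lists the Azure SQL logical servers in a subscription and flags any that report no private endpoint connections. It assumes the server resource returns its connections inline under `properties.privateEndpointConnections`; the api-version is an assumption.

```python
# Sketch: flag Azure SQL logical servers without a private endpoint connection.
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

servers = requests.get(
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
    "/providers/Microsoft.Sql/servers?api-version=2021-11-01",  # assumed api-version
    headers={"Authorization": f"Bearer {token}"},
).json().get("value", [])

for server in servers:
    connections = server.get("properties", {}).get("privateEndpointConnections", [])
    if not connections:
        print(f"{server['name']}: no private endpoint connection")
```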
**Description**: Private endpoint connections enforce secure communication by enabling private connectivity to Azure Database for MariaDB. Configure a private endpoint connection to enable access to traffic coming only from known networks and prevent access from all other IP addresses, including within Azure.
-(Related policy: [Private endpoint should be enabled for MariaDB servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f0a1302fb-a631-4106-9753-f3d494733990))
+(Related policy: [Private endpoint should be enabled for MariaDB servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f0a1302fb-a631-4106-9753-f3d494733990)).
**Severity**: Medium
**Description**: Private endpoint connections enforce secure communication by enabling private connectivity to Azure Database for MySQL. Configure a private endpoint connection to enable access to traffic coming only from known networks and prevent access from all other IP addresses, including within Azure.
-(Related policy: [Private endpoint should be enabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f7595c971-233d-4bcf-bd18-596129188c49))
+(Related policy: [Private endpoint should be enabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f7595c971-233d-4bcf-bd18-596129188c49)).
**Severity**: Medium
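For a sense of what "configure a private endpoint connection" means in practice, here's a hedged sketch using the `azure-mgmt-network` SDK to create a private endpoint for an Azure Database for MySQL server. All names, the subnet ID, and the `mysqlServer` group ID are illustrative assumptions, not values from this article.

```python
# Sketch: create a private endpoint that fronts a MySQL single server.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

network = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

endpoint = network.private_endpoints.begin_create_or_update(
    resource_group_name="<resource-group>",
    private_endpoint_name="pe-mysql",
    parameters={
        "location": "eastus",  # must match the virtual network's region
        "subnet": {"id": "<subnet-resource-id>"},
        "private_link_service_connections": [{
            "name": "mysql-connection",
            "private_link_service_id": "<mysql-server-resource-id>",
            "group_ids": ["mysqlServer"],  # assumed sub-resource name for MySQL
        }],
    },
).result()
print(endpoint.provisioning_state)
```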
**Description**: Private endpoint connections enforce secure communication by enabling private connectivity to Azure Database for PostgreSQL. Configure a private endpoint connection to enable access to traffic coming only from known networks and prevent access from all other IP addresses, including within Azure.
-(Related policy: [Private endpoint should be enabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f0564d078-92f5-4f97-8398-b9f58a51f70b))
+(Related policy: [Private endpoint should be enabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f0564d078-92f5-4f97-8398-b9f58a51f70b)).
**Severity**: Medium
### [Public network access on Azure SQL Database should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/22e93e92-4a31-b4cd-d640-3ef908430aa6)
**Description**: Disabling the public network access property improves security by ensuring your Azure SQL Database can only be accessed from a private endpoint. This configuration denies all logins that match IP or virtual network based firewall rules.
-(Related policy: [Public network access on Azure SQL Database should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f1b8ca024-1d5c-4dec-8995-b1a932b41780))
+(Related policy: [Public network access on Azure SQL Database should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f1b8ca024-1d5c-4dec-8995-b1a932b41780)).
**Severity**: Medium
### [Public network access should be disabled for Cognitive Services accounts](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/684a5b6d-a270-61ce-306e-5cea400dc3a7)
**Description**: This policy audits any Cognitive Services account in your environment with public network access enabled. Public network access should be disabled so that only connections from private endpoints are allowed.
-(Related policy: [Public network access should be disabled for Cognitive Services accounts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f0725b4dd-7e76-479c-a735-68e7ee23d5ca))
+(Related policy: [Public network access should be disabled for Cognitive Services accounts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f0725b4dd-7e76-479c-a735-68e7ee23d5ca)).
**Severity**: Medium
### [Public network access should be disabled for MariaDB servers](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ab153e43-2fb5-0670-2117-70340851ea9b)
**Description**: Disable the public network access property to improve security and ensure your Azure Database for MariaDB can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules.
-(Related policy: [Public network access should be disabled for MariaDB servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2ffdccbe47-f3e3-4213-ad5d-ea459b2fa077))
+(Related policy: [Public network access should be disabled for MariaDB servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2ffdccbe47-f3e3-4213-ad5d-ea459b2fa077)).
**Severity**: Medium
### [Public network access should be disabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/d5d090f1-7d5c-9b38-7344-0ede8343276d)
**Description**: Disable the public network access property to improve security and ensure your Azure Database for MySQL can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules.
-(Related policy: [Public network access should be disabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fd9844e8a-1437-4aeb-a32c-0c992f056095))
+(Related policy: [Public network access should be disabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fd9844e8a-1437-4aeb-a32c-0c992f056095)).
**Severity**: Medium
### [Public network access should be disabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/b34f9fe7-80cd-6fb3-2c5b-951993746ca8)
**Description**: Disable the public network access property to improve security and ensure your Azure Database for PostgreSQL can only be accessed from a private endpoint. This configuration disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules.
-(Related policy: [Public network access should be disabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fb52376f7-9612-48a1-81cd-1ffe4b61032c))
+(Related policy: [Public network access should be disabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fb52376f7-9612-48a1-81cd-1ffe4b61032c)).
**Severity**: Medium
### [Redis Cache should allow access only via SSL](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/35b25be2-d08a-e340-45ed-f08a95d804fc)
**Description**: Enable only connections via SSL to Redis Cache. Use of secure connections ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session hijacking.
-(Related policy: [Only secure connections to your Azure Cache for Redis should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f22bee202-a82f-4305-9a2a-6d7f44d4dedb))
+(Related policy: [Only secure connections to your Azure Cache for Redis should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f22bee202-a82f-4305-9a2a-6d7f44d4dedb)).
**Severity**: High
### [SQL databases should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/82e20e14-edc5-4373-bfc4-f13121257c37)
**Description**: SQL Vulnerability assessment scans your database for security vulnerabilities, and exposes any deviations from best practices such as misconfigurations, excessive permissions, and unprotected sensitive data. Resolving the vulnerabilities found can greatly improve your database security posture. [Learn more](https://aka.ms/SQL-Vulnerability-Assessment/)
-(Related policy: [Vulnerabilities on your SQL databases should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicydefinitions%2ffeedbf84-6b99-488c-acc2-71c829aa5ffc))
+(Related policy: [Vulnerabilities on your SQL databases should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicydefinitions%2ffeedbf84-6b99-488c-acc2-71c829aa5ffc)).
**Severity**: High
### [SQL managed instances should have vulnerability assessment configured](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c42fc28d-1703-45fc-aaa5-39797f570513)
**Description**: Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities.
-(Related policy: [Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f1b7aa243-30e4-4c9e-bca8-d0d3022b634a))
+(Related policy: [Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f1b7aa243-30e4-4c9e-bca8-d0d3022b634a)).
**Severity**: High
### [SQL servers on machines should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/f97aa83c-9b63-4f9a-99f6-b22c4398f936)
**Description**: SQL Vulnerability assessment scans your database for security vulnerabilities, and exposes any deviations from best practices such as misconfigurations, excessive permissions, and unprotected sensitive data. Resolving the vulnerabilities found can greatly improve your database security posture. [Learn more](https://aka.ms/explore-vulnerability-assessment-reports/)
-(Related policy: [Vulnerabilities on your SQL servers on machine should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicydefinitions%2f6ba6d016-e7c3-4842-b8f2-4992ebc0d72d))
+(Related policy: [Vulnerabilities on your SQL servers on machine should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicydefinitions%2f6ba6d016-e7c3-4842-b8f2-4992ebc0d72d)).
**Severity**: High
### [SQL servers should have an Azure Active Directory administrator provisioned](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/f0553104-cfdb-65e6-759c-002812e38500)
**Description**: Provision an Azure AD administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services.
-(Related policy: [An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f1f314764-cb73-4fc9-b863-8eca98ac36e9))
+(Related policy: [An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f1f314764-cb73-4fc9-b863-8eca98ac36e9)).
**Severity**: High
### [SQL servers should have vulnerability assessment configured](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1db4f204-cb5a-4c9c-9254-7556403ce51c)
**Description**: Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities.
-(Related policy: [Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9))
+(Related policy: [Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9)).
**Severity**: High
### [Storage account should use a private link connection](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/cdc78c07-02b0-4af0-1cb2-cb7c672a8b0a)
**Description**: Private links enforce secure communication by providing private connectivity to the storage account.
-(Related policy: [Storage account should use a private link connection](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f6edd7eda-6dd8-40f7-810d-67160c639cd9))
+(Related policy: [Storage account should use a private link connection](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f6edd7eda-6dd8-40f7-810d-67160c639cd9)).
**Severity**: Medium
### [Storage accounts should be migrated to new Azure Resource Manager resources](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/47bb383c-8e25-95f0-c2aa-437add1d87d3)
-**Description**: To benefit from new capabilities in Azure Resource Manager, you can migrate existing deployments from the Classic deployment model. Resource Manager enables security enhancements such as: stronger access control (RBAC), better auditing, ARM-based deployment and governance, access to managed identities, access to key vault for secrets, Azure AD-based authentication and support for tags and resource groups for easier security management. [Learn more](/azure/virtual-machines/windows/migration-classic-resource-manager-overview)
-(Related policy: [Storage accounts should be migrated to new Azure Resource Manager resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f37e0d2fe-28a5-43d6-a273-67d37d1f5606))
+**Description**: To benefit from new capabilities in Azure Resource Manager, you can migrate existing deployments from the Classic deployment model. Resource Manager enables security enhancements such as: stronger access control (RBAC), better auditing, ARM-based deployment and governance, access to managed identities, access to key vault for secrets, Azure AD-based authentication, and support for tags and resource groups for easier security management. [Learn more](/azure/virtual-machines/windows/migration-classic-resource-manager-overview)
+(Related policy: [Storage accounts should be migrated to new Azure Resource Manager resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f37e0d2fe-28a5-43d6-a273-67d37d1f5606)).
**Severity**: Low
### [Storage accounts should restrict network access using virtual network rules](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ad4f3ff1-30eb-5042-16ed-27198f640b8d)
**Description**: Protect your storage accounts from potential threats using virtual network rules as a preferred method instead of IP-based filtering. Disabling IP-based filtering prevents public IPs from accessing your storage accounts.
-(Related policy: [Storage accounts should restrict network access using virtual network rules](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f2a1a9cdf-e04d-429a-8416-3bfb72a1b26f))
+(Related policy: [Storage accounts should restrict network access using virtual network rules](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f2a1a9cdf-e04d-429a-8416-3bfb72a1b26f)).
**Severity**: Medium
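A minimal sketch of that remediation, assuming the standard `networkAcls` shape on `Microsoft.Storage/storageAccounts`: point the account at a virtual network subnet and deny everything else. Resource names and the api-version are placeholders.

```python
# Sketch: restrict a storage account to one subnet via virtual network rules.
import requests
from azure.identity import DefaultAzureCredential

URL = (
    "https://management.azure.com/subscriptions/<subscription-id>"
    "/resourceGroups/<resource-group>/providers/Microsoft.Storage"
    "/storageAccounts/<account-name>?api-version=2023-01-01"  # assumed api-version
)

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
resp = requests.patch(
    URL,
    headers={"Authorization": f"Bearer {token}"},
    json={"properties": {"networkAcls": {
        "defaultAction": "Deny",          # block anything not matching a rule
        "bypass": "AzureServices",        # keep trusted Azure services working
        "virtualNetworkRules": [{"id": "<subnet-resource-id>"}],
        "ipRules": [],                    # prefer virtual network rules over IP filtering
    }}},
)
resp.raise_for_status()
```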
### [Transparent Data Encryption on SQL databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/651967bf-044e-4bde-8376-3e08e0600105)
**Description**: Enable transparent data encryption to protect data at rest and meet compliance requirements.
-(Related policy: [Transparent Data Encryption on SQL databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f17k78e20-9358-41c9-923c-fb736d382a12))
+(Related policy: [Transparent Data Encryption on SQL databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f17k78e20-9358-41c9-923c-fb736d382a12)).
**Severity**: Low
### [VM Image Builder templates should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/f6b0e473-eb23-c3be-fe61-2ae3e8309530)
-**Description**: Audit VM Image Builder templates that do not have a virtual network configured. When a virtual network is not configured, a public IP is created and used instead, which may directly expose resources to the internet and increase the potential attack surface.
-(Related policy: [VM Image Builder templates should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f2154edb9-244f-4741-9970-660785bccdaa))
+**Description**: Audit VM Image Builder templates that do not have a virtual network configured. When a virtual network is not configured, a public IP is created and used instead, which might directly expose resources to the internet and increase the potential attack surface.
+(Related policy: [VM Image Builder templates should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f2154edb9-244f-4741-9970-660785bccdaa)).
**Severity**: Medium
### [Web Application Firewall (WAF) should be enabled for Application Gateway](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/efe75f01-6fff-5d9d-08e6-092b98d3fb3f)
**Description**: Deploy Azure Web Application Firewall (WAF) in front of public-facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, cross-site scripting, and local and remote file executions. You can also restrict access to your web applications by countries/regions, IP address ranges, and other HTTP(S) parameters via custom rules.
-(Related policy: [Web Application Firewall (WAF) should be enabled for Application Gateway](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f564feb30-bf6a-4854-b4bb-0d2d2d1e6c66))
+(Related policy: [Web Application Firewall (WAF) should be enabled for Application Gateway](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f564feb30-bf6a-4854-b4bb-0d2d2d1e6c66)).
**Severity**: Low
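To make the remediation concrete, here's a hedged sketch with `azure-mgmt-network` that enables the classic WAF configuration on an existing gateway (newer deployments attach a standalone WAF policy instead). It assumes the gateway already runs a WAF-capable SKU such as WAF_v2; names are placeholders, and OWASP 3.2 is just one commonly used rule set.

```python
# Sketch: turn on WAF (Prevention mode) for an existing Application Gateway.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import (
    ApplicationGatewayWebApplicationFirewallConfiguration,
)

network = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
gateway = network.application_gateways.get("<resource-group>", "<gateway-name>")

gateway.web_application_firewall_configuration = (
    ApplicationGatewayWebApplicationFirewallConfiguration(
        enabled=True,
        firewall_mode="Prevention",  # block matching requests, not just log them
        rule_set_type="OWASP",
        rule_set_version="3.2",
    )
)
network.application_gateways.begin_create_or_update(
    "<resource-group>", "<gateway-name>", gateway
).result()
```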
### [Cognitive Services should use private link](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/54f53ddf-6ebd-461e-a247-394c542bc5d1)
**Description**: Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Cognitive Services, you'll reduce the potential for data leakage. Learn more about [private links](https://go.microsoft.com/fwlink/?linkid=2129800).
-(Related policy: [Cognitive Services should use private link](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fcddd188c-4b82-4c48-a19d-ddf74ee66a01))
+(Related policy: [Cognitive Services should use private link](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fcddd188c-4b82-4c48-a19d-ddf74ee66a01)).
**Severity**: Medium
### [Azure Cosmos DB should disable public network access](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/334a182c-7c2c-41bc-ae1e-55327891ab50)
**Description**: Disabling public network access improves security by ensuring that your Cosmos DB account isn't exposed on the public internet. Creating private endpoints can limit exposure of your Cosmos DB account. [Learn more](/azure/cosmos-db/how-to-configure-private-endpoints#blocking-public-network-access-during-account-creation).
-(Related policy: [Azure Cosmos DB should disable public network access](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f797b37f7-06b8-444c-b1ad-fc62867f335a))
+(Related policy: [Azure Cosmos DB should disable public network access](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f797b37f7-06b8-444c-b1ad-fc62867f335a)).
**Severity**: Medium
### [Cosmos DB accounts should use private link](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/80dc29d6-9887-4071-a66c-e763376c2de3)
**Description**: Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Cosmos DB account, data leakage risks are reduced. Learn more about [private links](/azure/cosmos-db/how-to-configure-private-endpoints).
-(Related policy: [Cosmos DB accounts should use private link](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f58440f8a-10c5-4151-bdce-dfbaad4a20b7))
+(Related policy: [Cosmos DB accounts should use private link](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f58440f8a-10c5-4151-bdce-dfbaad4a20b7)).
**Severity**: Medium
### [Azure SQL Database should be running TLS version 1.2 or newer](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/8e9a37b9-2828-4c8f-a24e-7b0ab0e89c78)
**Description**: Setting the TLS version to 1.2 or newer improves security by ensuring your Azure SQL Database can only be accessed from clients using TLS 1.2 or newer. Using TLS versions older than 1.2 is not recommended, since they have well-documented security vulnerabilities.
-(Related policy: [Azure SQL Database should be running TLS version 1.2 or newer](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f32e6bbec-16b6-44c2-be37-c5b672d103cf))
+(Related policy: [Azure SQL Database should be running TLS version 1.2 or newer](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f32e6bbec-16b6-44c2-be37-c5b672d103cf)).
**Severity**: Medium
### [Azure SQL Managed Instances should disable public network access](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/a2624c52-2937-400c-af9d-3bf2d97382bf)
**Description**: Disabling public network access (public endpoint) on Azure SQL Managed Instances improves security by ensuring that they can only be accessed from inside their virtual networks or via Private Endpoints. Learn more about [public network access](https://aka.ms/mi-public-endpoint).
-(Related policy: [Azure SQL Managed Instances should disable public network access](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f9dfea752-dd46-4766-aed1-c355fa93fb91))
+(Related policy: [Azure SQL Managed Instances should disable public network access](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f9dfea752-dd46-4766-aed1-c355fa93fb91)).
**Severity**: Medium
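A minimal sketch of disabling the public endpoint on a managed instance, assuming the `publicDataEndpointEnabled` property on `Microsoft.Sql/managedInstances`; names and the api-version are placeholders.

```python
# Sketch: turn off the public data endpoint of a SQL Managed Instance.
import requests
from azure.identity import DefaultAzureCredential

URL = (
    "https://management.azure.com/subscriptions/<subscription-id>"
    "/resourceGroups/<resource-group>/providers/Microsoft.Sql"
    "/managedInstances/<instance-name>?api-version=2021-11-01"  # assumed api-version
)

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
resp = requests.patch(
    URL,
    headers={"Authorization": f"Bearer {token}"},
    json={"properties": {"publicDataEndpointEnabled": False}},
)
resp.raise_for_status()  # the update completes asynchronously on the service side
```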
### [A maximum of 3 owners should be designated for subscriptions](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/6f90a6d6-d4d6-0794-0ec1-98fa77878c2e)
**Description**: To reduce the potential for breaches by compromised owner accounts, we recommend limiting the number of owner accounts to a maximum of 3.
-(Related policy: [A maximum of 3 owners should be designated for your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f4f11b553-d42e-4e3a-89be-32ca364cad4c))
+(Related policy: [A maximum of 3 owners should be designated for your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f4f11b553-d42e-4e3a-89be-32ca364cad4c)).
**Severity**: High
### [Accounts with owner permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/6240402e-f77c-46fa-9060-a7ce53997754)
-**Description**: If you only use passwords to authenticate your users, you're leaving an attack vector open. Users often use weak passwords for multiple services. By enabling [multifactor authentication](/en-us/azure/defender-for-cloud/multi-factor-authentication-enforcement) (MFA), you provide better security for your accounts, while still allowing your users to authenticate to almost any application with single sign-on (SSO). Multifactor authentication is a process by which users are prompted, during the sign-in process, for another form of identification. For example, a code may be sent to their cellphone, or they may be asked for a fingerprint scan. We recommend you to enable MFA for all accounts that have [owner permissions](/en-us/azure/role-based-access-control/built-in-roles#owner) on Azure resources, to prevent breach and attacks.
+**Description**: If you only use passwords to authenticate your users, you're leaving an attack vector open. Users often use weak passwords for multiple services. By enabling [multifactor authentication](/en-us/azure/defender-for-cloud/multi-factor-authentication-enforcement) (MFA), you provide better security for your accounts, while still allowing your users to authenticate to almost any application with single sign-on (SSO). Multifactor authentication is a process by which users are prompted, during the sign-in process, for another form of identification. For example, a code might be sent to their cellphone, or they might be asked for a fingerprint scan. We recommend you to enable MFA for all accounts that have [owner permissions](/en-us/azure/role-based-access-control/built-in-roles#owner) on Azure resources, to prevent breach and attacks.
More details and frequently asked questions are available here: [Manage multifactor authentication (MFA) enforcement on your subscriptions](/en-us/azure/defender-for-cloud/multi-factor-authentication-enforcement)
-(No related policy)
+(No related policy).
**Severity**: High
### [Accounts with read permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dabc9bc4-b8a8-45bd-9a5a-43000df8aa1c)
-**Description**: If you only use passwords to authenticate your users, you're leaving an attack vector open. Users often use weak passwords for multiple services. By enabling [multifactor authentication](/en-us/azure/defender-for-cloud/multi-factor-authentication-enforcement) (MFA), you provide better security for your accounts, while still allowing your users to authenticate to almost any application with single sign-on (SSO). Multifactor authentication is a process by which users are prompted, during the sign-in process, for an additional form of identification. For example, a code may be sent to their cellphone, or they may be asked for a fingerprint scan. We recommend you to enable MFA for all accounts that have [read permissions](/en-us/azure/role-based-access-control/built-in-roles#owner) on Azure resources, to prevent breach and attacks.
+**Description**: If you only use passwords to authenticate your users, you're leaving an attack vector open. Users often use weak passwords for multiple services. By enabling [multifactor authentication](/en-us/azure/defender-for-cloud/multi-factor-authentication-enforcement) (MFA), you provide better security for your accounts, while still allowing your users to authenticate to almost any application with single sign-on (SSO). Multifactor authentication is a process by which users are prompted, during the sign-in process, for an additional form of identification. For example, a code might be sent to their cellphone, or they might be asked for a fingerprint scan. We recommend you to enable MFA for all accounts that have [read permissions](/en-us/azure/role-based-access-control/built-in-roles#owner) on Azure resources, to prevent breach and attacks.
More details and frequently asked questions are available [here](/en-us/azure/defender-for-cloud/multi-factor-authentication-enforcement).
(No related policy)
### [Accounts with write permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c0cb17b2-0607-48a7-b0e0-903ed22de39b)
-**Description**: If you only use passwords to authenticate your users, you are leaving an attack vector open. Users often use weak passwords for multiple services. By enabling [multifactor authentication](/en-us/azure/defender-for-cloud/multi-factor-authentication-enforcement) (MFA), you provide better security for your accounts, while still allowing your users to authenticate to almost any application with single sign-on (SSO). Multifactor authentication is a process by which users are prompted, during the sign-in process, for an additional form of identification. For example, a code may be sent to their cellphone, or they may be asked for a fingerprint scan. We recommend you to enable MFA for all accounts that have [write permissions](/en-us/azure/role-based-access-control/built-in-roles#owner) on Azure resources, to prevent breach and attacks.
+**Description**: If you only use passwords to authenticate your users, you are leaving an attack vector open. Users often use weak passwords for multiple services. By enabling [multifactor authentication](/en-us/azure/defender-for-cloud/multi-factor-authentication-enforcement) (MFA), you provide better security for your accounts, while still allowing your users to authenticate to almost any application with single sign-on (SSO). Multifactor authentication is a process by which users are prompted, during the sign-in process, for an additional form of identification. For example, a code might be sent to their cellphone, or they might be asked for a fingerprint scan. We recommend you to enable MFA for all accounts that have [write permissions](/en-us/azure/role-based-access-control/built-in-roles#owner) on Azure resources, to prevent breach and attacks.
More details and frequently asked questions are available here: [Manage multifactor authentication (MFA) enforcement on your subscriptions](/en-us/azure/defender-for-cloud/multi-factor-authentication-enforcement)
-(No related policy)
+(No related policy).
**Severity**: High
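MFA itself is enforced in Microsoft Entra ID, but the starting point of such an audit is knowing which accounts hold write permissions. The sketch below (an illustration, not the portal's assessment logic) lists subscription role assignments and flags those bound to the built-in Owner or Contributor roles, using their well-known role definition GUIDs.

```python
# Sketch: list accounts with write-capable built-in roles on a subscription.
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder
WRITE_ROLES = {
    "8e3af657-a8ff-443c-a75c-2fe8c4bcb635": "Owner",
    "b24988ac-6180-42a0-ab88-20f7382dd24c": "Contributor",
}

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
assignments = requests.get(
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
    "/providers/Microsoft.Authorization/roleAssignments?api-version=2022-04-01",
    headers={"Authorization": f"Bearer {token}"},
).json().get("value", [])

for assignment in assignments:
    role_id = assignment["properties"]["roleDefinitionId"].rsplit("/", 1)[-1]
    if role_id in WRITE_ROLES:
        # principalId is an object ID; check its MFA registration in Entra ID.
        print(WRITE_ROLES[role_id], assignment["properties"]["principalId"])
```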
**Description**: User accounts that have been blocked from signing in should be removed from your subscriptions. These accounts can be targets for attackers looking to find ways to access your data without being noticed.
-(Related policy: [Deprecated accounts should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f6b1cbf55-e8b6-442f-ba4c-7246b6381474))
+(Related policy: [Deprecated accounts should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f6b1cbf55-e8b6-442f-ba4c-7246b6381474)).
**Severity**: High
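The removal step amounts to deleting the role assignments the blocked account still holds. A cautious sketch follows (placeholders throughout; run such a script only after confirming the account really is retired):

```python
# Sketch: delete every subscription role assignment held by one blocked account.
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "<subscription-id>"    # placeholder
BLOCKED_OBJECT_ID = "<entra-object-id>"  # the blocked account's object ID

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
headers = {"Authorization": f"Bearer {token}"}

# Server-side filter narrows the listing to the one principal being removed.
listing = requests.get(
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
    "/providers/Microsoft.Authorization/roleAssignments"
    f"?api-version=2022-04-01&$filter=principalId eq '{BLOCKED_OBJECT_ID}'",
    headers=headers,
).json()

for assignment in listing.get("value", []):
    requests.delete(
        f"https://management.azure.com{assignment['id']}?api-version=2022-04-01",
        headers=headers,
    ).raise_for_status()
```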
**Description**: User accounts that have been blocked from signing in should be removed from your subscriptions. These accounts can be targets for attackers looking to find ways to access your data without being noticed.
-(Related policy: [Deprecated accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2febb62a0c-3560-49e1-89ed-27e074e9f8ad))
+(Related policy: [Deprecated accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2febb62a0c-3560-49e1-89ed-27e074e9f8ad)).
**Severity**: High
### [Diagnostic logs in Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/88bbc99c-e5af-ddd7-6105-6150b2bfa519)
**Description**: Enable logs and retain them for up to a year. This enables you to recreate activity trails for investigation purposes when a security incident occurs or your network is compromised.
-(Related policy: [Diagnostic logs in Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fcf820ca0-f99e-4f3e-84fb-66e913812d21))
+(Related policy: [Diagnostic logs in Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fcf820ca0-f99e-4f3e-84fb-66e913812d21)).
**Severity**: Low
### [External accounts with owner permissions should be removed from subscriptions](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c3b6ae71-f1f0-31b4-e6c1-d5951285d03d)
**Description**: Accounts with owner permissions that have different domain names (external accounts) should be removed from your subscription. This prevents unmonitored access. These accounts can be targets for attackers looking to find ways to access your data without being noticed.
-(Related policy: [External accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2ff8456c1c-aa66-4dfb-861a-25d127b775c9))
+(Related policy: [External accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2ff8456c1c-aa66-4dfb-861a-25d127b775c9)).
**Severity**: High
### [External accounts with read permissions should be removed from subscriptions](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/a8c6a4ad-d51e-88fe-2979-d3ee3c864f8b)
**Description**: Accounts with read permissions that have different domain names (external accounts) should be removed from your subscription. This prevents unmonitored access. These accounts can be targets for attackers looking to find ways to access your data without being noticed.
-(Related policy: [External accounts with read permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f5f76cf89-fbf2-47fd-a3f4-b891fa780b60))
+(Related policy: [External accounts with read permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f5f76cf89-fbf2-47fd-a3f4-b891fa780b60)).
**Severity**: High
### [External accounts with write permissions should be removed from subscriptions](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/04e7147b-0deb-9796-2e5c-0336343ceb3d)
**Description**: Accounts with write permissions that have different domain names (external accounts) should be removed from your subscription. This prevents unmonitored access. These accounts can be targets for attackers looking to find ways to access your data without being noticed.
-(Related policy: [External accounts with write permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f5c607a2e-c700-4744-8254-d77e7c9eb5e4))
+(Related policy: [External accounts with write permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f5c607a2e-c700-4744-8254-d77e7c9eb5e4)).
**Severity**: High
### [Firewall should be enabled on Key Vault](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/52f7826a-ace7-3107-dd0d-4875853c1576)
**Description**: Key vault's firewall prevents unauthorized traffic from reaching your key vault and provides an additional layer of protection for your secrets. Enable the firewall to make sure that only traffic from allowed networks can access your key vault.
-(Related policy: [Firewall should be enabled on Key Vault](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f55615ac9-af46-4a59-874e-391cc3dfb490))
+(Related policy: [Firewall should be enabled on Key Vault](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f55615ac9-af46-4a59-874e-391cc3dfb490)).
**Severity**: Medium
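A minimal sketch of enabling that firewall, assuming the standard `networkAcls` block on `Microsoft.KeyVault/vaults`: switch the vault to deny-by-default and allow only named networks. The IP range, subnet ID, and api-version are placeholders.

```python
# Sketch: enable the Key Vault firewall (deny by default, allow listed networks).
import requests
from azure.identity import DefaultAzureCredential

URL = (
    "https://management.azure.com/subscriptions/<subscription-id>"
    "/resourceGroups/<resource-group>/providers/Microsoft.KeyVault"
    "/vaults/<vault-name>?api-version=2023-07-01"  # assumed api-version
)

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
resp = requests.patch(
    URL,
    headers={"Authorization": f"Bearer {token}"},
    json={"properties": {"networkAcls": {
        "defaultAction": "Deny",           # the firewall: block unlisted traffic
        "bypass": "AzureServices",         # allow trusted Microsoft services
        "ipRules": [{"value": "203.0.113.0/24"}],            # example allowed range
        "virtualNetworkRules": [{"id": "<subnet-resource-id>"}],
    }}},
)
resp.raise_for_status()
```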
### [Key Vault keys should have an expiration date](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1aabfa0d-7585-f9f5-1d92-ecb40291d9f2)
**Description**: Cryptographic keys should have a defined expiration date and not be permanent. Keys that are valid forever provide a potential attacker with more time to compromise the key. It's a recommended security practice to set expiration dates on cryptographic keys.
-(Related policy: [Key Vault keys should have an expiration date](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f152b15f7-8e1f-4c1f-ab71-8c010ba5dbc0))
+(Related policy: [Key Vault keys should have an expiration date](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f152b15f7-8e1f-4c1f-ab71-8c010ba5dbc0)).
**Severity**: High
### [Key Vault secrets should have an expiration date](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/14257785-9437-97fa-11ae-898cfb24302b)
**Description**: Secrets should have a defined expiration date and not be permanent. Secrets that are valid forever provide a potential attacker with more time to compromise them. It's a recommended security practice to set expiration dates on secrets.
-(Related policy: [Key Vault secrets should have an expiration date](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f98728c90-32c7-4049-8429-847dc0f4fe37))
+(Related policy: [Key Vault secrets should have an expiration date](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f98728c90-32c7-4049-8429-847dc0f4fe37)).
**Severity**: High
### [Key vaults should have purge protection enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/4ed62ae4-5072-f9e7-8d94-51c76c48159a)
**Description**: Malicious deletion of a key vault can lead to permanent data loss. A malicious insider in your organization can potentially delete and purge key vaults. Purge protection protects you from insider attacks by enforcing a mandatory retention period for soft-deleted key vaults. No one inside your organization or Microsoft will be able to purge your key vaults during the soft delete retention period.
-(Related policy: [Key vaults should have purge protection enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f0b60c0b2-2dc2-4e1c-b5c9-abbed971de53))
+(Related policy: [Key vaults should have purge protection enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f0b60c0b2-2dc2-4e1c-b5c9-abbed971de53)).
**Severity**: Medium
### [Key vaults should have soft delete enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/78211c00-15a9-336e-17c4-0b48613dadf4)
**Description**: Deleting a key vault without soft delete enabled permanently deletes all secrets, keys, and certificates stored in the key vault. Accidental deletion of a key vault can lead to permanent data loss. Soft delete allows you to recover an accidentally deleted key vault for a configurable retention period.
-(Related policy: [Key vaults should have soft delete enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f1e66c121-a66a-4b1f-9b83-0fd99bf0fc2d))
+(Related policy: [Key vaults should have soft delete enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f1e66c121-a66a-4b1f-9b83-0fd99bf0fc2d)).
**Severity**: High
### [MFA should be enabled on accounts with owner permissions on subscriptions](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/94290b00-4d0c-d7b4-7cea-064a9554e681)
**Description**: Multifactor authentication (MFA) should be enabled for all subscription accounts with owner permissions to prevent a breach of accounts or resources.
-(Related policy: [MFA should be enabled on accounts with owner permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2faa633080-8b72-40c4-a2d7-d00c03e80bed))
+(Related policy: [MFA should be enabled on accounts with owner permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2faa633080-8b72-40c4-a2d7-d00c03e80bed)).
**Severity**: High
### [MFA should be enabled on accounts with read permissions on subscriptions](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/151e82c5-5341-a74b-1eb0-bc38d2c84bb5)
**Description**: Multifactor authentication (MFA) should be enabled for all subscription accounts with read privileges to prevent a breach of accounts or resources.
-(Related policy: [MFA should be enabled on accounts with read permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fe3576e28-8b17-4677-84c3-db2990658d64))
+(Related policy: [MFA should be enabled on accounts with read permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fe3576e28-8b17-4677-84c3-db2990658d64)).
**Severity**: High
### [MFA should be enabled on accounts with write permissions on subscriptions](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/57e98606-6b1e-6193-0e3d-fe621387c16b)
**Description**: Multifactor authentication (MFA) should be enabled for all subscription accounts with write privileges to prevent a breach of accounts or resources.
-(Related policy: [MFA should be enabled accounts with write permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f9297c21d-2ed6-4474-b48f-163f75654ce3))
+(Related policy: [MFA should be enabled accounts with write permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f9297c21d-2ed6-4474-b48f-163f75654ce3)).
**Severity**: High
Microsoft Defender for Key Vault detects unusual and potentially harmful attempts to access or exploit Key Vault accounts. Important: Protections from this plan are charged as shown on the **Defender plans** page. If you don't have any key vaults in this subscription, you won't be charged. If you later create key vaults on this subscription, they'll automatically be protected and charges will begin. Learn about the [pricing details per region](https://aka.ms/pricing-security-center). Learn more in [Introduction to Microsoft Defender for Key Vault](/azure/defender-for-cloud/defender-for-key-vault-introduction).
-(Related policy: [Azure Defender for Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicyDefinitions%2f0e6763cc-5078-4e64-889d-ff4d9a839047))
+(Related policy: [Azure Defender for Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicyDefinitions%2f0e6763cc-5078-4e64-889d-ff4d9a839047)).
**Severity**: High
### [Private endpoint should be configured for Key Vault](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/2e96bc2f-1972-e471-9e70-ae58d41e9d2a)
**Description**: Private link provides a way to connect Key Vault to your Azure resources without sending traffic over the public internet. Private link provides defense-in-depth protection against data exfiltration.
-(Related policy: [Private endpoint should be configured for Key Vault](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f5f0bc445-3935-4915-9981-011aa2b46147))
+(Related policy: [Private endpoint should be configured for Key Vault](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f5f0bc445-3935-4915-9981-011aa2b46147)).
**Severity**: Medium
### [Storage account public access should be disallowed](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/51fd8bb1-0db4-bbf1-7e2b-cfcba7eb66a6)
**Description**: Anonymous public read access to containers and blobs in Azure Storage is a convenient way to share data, but might present security risks. To prevent data breaches caused by undesired anonymous access, Microsoft recommends preventing public access to a storage account unless your scenario requires it.
-(Related policy: [Storage account public access should be disallowed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicyDefinitions%2f4fa4b6c0-31ca-4c0d-b10d-24b96f62a751))
+(Related policy: [Storage account public access should be disallowed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicyDefinitions%2f4fa4b6c0-31ca-4c0d-b10d-24b96f62a751)).
**Severity**: Medium

### [There should be more than one owner assigned to subscriptions](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/2c79b4af-f830-b61e-92b9-63dfa30f16e4)

**Description**: Designate more than one subscription owner in order to have administrator access redundancy.
-(Related policy: [There should be more than one owner assigned to your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f09024ccc-0c5f-475e-9457-b7c0d9ed487b))
+(Related policy: [There should be more than one owner assigned to your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f09024ccc-0c5f-475e-9457-b7c0d9ed487b)).
**Severity**: High

### [Validity period of certificates stored in Azure Key Vault should not exceed 12 months](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/fc84abc0-eee6-4758-8372-a7681965ca44)

**Description**: Ensure your certificates do not have a validity period that exceeds 12 months.
-(Related policy: [Certificates should have the specified maximum validity period](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f0a075868-4c26-42ef-914c-5bc007359560))
+(Related policy: [Certificates should have the specified maximum validity period](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f0a075868-4c26-42ef-914c-5bc007359560)).
**Severity**: Medium
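For reference, Azure PowerShell can enforce this cap at certificate creation time. A minimal sketch, assuming placeholder vault and certificate names (not taken from this article):

```powershell
# Create a certificate policy capped at 12 months of validity, then request
# a self-signed certificate with that policy. Names are placeholders.
$policy = New-AzKeyVaultCertificatePolicy -SubjectName "CN=contoso.com" `
    -IssuerName Self -ValidityInMonths 12

Add-AzKeyVaultCertificate -VaultName "my-vault" -Name "my-cert" -CertificatePolicy $policy
```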
Learn more in [Introduction to Microsoft Defender for Key Vault](/azure/defender
### [Super identities in your Azure environment should be removed (Preview)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/fe7d5a87-36fc-4530-99b5-1848512a3209)
-**Description**: Super Identity is any human or workload identity such as users, Service Principals and serverless functions that have admin permissions and can perform any action on any resource across the infrastructure. Super Identities are extremely high risk, as any malicious or accidental permissions misuse can result in catastrophic service disruption, service degradation, or data leakage. Super Identities pose a huge threat to cloud infrastructure. Too many super identities can create excessive risks and increase the blast radius during a breach.
+**Description**: Super Identity is any human or workload identity such as users, Service Principals, and serverless functions that have admin permissions and can perform any action on any resource across the infrastructure. Super Identities are extremely high risk, as any malicious or accidental permissions misuse can result in catastrophic service disruption, service degradation, or data leakage. Super Identities pose a huge threat to cloud infrastructure. Too many super identities can create excessive risks and increase the blast radius during a breach.
**Severity**: Medium
Learn more in [Introduction to Microsoft Defender for Key Vault](/azure/defender
### [Default IP Filter Policy should be Deny](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/5a3d6cdd-8eb3-46d2-ba11-d24a0d47fe65)

**Description**: IP Filter Configuration should have rules defined for allowed traffic and should deny all other traffic by default
-(No related policy)
+(No related policy).
**Severity**: Medium

### [Diagnostic logs in IoT Hub should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/77785808-ce86-4e40-b45f-19110a547397)

**Description**: Enable logs and retain them for up to a year. This enables you to recreate activity trails for investigation purposes when a security incident occurs or your network is compromised.
-(Related policy: [Diagnostic logs in IoT Hub should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f383856f8-de7f-44a2-81fc-e5135b5c2aa4))
+(Related policy: [Diagnostic logs in IoT Hub should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f383856f8-de7f-44a2-81fc-e5135b5c2aa4)).
**Severity**: Low

### [Identical Authentication Credentials](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/9d07b7e6-2986-4964-a76c-b2689604e212)

**Description**: Identical authentication credentials to the IoT Hub used by multiple devices. This could indicate an illegitimate device impersonating a legitimate device. It also exposes the risk of device impersonation by an attacker
-(No related policy)
+(No related policy).
**Severity**: High

### [IP Filter rule large IP range](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/d8326952-60bb-40fb-b33f-51e662708a88)

**Description**: An Allow IP Filter rule's source IP range is too large. Overly permissive rules might expose your IoT hub to malicious intenders
-(No related policy)
+(No related policy).
**Severity**: Medium
Learn more in [Introduction to Microsoft Defender for Key Vault](/azure/defender
### [Access to storage accounts with firewall and virtual network configurations should be restricted](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/45d313c3-3fca-5040-035f-d61928366d31)
-**Description**: Review the settings of network access in your storage account firewall settings. We recommended configuring network rules so that only applications from allowed networks can access the storage account. To allow connections from specific internet or on-premise clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges.
-(Related policy: [Storage accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f34c877ad-507e-4c82-993e-3452a6e0ad3c))
+**Description**: Review the settings of network access in your storage account firewall settings. We recommend configuring network rules so that only applications from allowed networks can access the storage account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges.
+(Related policy: [Storage accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f34c877ad-507e-4c82-993e-3452a6e0ad3c)).
**Severity**: Low
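As a reference, one way to apply such network rules with Azure PowerShell; a sketch with placeholder resource names (the subnet needs the Microsoft.Storage service endpoint enabled):

```powershell
# Default-deny the storage firewall, then allow a specific vNet subnet
# and a public IP range. All names and ranges are placeholders.
Update-AzStorageAccountNetworkRuleSet -ResourceGroupName "my-rg" `
    -Name "mystorageacct" -DefaultAction Deny

$subnet = (Get-AzVirtualNetwork -ResourceGroupName "my-rg" -Name "my-vnet").Subnets[0]
Add-AzStorageAccountNetworkRule -ResourceGroupName "my-rg" -Name "mystorageacct" `
    -VirtualNetworkResourceId $subnet.Id
Add-AzStorageAccountNetworkRule -ResourceGroupName "my-rg" -Name "mystorageacct" `
    -IPAddressOrRange "203.0.113.0/24"
```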
Learn more in [Introduction to Microsoft Defender for Key Vault](/azure/defender
**Description**: Defender for Cloud has analyzed the internet traffic communication patterns of the virtual machines listed below, and determined that the existing rules in the NSGs associated to them are overly permissive, resulting in an increased potential attack surface. This typically occurs when this IP address doesn't communicate regularly with this resource. Alternatively, the IP address has been flagged as malicious by Defender for Cloud's threat intelligence sources. Learn more in [Improve your network security posture with adaptive network hardening](/azure/defender-for-cloud/adaptive-network-hardening).
-(Related policy: [Adaptive network hardening recommendations should be applied on internet facing virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f08e6af2d-db70-460a-bfe9-d5bd474ba9d6))
+(Related policy: [Adaptive network hardening recommendations should be applied on internet facing virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f08e6af2d-db70-460a-bfe9-d5bd474ba9d6)).
**Severity**: High

### [All network ports should be restricted on network security groups associated to your virtual machine](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/3b20e985-f71f-483b-b078-f30d73936d43)

**Description**: Defender for Cloud has identified some of your network security groups' inbound rules to be too permissive. Inbound rules should not allow access from 'Any' or 'Internet' ranges. This can potentially enable attackers to target your resources.
-(Related policy: [All network ports should be restricted on network security groups associated to your virtual machine](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f9daedab3-fb2d-461e-b861-71790eead4f6))
+(Related policy: [All network ports should be restricted on network security groups associated to your virtual machine](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f9daedab3-fb2d-461e-b861-71790eead4f6)).
**Severity**: High

### [Azure DDoS Protection Standard should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e3de1cc0-f4dd-3b34-e496-8b5381ba2d70)

**Description**: Defender for Cloud has discovered virtual networks with Application Gateway resources unprotected by the DDoS protection service. These resources contain public IPs. Enable mitigation of network volumetric and protocol attacks.
-(Related policy: [Azure DDoS Protection Standard should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fa7aca53f-2ed4-4466-a25e-0b45ade68efd))
+(Related policy: [Azure DDoS Protection Standard should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fa7aca53f-2ed4-4466-a25e-0b45ade68efd)).
**Severity**: Medium
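For context, attaching a DDoS protection plan to a virtual network with Azure PowerShell might look like the following sketch (placeholder names; the property-assignment pattern assumes the Az.Network object model):

```powershell
# Create a DDoS protection plan and attach it to an existing virtual network.
$plan = New-AzDdosProtectionPlan -ResourceGroupName "my-rg" `
    -Name "my-ddos-plan" -Location "westus2"

$vnet = Get-AzVirtualNetwork -ResourceGroupName "my-rg" -Name "my-vnet"
$vnet.DdosProtectionPlan = New-Object Microsoft.Azure.Commands.Network.Models.PSResourceId
$vnet.DdosProtectionPlan.Id = $plan.Id
$vnet.EnableDdosProtection = $true
$vnet | Set-AzVirtualNetwork
```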
This typically occurs when this IP address doesn't communicate regularly with th
**Description**: Protect your VM from potential threats by restricting access to it with a network security group (NSG). NSGs contain a list of Access Control List (ACL) rules that allow or deny network traffic to your VM from other instances, in or outside the same subnet. To keep your machine as secure as possible, the VM access to the internet must be restricted and an NSG should be enabled on the subnet. VMs with 'High' severity are internet-facing VMs.
-(Related policy: [Internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2ff6de0be7-9a8a-4b8a-b349-43cf02d22f7c))
+(Related policy: [Internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2ff6de0be7-9a8a-4b8a-b349-43cf02d22f7c)).
**Severity**: High

### [IP forwarding on your virtual machine should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c3b51c94-588b-426b-a892-24696f9e54cc)

**Description**: Defender for Cloud has discovered that IP forwarding is enabled on some of your virtual machines. Enabling IP forwarding on a virtual machine's NIC allows the machine to receive traffic addressed to other destinations. IP forwarding is rarely required (e.g., when using the VM as a network virtual appliance), and therefore, this should be reviewed by the network security team.
-(Related policy: [IP Forwarding on your virtual machine should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fbd352bd5-2853-4985-bf0d-73806b4a5744))
+(Related policy: [IP Forwarding on your virtual machine should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fbd352bd5-2853-4985-bf0d-73806b4a5744)).
**Severity**: Medium
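Remediation is a single NIC property change. A minimal Azure PowerShell sketch, assuming placeholder NIC and resource group names:

```powershell
# Turn off IP forwarding on a NIC flagged by this recommendation.
$nic = Get-AzNetworkInterface -ResourceGroupName "my-rg" -Name "my-vm-nic"
$nic.EnableIPForwarding = $false
$nic | Set-AzNetworkInterface
```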
VMs with 'High' severity are internet-facing VMs.
### [Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/805651bc-6ecd-4c73-9b55-97a19d0582d0)

**Description**: Defender for Cloud has identified some overly permissive inbound rules for management ports in your Network Security Group. Enable just-in-time access control to protect your VM from internet-based brute-force attacks. Learn more in [Understanding just-in-time (JIT) VM access](/azure/defender-for-cloud/just-in-time-access-overview).
-(Related policy: [Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fb0f33259-77d7-4c9e-aac6-3aabcfae693c))
+(Related policy: [Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fb0f33259-77d7-4c9e-aac6-3aabcfae693c)).
**Severity**: High

### [Management ports should be closed on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/bc303248-3d14-44c2-96a0-55f5c326b5fe)

**Description**: Open remote management ports expose your VM to a high level of risk from Internet-based attacks. These attacks attempt to brute force credentials to gain admin access to the machine.
-(Related policy: [Management ports should be closed on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f22730e10-96f6-4aac-ad84-9383d35b5917))
+(Related policy: [Management ports should be closed on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f22730e10-96f6-4aac-ad84-9383d35b5917)).
**Severity**: Medium
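One way to close these ports, alongside removing the permissive allow rules, is a high-priority NSG deny rule. A sketch with placeholder names and priority:

```powershell
# Deny inbound SSH/RDP from the internet; the priority must be lower
# (higher precedence) than any existing allow rule.
$nsg = Get-AzNetworkSecurityGroup -ResourceGroupName "my-rg" -Name "my-nsg"
Add-AzNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg -Name "DenyMgmtPorts" `
    -Access Deny -Protocol Tcp -Direction Inbound -Priority 100 `
    -SourceAddressPrefix Internet -SourcePortRange * `
    -DestinationAddressPrefix * -DestinationPortRange 22,3389
$nsg | Set-AzNetworkSecurityGroup
```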
VMs with 'High' severity are internet-facing VMs.
**Description**: Protect your non-internet-facing virtual machine from potential threats by restricting access to it with a network security group (NSG). NSGs contain a list of Access Control List (ACL) rules that allow or deny network traffic to your VM from other instances, whether or not they're on the same subnet. Note that to keep your machine as secure as possible, the VM's access to the internet must be restricted and an NSG should be enabled on the subnet.
-(Related policy: [Non-internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fbb91dfba-c30d-4263-9add-9c2384e659a6))
+(Related policy: [Non-internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fbb91dfba-c30d-4263-9add-9c2384e659a6)).
**Severity**: Low

### [Secure transfer to storage accounts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1c5de8e1-f68d-6a17-e0d2-ec259c42768c)

**Description**: Secure transfer is an option that forces your storage account to accept requests only from secure connections (HTTPS). Use of HTTPS ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking.
-(Related policy: [Secure transfer to storage accounts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f404c3081-a854-4457-ae30-26a93ef643f9))
+(Related policy: [Secure transfer to storage accounts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f404c3081-a854-4457-ae30-26a93ef643f9)).
**Severity**: High
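This setting is a single flag on the storage account. A minimal Azure PowerShell sketch with placeholder names:

```powershell
# Require HTTPS-only traffic on a storage account.
Set-AzStorageAccount -ResourceGroupName "my-rg" -Name "mystorageacct" `
    -EnableHttpsTrafficOnly $true
```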
Note that to keep your machine as secure as possible, the VM's access to the int
**Description**: Protect your subnet from potential threats by restricting access to it with a network security group (NSG). NSGs contain a list of Access Control List (ACL) rules that allow or deny network traffic to your subnet. When an NSG is associated with a subnet, the ACL rules apply to all the VM instances and integrated services in that subnet, but don't apply to internal traffic inside the subnet. To secure resources in the same subnet from one another, enable NSG directly on the resources as well. Note that the following subnet types will be listed as not applicable: GatewaySubnet, AzureFirewallSubnet, AzureBastionSubnet.
-(Related policy: [Subnets should be associated with a Network Security Group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fe71308d3-144b-4262-b144-efdc3cc90517))
+(Related policy: [Subnets should be associated with a Network Security Group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fe71308d3-144b-4262-b144-efdc3cc90517)).
**Severity**: Low

### [Virtual networks should be protected by Azure Firewall](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/f67fb4ed-d481-44d7-91e5-efadf504f74a)

**Description**: Some of your virtual networks aren't protected with a firewall. Use [Azure Firewall](https://azure.microsoft.com/pricing/details/azure-firewall) to restrict access to your virtual networks and prevent potential threats.
-(Related policy: [All Internet traffic should be routed via your deployed Azure Firewall](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2ffc5e4038-4584-4632-8c85-c0448d374b2c))
+(Related policy: [All Internet traffic should be routed via your deployed Azure Firewall](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2ffc5e4038-4584-4632-8c85-c0448d374b2c)).
**Severity**: Low
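For reference, a deployment sketch with Azure PowerShell, assuming placeholder names and a virtual network that already contains a subnet named AzureFirewallSubnet (a fixed name the firewall requires):

```powershell
# Deploy Azure Firewall into an existing virtual network.
$vnet = Get-AzVirtualNetwork -ResourceGroupName "my-rg" -Name "my-vnet"
$pip  = New-AzPublicIpAddress -ResourceGroupName "my-rg" -Name "fw-pip" `
    -Location "westus2" -AllocationMethod Static -Sku Standard

New-AzFirewall -ResourceGroupName "my-rg" -Name "my-firewall" `
    -Location "westus2" -VirtualNetwork $vnet -PublicIpAddress $pip
```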
Note that the following subnet types will be listed as not applicable: GatewaySu
### Access to App Services should be restricted

**Description & related policy**: Restrict access to your App Services by changing the networking configuration to deny inbound traffic from ranges that are too broad.
-(Related policy: [Preview]: Access to App Services should be restricted)
+(Related policy: [Preview]: Access to App Services should be restricted).
**Severity**: High

### The rules for web applications on IaaS NSGs should be hardened

**Description & related policy**: Harden the network security group (NSG) of your virtual machines that are running web applications, where the NSG rules are overly permissive with regard to web application ports.
-(Related policy: The NSGs rules for web applications on IaaS should be hardened)
+(Related policy: The NSGs rules for web applications on IaaS should be hardened).
**Severity**: High

### Pod Security Policies should be defined to reduce the attack vector by removing unnecessary application privileges (Preview)

**Description & related policy**: Define Pod Security Policies to reduce the attack vector by removing unnecessary application privileges. It is recommended to configure pod security policies so pods can only access the resources that they are allowed to access.
-(Related policy: [Preview]: Pod Security Policies should be defined on Kubernetes Services)
+(Related policy: [Preview]: Pod Security Policies should be defined on Kubernetes Services).
**Severity**: Medium
Note that the following subnet types will be listed as not applicable: GatewaySu
### Your machines should be restarted to apply system updates

**Description & related policy**: Restart your machines to apply the system updates and secure the machine from vulnerabilities.
-(Related policy: System updates should be installed on your machines)
+(Related policy: System updates should be installed on your machines).
**Severity**: Medium
Note that the following subnet types will be listed as not applicable: GatewaySu
**Description & related policy**: Periodically, newer versions are released for Java software either due to security flaws or to include additional functionality. Using the latest Java version for web apps is recommended to benefit from security fixes, if any, and/or new functionalities of the latest version.
-(Related policy: Ensure that 'Java version' is the latest, if used as a part of the Web app)
+(Related policy: Ensure that 'Java version' is the latest, if used as a part of the Web app).
**Severity**: Medium
Using the latest Java version for web apps is recommended to benefit from securi
**Description & related policy**: Periodically, newer versions are released for Python software either due to security flaws or to include additional functionality. Using the latest Python version for function apps is recommended to benefit from security fixes, if any, and/or new functionalities of the latest version.
-(Related policy: Ensure that 'Python version' is the latest, if used as a part of the Function app)
+(Related policy: Ensure that 'Python version' is the latest, if used as a part of the Function app).
**Severity**: Medium
Using the latest Python version for function apps is recommended to benefit from
**Description & related policy**: Periodically, newer versions are released for Python software either due to security flaws or to include additional functionality. Using the latest Python version for web apps is recommended to benefit from security fixes, if any, and/or new functionalities of the latest version.
-(Related policy: Ensure that 'Python version' is the latest, if used as a part of the Web app)
+(Related policy: Ensure that 'Python version' is the latest, if used as a part of the Web app).
**Severity**: Medium
Using the latest Python version for web apps is recommended to benefit from secu
**Description & related policy**: Periodically, newer versions are released for Java software either due to security flaws or to include additional functionality. Using the latest Java version for function apps is recommended to benefit from security fixes, if any, and/or new functionalities of the latest version.
-(Related policy: Ensure that 'Java version' is the latest, if used as a part of the Function app)
+(Related policy: Ensure that 'Java version' is the latest, if used as a part of the Function app).
**Severity**: Medium
Using the latest Java version for function apps is recommended to benefit from s
**Description & related policy**: Periodically, newer versions are released for PHP software either due to security flaws or to include additional functionality. Using the latest PHP version for web apps is recommended to benefit from security fixes, if any, and/or new functionalities of the latest version.
-(Related policy: Ensure that 'PHP version' is the latest, if used as a part of the WEB app)
+(Related policy: Ensure that 'PHP version' is the latest, if used as a part of the WEB app).
**Severity**: Medium
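For the runtime-version recommendations above, the version can be pinned from Azure PowerShell. A sketch for the PHP case, with placeholder app names (confirm which versions your region and App Service plan support):

```powershell
# Pin a Windows web app to a current PHP runtime.
Set-AzWebApp -ResourceGroupName "my-rg" -Name "my-webapp" -PhpVersion "8.2"
```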
defender-for-cloud Sql Azure Vulnerability Assessment Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/sql-azure-vulnerability-assessment-enable.md
When you enable the Defender for Azure SQL plan in Defender for Cloud, Defender
When you enable the Defender for Azure SQL plan in Defender for Cloud, Defender for Cloud automatically enables Advanced Threat Protection and vulnerability assessment with the express configuration for all Azure SQL databases in the selected subscription. You can enable vulnerability assessment in two ways:
+
- [Express configuration](#express-configuration)
- [Classic configuration](#classic-configuration)
You can enable vulnerability assessment in two ways:
**To enable vulnerability assessment without a storage account, using the express configuration**:

1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Open the specific Azure SQL Database resource.
+1. Open the specific Azure SQL Database resource.
1. Under the Security heading, select **Defender for Cloud**.
1. Enable the express configuration of vulnerability assessment:
-
+ - **If vulnerability assessment is not configured**, select **Enable** in the notice that prompts you to enable the vulnerability assessment express configuration, and confirm the change.

    :::image type="content" source="media/sql-azure-vulnerability-assessment-enable/enable-express-vulnerability-assessment.png" alt-text="Screenshot of notice to enable the express vulnerability assessment configuration in the Defender for Cloud settings for a SQL server.":::
You can enable vulnerability assessment in two ways:
> [!IMPORTANT] > Baselines and scan history are not migrated.
- :::image type="content" source="media/sql-azure-vulnerability-assessment-enable/migrate-to-express-vulnerability-assessment.png" alt-text="Screenshot of notice to migrate from the classic to the express vulnerability assessment configuration in the Defender for Cloud settings for a SQL server.":::
+ :::image type="content" source="media/sql-azure-vulnerability-assessment-enable/migrate-to-express-vulnerability-assessment.png" alt-text="Screenshot of notice to migrate from classic to express vulnerability assessment configuration in the Defender for Cloud settings for a SQL server.":::
You can also select **Configure** and then select **Enable** in the Microsoft Defender for SQL settings:
-
+    :::image type="content" source="media/sql-azure-vulnerability-assessment-enable/migrate-to-express-vulnerability-assessment-configure.png" alt-text="Screenshot of notice to migrate from the classic to the express vulnerability assessment configuration in the Microsoft Defender for SQL settings.":::

Now you can go to the [**SQL databases should have vulnerability findings resolved**](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_AzureDefenderForData/SqlVaServersRecommendationDetailsBlade/assessmentKey/82e20e14-edc5-4373-bfc4-f13121257c37) recommendation to see the vulnerabilities found in your databases. You can also run on-demand vulnerability assessment scans to see the current findings.
-> [!NOTE]
+> [!NOTE]
> Each database is randomly assigned a scan time on a set day of the week.

#### Enable express vulnerability assessment at scale
To enable vulnerability assessment with a storage account, use the classic confi
1. To configure vulnerability assessments to automatically run weekly scans to detect security misconfigurations, set **Periodic recurring scans** to **On**. The results are sent to the email addresses you provide in **Send scan reports to**. You can also send email notification to admins and subscription owners by enabling **Also send email notification to admins and subscription owners**.
- > [!NOTE]
+ > [!NOTE]
> Each database is randomly assigned a scan time on a set day of the week. Email notifications are scheduled randomly per server on a set day of the week. The email notification report includes data from all recurring database scans that were executed during the preceding week (does not include on-demand scans).
-## Next steps
+## Related content
Learn more about:
defender-for-cloud Sql Azure Vulnerability Assessment Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/sql-azure-vulnerability-assessment-manage.md
Title: Manage vulnerability findings in your Azure SQL databases
-description: Learn how to remediate software vulnerabilities and disable findings with the express configuration on Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics.
+description: Learn how to remediate software vulnerabilities and disable findings with the express configuration.
Last updated 06/14/2023
If the vulnerability settings show the option to configure a storage account, yo
### View scan history
-Select **Scan History** in the vulnerability assessment pane to view a history of all scans previously run on this database.
+Select **Scan History** in the vulnerability assessment pane to view a history of all scans previously run on this database.
Express configuration doesn't store scan results if they're identical to previous scans. The scan time shown in the scan history is the time of the last scan where the scan results changed.
Here are several examples of how you can set up baselines using ARM templates:
} ```
-#### Using PowerShell
+#### Using PowerShell
Express configuration isn't supported in PowerShell cmdlets, but you can use PowerShell to invoke the latest vulnerability assessment capabilities using the REST API, for example:
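A sketch using `Invoke-AzRestMethod` from the Az.Accounts module; the resource path, the `initiateScan` action, and the api-version shown here are illustrative assumptions, so verify them against the current Microsoft.Sql REST API reference:

```powershell
# Trigger an on-demand express-configuration scan over REST.
# All resource identifiers are placeholders.
$path = "/subscriptions/<subscription-id>/resourceGroups/my-rg/providers/Microsoft.Sql" +
        "/servers/my-server/databases/my-db/sqlVulnerabilityAssessments/default" +
        "/initiateScan?api-version=2022-02-01-preview"

Invoke-AzRestMethod -Method POST -Path $path
```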
To change an Azure SQL database from the express vulnerability assessment config
   -RecurringScansInterval Weekly `
   -ScanResultsContainerName "vulnerability-assessment"
```
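For context, a complete invocation might look like the following sketch; resource names are placeholders, and you should verify the parameter set against the Az.Sql documentation for `Update-AzSqlServerVulnerabilityAssessmentSetting`:

```powershell
# Point the classic configuration at a storage account and enable
# weekly recurring scans with email notifications.
Update-AzSqlServerVulnerabilityAssessmentSetting `
    -ResourceGroupName "my-rg" `
    -ServerName "my-server" `
    -StorageAccountName "mystorageacct" `
    -ScanResultsContainerName "vulnerability-assessment" `
    -RecurringScansInterval Weekly `
    -EmailAdmins $true `
    -NotificationEmail @("secops@contoso.com")
```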
-
+ You might have to tweak `Update-AzSqlServerVulnerabilityAssessmentSetting` according to [Store Vulnerability Assessment scan results in a storage account accessible behind firewalls and VNets](/azure/azure-sql/database/sql-database-vulnerability-assessment-storage).

#### Errors
Possible causes:
- Switching to express configuration failed due to a database policy error. Database policies aren't visible in the Azure portal for Defender for SQL vulnerability assessment, so we check for them during the validation stage of switching to express configuration. **Solution**: Disable all database policies for the relevant server and then try to switch to express configuration again.
-- Consider using the [provided PowerShell script](powershell-sample-vulnerability-assessment-azure-sql.md) for assistance.
+- Consider using the [provided PowerShell script](powershell-sample-vulnerability-assessment-azure-sql.md) for assistance.
## Classic configuration
Typical scenarios might include:
- Disable findings from benchmarks that aren't of interest for a defined scope

> [!IMPORTANT]
+>
> - To disable specific findings, you need permissions to edit a policy in Azure Policy. Learn more in [Azure RBAC permissions in Azure Policy](../governance/policy/overview.md#azure-rbac-permissions-in-azure-policy).
> - Disabled findings will still be included in the weekly SQL vulnerability assessment email report.
> - Disabled rules are shown in the "Not applicable" section of the scan results.
You can use Azure PowerShell cmdlets to programmatically manage your vulnerabili
| [Update-AzSqlInstanceDatabaseVulnerabilityAssessmentSetting](/powershell/module/az.sql/Update-AzSqlInstanceDatabaseVulnerabilityAssessmentSetting) | Updates the vulnerability assessment settings of a managed database. |
| [Update-AzSqlInstanceVulnerabilityAssessmentSetting](/powershell/module/az.sql/Update-AzSqlInstanceVulnerabilityAssessmentSetting) | Updates the vulnerability assessment settings of a managed instance. |
-- For a script example, see [Azure SQL vulnerability assessment PowerShell support](/archive/blogs/sqlsecurity/azure-sql-vulnerability-assessment-now-with-powershell-support).

#### Azure CLI
To handle Boolean types as true/false, set the baseline result with binary input
}
```
---
-## Next steps
+## Related content
- Learn more about [Microsoft Defender for Azure SQL](defender-for-sql-introduction.md).
- Learn more about [data discovery and classification](/azure/azure-sql/database/data-discovery-and-classification-overview).
- Learn more about [storing vulnerability assessment scan results in a storage account accessible behind firewalls and VNets](/azure/azure-sql/database/sql-database-vulnerability-assessment-storage).
- Check out [common questions](faq-defender-for-databases.yml) about Azure SQL databases.
defender-for-cloud Sql Azure Vulnerability Assessment Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/sql-azure-vulnerability-assessment-overview.md
You can configure vulnerability assessment for your SQL databases with either:
### What's the difference between the express and classic configuration?
-Configuration modes benefits and limitations comparison:
+Configuration modes benefits and limitations comparison:
| Parameter | Express configuration | Classic configuration |
|--|--|--|
Configuration modes benefits and limitations comparison:
| Scan export | Azure Resource Graph | Excel format, Azure Resource Graph |
| Supported Clouds | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Microsoft Azure operated by 21Vianet | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure operated by 21Vianet |

## Next steps

- Enable [SQL vulnerability assessments](sql-azure-vulnerability-assessment-enable.md)
defender-for-cloud Sql Azure Vulnerability Assessment Rules Changelog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/sql-azure-vulnerability-assessment-rules-changelog.md
Title: SQL vulnerability assessment rules changelog
description: Changelog for SQL vulnerability assessment rules with SQL Server, Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics
Last updated 11/29/2022
This article details the changes made to the SQL vulnerability assessment servic
|Rule ID |Rule Title |Change details |
|--|--|--|
-|VA1018 |Latest updates should be installed |Logic change |
+|VA1018 |Latest updates should be installed |Logic change |
## July 2023

|Rule ID |Rule Title |Change details |
|--|--|--|
-|VA2129 |Changes to signed modules should be authorized |Logic change |
+|VA2129 |Changes to signed modules should be authorized |Logic change |
## June 2022

|Rule ID |Rule Title |Change details |
|--|--|--|
-|VA2129 |Changes to signed modules should be authorized |Logic change |
-|VA1219 |Transparent data encryption should be enabled |Logic change |
-|VA1047 |Password expiration check should be enabled for all SQL logins |Logic change |
+|VA2129 |Changes to signed modules should be authorized |Logic change |
+|VA1219 |Transparent data encryption should be enabled |Logic change |
+|VA1047 |Password expiration check should be enabled for all SQL logins |Logic change |
## January 2022

|Rule ID |Rule Title |Change details |
|--|--|--|
-|VA1288 |Sensitive data columns should be classified |Removed rule |
-|VA1054 |Minimal set of principals should be members of fixed high impact database roles |Logic change |
-|VA1220 |Database communication using TDS should be protected through TLS |Logic change |
-|VA2120 |Features that may affect security should be disabled |Logic change |
-|VA2129 |Changes to signed modules should be authorized |Logic change |
+|VA1288 |Sensitive data columns should be classified |Removed rule |
+|VA1054 |Minimal set of principals should be members of fixed high impact database roles |Logic change |
+|VA1220 |Database communication using TDS should be protected through TLS |Logic change |
+|VA2120 |Features that may affect security should be disabled |Logic change |
+|VA2129 |Changes to signed modules should be authorized |Logic change |
## June 2021

|Rule ID |Rule Title |Change details |
|--|--|--|
-|VA1220 |Database communication using TDS should be protected through TLS |Logic change |
-|VA2108 |Minimal set of principals should be members of fixed high impact database roles |Logic change |
+|VA1220 |Database communication using TDS should be protected through TLS |Logic change |
+|VA2108 |Minimal set of principals should be members of fixed high impact database roles |Logic change |
## December 2020

|Rule ID |Rule Title |Change details |
|--|--|--|
-|VA1017 |Execute permissions on xp_cmdshell from all users (except dbo) should be revoked |Title and description change|
+|VA1017 |Execute permissions on xp_cmdshell from all users (except dbo) should be revoked |Title and description change|
|VA1021 |Global temporary stored procedures should be removed |Removed rule |
|VA1024 |C2 Audit Mode should be enabled |Removed rule |
|VA1042 |Database ownership chaining should be disabled for all databases except for `master`, `msdb`, and `tempdb` |Description change |
|VA1044 |Remote Admin Connections should be disabled unless specifically required |Title and description change |
|VA1047 |Password expiration check should be enabled for all SQL logins |Title and description change |
-|VA1051 |AUTO_CLOSE should be disabled on all databases |Description change |
-|VA1053 |Account with default name 'sa' should be renamed or disabled |Description change |
-|VA1067 |Database Mail XPs should be disabled when it is not in use | Title and description change |
+|VA1051 |AUTO_CLOSE should be disabled on all databases |Description change |
+|VA1053 |Account with default name 'sa' should be renamed or disabled |Description change |
+|VA1067 |Database Mail XPs should be disabled when it is not in use | Title and description change |
|VA1068 |Server permissions shouldn't be granted directly to principals |Logic change |
|VA1069 |Permissions to select from system tables and views should be revoked from non-sysadmins |Removed rule |
|VA1090 |Ensure all Government Off The Shelf (GOTS) and Custom Stored Procedures are encrypted |Removed rule |
defender-for-cloud Sql Azure Vulnerability Assessment Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/sql-azure-vulnerability-assessment-rules.md
Title: SQL vulnerability assessment rules reference
description: List of rule titles and descriptions for SQL Server, Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics
Last updated 11/29/2022
This article lists the set of built-in rules that are used to flag security vuln
Applies to: :::image type="icon" source="./media/icons/yes-icon.png"::: Azure SQL Database :::image type="icon" source="./media/icons/yes-icon.png"::: Azure SQL Managed Instance :::image type="icon" source="./media/icons/yes-icon.png"::: Azure Synapse Analytics :::image type="icon" source="./media/icons/yes-icon.png"::: SQL Server (all supported versions)
-The rules shown in your database scans depend on the SQL version and platform that was scanned.
+The rules shown in your database scans depend on the SQL version and platform that was scanned.
To learn about how to implement vulnerability assessment in Azure, see [Implement vulnerability assessment](sql-azure-vulnerability-assessment-enable.md).
SQL vulnerability assessment rules have five categories, which are in the follow
|VA1046 |CHECK_POLICY should be enabled for all SQL logins |Low |CHECK_POLICY option enables verifying SQL logins against the domain policy. This rule checks that CHECK_POLICY option is enabled for all SQL logins. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance |
|VA1047 |Password expiration check should be enabled for all SQL logins |Low |Password expiration policies are used to manage the lifespan of a password. When SQL Server enforces password expiration policy, users are reminded to change old passwords, and accounts that have expired passwords are disabled. This rule checks that password expiration policy is enabled for all SQL logins. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance |
|VA1048 |Database principals should not be mapped to the `sa` account |High |A database principal that is mapped to the `sa` account can be exploited by an attacker to elevate permissions to `sysadmin` |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance |
-|VA1052 |Remove BUILTIN\Administrators as a server login |Low |The BUILTIN\Administrators group contains the Windows Local Administrators group. In older versions of Microsoft SQL Server this group has administrator rights by default. This rule checks that this group is removed from SQL Server. |<nobr>SQL Server 2012+<nobr/> |
+|VA1052 |Remove BUILTIN\Administrators as a server login |Low |The BUILTIN\Administrators group contains the Windows Local Administrators group. In older versions of Microsoft SQL Server, this group has administrator rights by default. This rule checks that this group is removed from SQL Server. |<nobr>SQL Server 2012+<nobr/> |
|VA1053 |Account with default name `sa` should be renamed or disabled |Low |`sa` is a well-known account with principal ID 1. This rule verifies that the `sa` account is either renamed or disabled. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance |
|VA1054 |Excessive permissions should not be granted to PUBLIC role on objects or columns |Low |Every SQL Server login belongs to the public server role. When a server principal has not been granted or denied specific permissions on a securable object the user inherits the permissions granted to public on that object. This rule displays a list of all securable objects or columns that are accessible to all users through the PUBLIC role. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Database |
|VA1058 |`sa` login should be disabled |High |`sa` is a well-known account with principal ID 1. This rule verifies that the `sa` account is disabled. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance |
|VA1059 |xp_cmdshell should be disabled |High |xp_cmdshell spawns a Windows command shell and passes it a string for execution. This rule checks that xp_cmdshell is disabled. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance |
|VA1067 |Database Mail XPs should be disabled when it is not in use |Medium |This rule checks that Database Mail is disabled when no database mail profile is configured. Database Mail can be used for sending e-mail messages from the SQL Server Database Engine and is disabled by default. If you are not using this feature, it is recommended to disable it to reduce the surface area. |<nobr>SQL Server 2012+<nobr/> |
|VA1068 |Server permissions shouldn't be granted directly to principals |Low |Server level permissions are associated with a server level object to regulate which users can gain access to the object. This rule checks that there are no server level permissions granted directly to logins. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance |
-|VA1070 |Database users shouldn't share the same name as a server login |Low |Database users may share the same name as a server login. This rule validates that there are no such users. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance |
+|VA1070 |Database users shouldn't share the same name as a server login |Low |Database users might share the same name as a server login. This rule validates that there are no such users. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance |
|VA1072 |Authentication mode should be Windows Authentication |Medium |There are two possible authentication modes: Windows Authentication mode and mixed mode. Mixed mode means that SQL Server enables both Windows authentication and SQL Server authentication. This rule checks that the authentication mode is set to Windows Authentication. |<nobr>SQL Server 2012+<nobr/> |
|VA1094 |Database permissions shouldn't be granted directly to principals |Low |Permissions are rules associated with a securable object to regulate which users can gain access to the object. This rule checks that there are no DB permissions granted directly to users. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance |
-|VA1095 |Excessive permissions should not be granted to PUBLIC role |Medium |Every SQL Server login belongs to the public server role. When a server principal has not been granted or denied specific permissions on a securable object the user inherits the permissions granted to public on that object. This displays a list of all permissions that are granted to the PUBLIC role. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance<br/><br/>SQL Database |
+|VA1095 |Excessive permissions should not be granted to PUBLIC role |Medium |Every SQL Server login belongs to the public server role. When a server principal has not been granted or denied specific permissions on a securable object, the user inherits the permissions granted to public on that object. This displays a list of all permissions that are granted to the PUBLIC role. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance<br/><br/>SQL Database |
|VA1096 |Principal GUEST should not be granted permissions in the database |Low |Each database includes a user called GUEST. Permissions granted to GUEST are inherited by users who have access to the database but who do not have a user account in the database. This rule checks that all permissions have been revoked from the GUEST user. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance<br/><br/>SQL Database |
|VA1097 |Principal GUEST should not be granted permissions on objects or columns |Low |Each database includes a user called GUEST. Permissions granted to GUEST are inherited by users who have access to the database but who do not have a user account in the database. This rule checks that all permissions have been revoked from the GUEST user. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance<br/><br/>SQL Database |
|VA1099 |GUEST user should not be granted permissions on database securables |Low |Each database includes a user called GUEST. Permissions granted to GUEST are inherited by users who have access to the database but who do not have a user account in the database. This rule checks that all permissions have been revoked from the GUEST user. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance<br/><br/>SQL Database |
SQL vulnerability assessment rules have five categories, which are in the follow
|VA1022 |Ad hoc distributed queries should be disabled |Medium |Ad hoc distributed queries use the `OPENROWSET` and `OPENDATASOURCE` functions to connect to remote data sources that use OLE DB. This rule checks that ad hoc distributed queries are disabled. |<nobr>SQL Server 2012+<nobr/> |
|VA1023 |CLR should be disabled |High |The CLR allows managed code to be hosted by and run in the Microsoft SQL Server environment. This rule checks that CLR is disabled. |<nobr>SQL Server 2012+<nobr/> |
|VA1026 |CLR should be disabled |Medium |The CLR allows managed code to be hosted by and run in the Microsoft SQL Server environment. CLR strict security treats SAFE and EXTERNAL_ACCESS assemblies as if they were marked UNSAFE and requires all assemblies be signed by a certificate or asymmetric key with a corresponding login that has been granted UNSAFE ASSEMBLY permission in the master database. This rule checks that CLR is disabled. |<nobr>SQL Server 2017+<sup>2</sup><nobr/><br/><br/>SQL Managed Instance |
-|VA1027 |Untracked trusted assemblies should be removed |High |Assemblies marked as UNSAFE are required to be signed by a certificate or asymmetric key with a corresponding login that has been granted UNSAFE ASSEMBLY permission in the master database. Trusted assemblies may bypass this requirement. |<nobr>SQL Server 2017+<nobr/><br/><br/>SQL Managed Instance |
+|VA1027 |Untracked trusted assemblies should be removed |High |Assemblies marked as UNSAFE are required to be signed by a certificate or asymmetric key with a corresponding login that has been granted UNSAFE ASSEMBLY permission in the master database. Trusted assemblies might bypass this requirement. |<nobr>SQL Server 2017+<nobr/><br/><br/>SQL Managed Instance |
|VA1044 |Remote Admin Connections should be disabled unless specifically required |Medium |This rule checks that remote dedicated admin connections are disabled if they are not being used for clustering to reduce attack surface area. SQL Server provides a dedicated administrator connection (DAC). The DAC lets an administrator access a running server to execute diagnostic functions or Transact-SQL statements, or to troubleshoot problems on the server and it becomes an attractive target to attack when it is enabled remotely. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance |
|VA1051 |AUTO_CLOSE should be disabled on all databases |Medium |The AUTO_CLOSE option specifies whether the database shuts down gracefully and frees resources after the last user disconnects. Regardless of its benefits it can cause denial of service by aggressively opening and closing the database, thus it is important to keep this feature disabled. This rule checks that this option is disabled on the current database. |<nobr>SQL Server 2012+<nobr/> |
|VA1066 |Unused service broker endpoints should be removed |Low |Service Broker provides queuing and reliable messaging for SQL Server. Service Broker is used both for applications that use a single SQL Server instance and applications that distribute work across multiple instances. Service Broker endpoints provide options for transport security and message forwarding. This rule enumerates all the service broker endpoints. Remove those that are not used. |<nobr>SQL Server 2012+<nobr/> |
SQL vulnerability assessment rules have five categories, which are in the follow
|VA1247 |There should be no SPs marked as auto-start |High |When SQL Server has been configured to 'scan for startup procs' the server will scan master DB for stored procedures marked as auto-start. This rule checks that there are no SPs marked as auto-start. |<nobr>SQL Server 2012+<nobr/> |
|VA1256 |User CLR assemblies should not be defined in the database |High |CLR assemblies can be used to execute arbitrary code on SQL Server process. This rule checks that there are no user-defined CLR assemblies in the database. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance |
|VA1277 |Polybase network encryption should be enabled |High |PolyBase is a technology that accesses and combines both non-relational and relational data all from within SQL Server. Polybase network encryption option configures SQL Server to encrypt control and data channels when using Polybase. This rule verifies that this option is enabled. |<nobr>SQL Server 2016+<nobr/> |
-|VA1278 |Create a baseline of External Key Management Providers |Medium |The SQL Server Extensible Key Management (EKM) enables third-party EKM / Hardware Security Modules (HSM) vendors to register their modules in SQL Server. When registered SQL Server users can use the encryption keys stored on EKM modules,this rule displays a list of EKM providers being used in the system. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance |
+|VA1278 |Create a baseline of External Key Management Providers |Medium |The SQL Server Extensible Key Management (EKM) enables third-party EKM / Hardware Security Modules (HSM) vendors to register their modules in SQL Server. When registered SQL Server users can use the encryption keys stored on EKM modules, this rule displays a list of EKM providers being used in the system. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance |
|VA2062 |Database-level firewall rules should not grant excessive access |High |The Azure SQL Database-level firewall helps protect your data by preventing all access to your database until you specify which IP addresses have permission. Database-level firewall rules grant access to the specific database based on the originating IP address of each request. Database-level firewall rules for master and user databases can only be created and managed through Transact-SQL (unlike server-level firewall rules, which can also be created and managed using the Azure portal or PowerShell). For more information, see [Azure SQL Database and Azure Synapse Analytics IP firewall rules](/azure/azure-sql/database/firewall-configure). This check verifies that database-level firewall rules do not grant access to more than 255 IP addresses. |<nobr/>SQL Database<br/><br/>Azure Synapse |
|VA2063 |Server-level firewall rules should not grant excessive access |High |The Azure SQL server-level firewall helps protect your server by preventing all access to your databases until you specify which IP addresses have permission. Server-level firewall rules grant access to all databases that belong to the server based on the originating IP address of each request. Server-level firewall rules can only be created and managed through Transact-SQL as well as through the Azure portal or PowerShell. For more information, see [Azure SQL Database and Azure Synapse Analytics IP firewall rules](/azure/azure-sql/database/firewall-configure). This check verifies that server-level firewall rules do not grant access to more than 255 IP addresses. |<nobr/>SQL Database<br/><br/>Azure Synapse |
|VA2064 |Database-level firewall rules should be tracked and maintained at a strict minimum |High |The Azure SQL Database-level firewall helps protect your data by preventing all access to your database until you specify which IP addresses have permission. Database-level firewall rules grant access to the specific database based on the originating IP address of each request. Database-level firewall rules for master and user databases can only be created and managed through Transact-SQL (unlike server-level firewall rules, which can also be created and managed using the Azure portal or PowerShell). For more information, see [Azure SQL Database and Azure Synapse Analytics IP firewall rules](/azure/azure-sql/database/firewall-configure). This check enumerates all the database-level firewall rules so that any changes made to them can be identified and addressed. |<nobr/>SQL Database<br/><br/>Azure Synapse |
-|VA2065 |Server-level firewall rules should be tracked and maintained at a strict minimum |High |The Azure SQL server-level firewall helps protect your data by preventing all access to your databases until you specify which IP addresses have permission. Server-level firewall rules grant access to all databases that belong to the server based on the originating IP address of each request. Server-level firewall rules can be created and managed through Transact-SQL as well as through the Azure portal or PowerShell. For more information, see [Azure SQL Database and Azure Synapse Analytics IP firewall rules](/azure/azure-sql/database/firewall-configure). This check enumerates all the server-level firewall rules so that any changes made to them can be identified and addressed. |<nobr/>SQL Database<br/><br/>Azure Synapse |
+|VA2065 |Server-level firewall rules should be tracked and maintained at a strict minimum |High |The Azure SQL server-level firewall helps protect your data by preventing all access to your databases until you specify which IP addresses have permission. Server-level firewall rules grant access to all databases that belong to the server based on the originating IP address of each request. Server-level firewall rules can be created and managed through Transact-SQL as well as through the Azure portal or PowerShell. For more information, see [Azure SQL Database and Azure Synapse Analytics IP firewall rules](/azure/azure-sql/database/firewall-configure). This check enumerates all the server-level firewall rules so that any changes made to them can be identified and addressed. |<nobr/>SQL Database<br/><br/>Azure Synapse |
|VA2111 |Sample databases should be removed |Low |Microsoft SQL Server comes shipped with several sample databases. This rule checks whether the sample databases have been removed. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance |
-|VA2120 |Features that may affect security should be disabled |High |SQL Server is capable of providing a wide range of features and services. Some of the features and services provided by default may not be necessary and enabling them could adversely affect the security of the system. This rule checks that these features are disabled. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance |
-|VA2121 | 'OLE Automation Procedures' feature should be disabled |High |SQL Server is capable of providing a wide range of features and services. Some of the features and services, provided by default, may not be necessary, and enabling them could adversely affect the security of the system. The OLE Automation Procedures option controls whether OLE Automation objects can be instantiated within Transact-SQL batches. These are extended stored procedures that allow SQL Server users to execute functions external to SQL Server. Regardless of its benefits it can also be used for exploits, and is known as a popular mechanism to plant files on the target machines. It is advised to use PowerShell as a replacement for this tool. This rule checks that 'OLE Automation Procedures' feature is disabled. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance |
-|VA2122 |'User Options' feature should be disabled |Medium |SQL Server is capable of providing a wide range of features and services. Some of the features and services provided by default may not be necessary and enabling them could adversely affect the security of the system. The user options specifies global defaults for all users. A list of default query processing options is established for the duration of a user's work session. The user options allows you to change the default values of the SET options (if the server's default settings are not appropriate). This rule checks that 'user options' feature is disabled. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance |
-|VA2126 |Extensibility-features that may affect security should be disabled if not needed |Medium |SQL Server provides a wide range of features and services. Some of the features and services, provided by default, may not be necessary, and enabling them could adversely affect the security of the system. This rule checks that configurations that allow extraction of data to an external data source and the execution of scripts with certain remote language extensions are disabled. |<nobr>SQL Server 2016+<nobr/> |
+|VA2120 |Features that may affect security should be disabled |High |SQL Server is capable of providing a wide range of features and services. Some of the features and services provided by default might not be necessary and enabling them could adversely affect the security of the system. This rule checks that these features are disabled. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance |
+|VA2121 | 'OLE Automation Procedures' feature should be disabled |High |SQL Server is capable of providing a wide range of features and services. Some of the features and services, provided by default, might not be necessary, and enabling them could adversely affect the security of the system. The OLE Automation Procedures option controls whether OLE Automation objects can be instantiated within Transact-SQL batches. These are extended stored procedures that allow SQL Server users to execute functions external to SQL Server. Despite its benefits, the feature can also be used for exploits and is a popular mechanism for planting files on target machines. It is advised to use PowerShell as a replacement for this tool. This rule checks that the 'OLE Automation Procedures' feature is disabled. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance |
+|VA2122 |'User Options' feature should be disabled |Medium |SQL Server is capable of providing a wide range of features and services. Some of the features and services provided by default might not be necessary and enabling them could adversely affect the security of the system. The 'user options' setting specifies global defaults for all users. A list of default query processing options is established for the duration of a user's work session. The 'user options' setting allows you to change the default values of the SET options (if the server's default settings are not appropriate). This rule checks that the 'user options' feature is disabled. |<nobr>SQL Server 2012+<nobr/><br/><br/>SQL Managed Instance |
+|VA2126 |Extensibility-features that might affect security should be disabled if not needed |Medium |SQL Server provides a wide range of features and services. Some of the features and services, provided by default, might not be necessary, and enabling them could adversely affect the security of the system. This rule checks that configurations that allow extraction of data to an external data source and the execution of scripts with certain remote language extensions are disabled. |<nobr>SQL Server 2016+<nobr/> |
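Because database-level firewall rules can only be managed through Transact-SQL, one way to track them (as checks like VA2064 report on) is to query `sys.database_firewall_rules`. The following is a minimal Python sketch, assuming the `pyodbc` driver; the connection-string values are placeholders:

```python
import pyodbc

# Placeholder connection details; substitute your own server, database, and credentials.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=<server>.database.windows.net;"
    "DATABASE=<database>;UID=<user>;PWD=<password>"
)
cursor = conn.cursor()

# Enumerate the database-level firewall rules that checks such as VA2064 report on.
for row in cursor.execute(
    "SELECT name, start_ip_address, end_ip_address FROM sys.database_firewall_rules;"
):
    print(row.name, row.start_ip_address, row.end_ip_address)
```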
## Removed rules
defender-for-cloud Support Matrix Defender For Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-defender-for-cloud.md
Defender for Cloud provides recommendations, security alerts, and vulnerability
\*\* Microsoft Entra recommendations are available only for subscriptions with [enhanced security features enabled](enable-enhanced-security.md). -- ## Supported operating systems Defender for Cloud depends on the [Azure Monitor Agent](../azure-monitor/agents/agents-overview.md) or the [Log Analytics agent](../azure-monitor/agents/log-analytics-agent.md). Make sure that your machines are running one of the supported operating systems as described on the following pages: - Azure Monitor Agent
- - [Azure Monitor Agent for Windows supported operating systems](../azure-monitor/agents/agents-overview.md#windows)
- - [Azure Monitor Agent for Linux supported operating systems](../azure-monitor/agents/agents-overview.md#linux)
+ - [Azure Monitor Agent for Windows supported operating systems](../azure-monitor/agents/agents-overview.md#windows)
+ - [Azure Monitor Agent for Linux supported operating systems](../azure-monitor/agents/agents-overview.md#linux)
- Log Analytics agent
- - [Log Analytics agent for Windows supported operating systems](../azure-monitor/agents/agents-overview.md#windows)
- - [Log Analytics agent for Linux supported operating systems](../azure-monitor/agents/agents-overview.md#linux)
+ - [Log Analytics agent for Windows supported operating systems](../azure-monitor/agents/agents-overview.md#windows)
+ - [Log Analytics agent for Linux supported operating systems](../azure-monitor/agents/agents-overview.md#linux)
Also ensure your Log Analytics agent is [properly configured to send data to Defender for Cloud](working-with-log-analytics-agent.md#manual-agent).
defender-for-cloud Support Matrix Defender For Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-defender-for-containers.md
Ensure your Kubernetes node is running on one of the verified supported operatin
### Defender agent limitations
-The Defender agent is currently not supported on ARM64 nodes.
+The Defender agent isn't supported on ARM64 nodes in AKS versions 1.28 and earlier.
### Network restrictions
defender-for-cloud Support Matrix Defender For Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-defender-for-storage.md
The action sets are collections of Azure resource provider operations that you c
- Microsoft.EventGrid/eventSubscriptions/delete
- Microsoft.Authorization/roleAssignments/read
- Microsoft.Authorization/roleAssignments/write
-- Microsoft.Authorization/roleAssignments/delete
+- Microsoft.Authorization/roleAssignments/delete
defender-for-cloud Tenant Wide Permissions Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tenant-wide-permissions-management.md
Last updated 01/08/2023
# Grant and request tenant-wide visibility
-A user with the Microsoft Entra role of **Global Administrator** might have tenant-wide responsibilities, but lack the Azure permissions to view that organization-wide information in Microsoft Defender for Cloud. Permission elevation is required because Microsoft Entra role assignments don't grant access to Azure resources.
+A user with the Microsoft Entra role of **Global Administrator** might have tenant-wide responsibilities, but lack the Azure permissions to view that organization-wide information in Microsoft Defender for Cloud. Permission elevation is required because Microsoft Entra role assignments don't grant access to Azure resources.
## Grant tenant-wide permissions to yourself
A user with the Microsoft Entra role of **Global Administrator** might have tena
1. If your organization manages resource access with [Microsoft Entra Privileged Identity Management (PIM)](../active-directory/privileged-identity-management/pim-configure.md), or any other PIM tool, the global administrator role must be active for the user.
-1. As a Global Administrator user without an assignment on the root management group of the tenant, open Defender for Cloud's **Overview** page and select the **tenant-wide visibility** link in the banner.
+1. As a Global Administrator user without an assignment on the root management group of the tenant, open Defender for Cloud's **Overview** page and select the **tenant-wide visibility** link in the banner.
:::image type="content" source="media/management-groups-roles/enable-tenant-level-permissions-banner.png" alt-text="Enable tenant-level permissions in Microsoft Defender for Cloud.":::
-1. Select the new Azure role to be assigned.
+1. Select the new Azure role to be assigned.
:::image type="content" source="media/management-groups-roles/enable-tenant-level-permissions-form.png" alt-text="Form for defining the tenant-level permissions to be assigned to your user.":::
A user with the Microsoft Entra role of **Global Administrator** might have tena
1. Sign out of the Azure portal, and then log back in again.
-1. Once you have elevated access, open or refresh Microsoft Defender for Cloud to verify you have visibility into all subscriptions under your Microsoft Entra tenant.
+1. Once you have elevated access, open or refresh Microsoft Defender for Cloud to verify you have visibility into all subscriptions under your Microsoft Entra tenant.
The process of assigning yourself tenant-level permissions, performs many operations automatically for you:
For more information of the Microsoft Entra elevation process, see [Elevate acce
## Request tenant-wide permissions when yours are insufficient
-When you navigate to Defender for Cloud, you might see a banner that alerts you to the fact that your view is limited. If you see this banner, select it to send a request to the global administrator for your organization. In the request, you can include the role you'd like to be assigned and the global administrator will make a decision about which role to grant.
+When you navigate to Defender for Cloud, you might see a banner that alerts you to the fact that your view is limited. If you see this banner, select it to send a request to the global administrator for your organization. In the request, you can include the role you'd like to be assigned and the global administrator will make a decision about which role to grant.
-It's the global administrator's decision whether to accept or reject these requests.
+It's the global administrator's decision whether to accept or reject these requests.
> [!IMPORTANT] > You can only submit one request every seven days.
To request elevated permissions from your global administrator:
:::image type="content" source="media/management-groups-roles/request-tenant-permissions-email.png" alt-text="Email to the global administrator for new permissions.":::
- After the global administrator selects **Review the request** and completes the process, the decision is emailed to the requesting user.
+ After the global administrator selects **Review the request** and completes the process, the decision is emailed to the requesting user.
## Next steps
defender-for-cloud Threat Intelligence Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/threat-intelligence-reports.md
This type of information is useful during the incident response process. Such as
## How to access the threat intelligence report? 1. From Defender for Cloud's menu, open the **Security alerts** page.
-1. Select an alert.
+1. Select an alert.
The alerts details page opens with more details about the alert. For example, the **Ransomware indicators detected** alert details page:
This type of information is useful during the incident response process. Such as
[![Potentially Unsafe Action alert details page.](media/threat-intelligence-reports/threat-intelligence-report.png)](media/threat-intelligence-reports/threat-intelligence-report.png#lightbox)
- You can optionally download the PDF report.
+ You can optionally download the PDF report.
>[!TIP] > The amount of information available for each security alert will vary according to the type of alert.
defender-for-iot Concept Supported Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/concept-supported-protocols.md
OT network sensors can detect the following protocols when identifying assets an
|**Honeywell** | ENAP<br> Experion DCS CDA<br> Experion DCS FDA<br> Honeywell EUCN <br> Honeywell Discovery |
|**IEC** | Codesys V3<br>IEC 60870-5-7 (IEC 62351-3 + IEC 62351-5)<br> IEC 60870-5-104<br> IEC 60870-5-104 ASDU_APCI<br> IEC 60870 ICCP TASE.2<br> IEC 61850 GOOSE<br> IEC 61850 MMS<br> IEC 61850 SMV (SAMPLED-VALUES)<br> LonTalk (LonWorks) |
|**IEEE** | LLC<br> STP<br> VLAN |
-|**IETF** | ARP<br> DHCP<br> DCE RPC<br> DNS<br> FTP (FTP_ADAT<br> FTP_DATA)<br> GSSAPI (RFC2743)<br> HTTP<br> ICMP<br> IPv4<br> IPv6<br> LLDP<br> MDNS<br> NBNS<br> NTLM (NTLMSSP Auth Protocol)<br> RPC<br> SMB / Browse / NBDGM<br> SMB / CIFS<br> SNMP<br> SPNEGO (RFC4178)<br> SSH<br> Syslog<br> TCP<br> Telnet<br> TFTP<br> TPKT<br> UDP |
+|**IETF** | ARP<br> DHCP<br> DCE RPC<br> DNS<br> FTP (FTP_ADAT<br> FTP_DATA)<br> GSSAPI (RFC2743)<br> HTTP<br> ICMP<br> IPv4<br> LLDP<br> MDNS<br> NBNS<br> NTLM (NTLMSSP Auth Protocol)<br> RPC<br> SMB / Browse / NBDGM<br> SMB / CIFS<br> SNMP<br> SPNEGO (RFC4178)<br> SSH<br> Syslog<br> TCP<br> Telnet<br> TFTP<br> TPKT<br> UDP |
|**ISO** | CLNP (ISO 8473)<br> COTP (ISO 8073)<br> ISO Industrial Protocol<br> MQTT (IEC 20922) |
| **Jenesys** |FOX <br>Niagara |
|**Medical** |ASTM<br> HL7 <br> DICOM <br> POCT1 |
firewall-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/overview.md
Title: What is Azure Firewall Manager?
-description: Learn about Azure Firewall Manager features
+description: Learn about Azure Firewall Manager features.
Previously updated : 01/17/2023 Last updated : 02/26/2024
Firewall Manager can provide security management for two network architecture ty
- **Secured virtual hub**
- An [Azure Virtual WAN Hub](../virtual-wan/virtual-wan-about.md#resources) is a Microsoft-managed resource that lets you easily create hub and spoke architectures. When security and routing policies are associated with such a hub, it is referred to as a *[secured virtual hub](secured-virtual-hub.md)*.
+ An [Azure Virtual WAN Hub](../virtual-wan/virtual-wan-about.md#resources) is a Microsoft-managed resource that lets you easily create hub and spoke architectures. When security and routing policies are associated with such a hub, it's referred to as a *[secured virtual hub](secured-virtual-hub.md)*.
- **Hub virtual network**
- This is a standard Azure virtual network that you create and manage yourself. When security policies are associated with such a hub, it is referred to as a *hub virtual network*. At this time, only Azure Firewall Policy is supported. You can peer spoke virtual networks that contain your workload servers and services. You can also manage firewalls in standalone virtual networks that aren't peered to any spoke.
+ This is a standard Azure virtual network that you create and manage yourself. When security policies are associated with such a hub, it's referred to as a *hub virtual network*. At this time, only Azure Firewall Policy is supported. You can peer spoke virtual networks that contain your workload servers and services. You can also manage firewalls in standalone virtual networks that aren't peered to any spoke.
For a detailed comparison of *secured virtual hub* and *hub virtual network* architectures, see [What are the Azure Firewall Manager architecture options?](vhubs-and-vnets.md).
You can centrally deploy and configure multiple Azure Firewall instances that sp
You can use Azure Firewall Manager to centrally manage Azure Firewall policies across multiple secured virtual hubs. Your central IT teams can author global firewall policies to enforce organization wide firewall policy across teams. Locally authored firewall policies allow a DevOps self-service model for better agility.
-### Integrated with third-party security-as-a-service for advanced security
+### Integrated with partner security-as-a-service for advanced security
-In addition to Azure Firewall, you can integrate third-party security as a service (SECaaS) providers to provide additional network protection for your VNet and branch Internet connections.
+In addition to Azure Firewall, you can integrate partner security as a service (SECaaS) providers to provide more network protection for your virtual network and branch Internet connections.
This feature is available only with secured virtual hub deployments. -- VNet to Internet (V2I) traffic filtering
+- Virtual network to Internet (V2I) traffic filtering
- - Filter outbound virtual network traffic with your preferred third-party security provider.
- - Leverage advanced user-aware Internet protection for your cloud workloads running on Azure.
+ - Filter outbound virtual network traffic with your preferred partner security provider.
+ - Use advanced user-aware Internet protection for your cloud workloads running on Azure.
- Branch to Internet (B2I) traffic filtering
- Leverage your Azure connectivity and global distribution to easily add third-party filtering for branch to Internet scenarios.
+ Use your Azure connectivity and global distribution to easily add partner filtering for branch to Internet scenarios.
For more information about security partner providers, see [What are Azure Firewall Manager security partner providers?](trusted-security-partners.md)
Easily route traffic to your secured hub for filtering and logging without the n
This feature is available only with secured virtual hub deployments.
-You can use third-party providers for Branch to Internet (B2I) traffic filtering, side by side with Azure Firewall for Branch to VNet (B2V), VNet to VNet (V2V) and VNet to Internet (V2I).
+You can use partner providers for Branch to Internet (B2I) traffic filtering, side by side with Azure Firewall for Branch to virtual network (B2V), virtual network to virtual network (V2V) and virtual network to Internet (V2I).
### DDoS protection plan
Azure Firewall Manager has the following known issues:
|Issue |Description |Mitigation | ||||
-|Traffic splitting|Microsoft 365 and Azure Public PaaS traffic splitting isn't currently supported. As such, selecting a third-party provider for V2I or B2I also sends all Azure Public PaaS and Microsoft 365 traffic via the partner service.|Investigating traffic splitting at the hub.
+|Traffic splitting|Microsoft 365 and Azure Public PaaS traffic splitting isn't currently supported. As such, selecting a partner provider for V2I or B2I also sends all Azure Public PaaS and Microsoft 365 traffic via the partner service.|Investigating traffic splitting at the hub.
|Base policies must be in same region as local policy|Create all your local policies in the same region as the base policy. You can still apply a policy that was created in one region on a secured hub from another region.|Investigating|
|Filtering inter-hub traffic in secure virtual hub deployments|Secured Virtual Hub to Secured Virtual Hub communication filtering is supported with the Routing Intent feature.|Enable Routing Intent on your Virtual WAN Hub by setting Inter-hub to **Enabled** in Azure Firewall Manager. See [Routing Intent documentation](../virtual-wan/how-to-routing-policies.md) for more information about this feature.|
|Branch to branch traffic with private traffic filtering enabled|Branch to branch traffic can be inspected by Azure Firewall in secured hub scenarios if Routing Intent is enabled. |Enable Routing Intent on your Virtual WAN Hub by setting Inter-hub to **Enabled** in Azure Firewall Manager. See [Routing Intent documentation](../virtual-wan/how-to-routing-policies.md) for more information about this feature.|
|All Secured Virtual Hubs sharing the same virtual WAN must be in the same resource group.|This behavior is aligned with Virtual WAN Hubs today.|Create multiple Virtual WANs to allow Secured Virtual Hubs to be created in different resource groups.|
|Bulk IP address addition fails|The secure hub firewall goes into a failed state if you add multiple public IP addresses.|Add smaller public IP address increments. For example, add 10 at a time.|
-|DDoS Protection not supported with secured virtual hubs|DDoS Protection is not integrated with vWANs.|Investigating|
-|Activity logs not fully supported|Firewall policy does not currently support Activity logs.|Investigating|
-|Description of rules not fully supported|Firewall policy does not display the description of rules in an ARM export.|Investigating|
-|Azure Firewall Manager overwrites static and custom routes causing downtime in virtual WAN hub.|You should not use Azure Firewall Manager to manage your settings in deployments configured with custom or static routes. Updates from Firewall Manager can potentially overwrite static or custom route settings.|If you use static or custom routes, use the Virtual WAN page to manage security settings and avoid configuration via Azure Firewall Manager.<br><br>For more information, see [Scenario: Azure Firewall - custom](../virtual-wan/scenario-route-between-vnets-firewall.md).|
+|DDoS Protection not supported with secured virtual hubs|DDoS Protection isn't integrated with vWANs.|Investigating|
+|Activity logs not fully supported|Firewall policy doesn't currently support Activity logs.|Investigating|
+|Description of rules not fully supported|Firewall policy doesn't display the description of rules in an ARM export.|Investigating|
+|Azure Firewall Manager overwrites static and custom routes causing downtime in virtual WAN hub.|You shouldn't use Azure Firewall Manager to manage your settings in deployments configured with custom or static routes. Updates from Firewall Manager can potentially overwrite static or custom route settings.|If you use static or custom routes, use the Virtual WAN page to manage security settings and avoid configuration via Azure Firewall Manager.<br><br>For more information, see [Scenario: Azure Firewall - custom](../virtual-wan/scenario-route-between-vnets-firewall.md).|
## Next steps
firewall Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/overview.md
Previously updated : 11/07/2022 Last updated : 02/26/2024 # Customer intent: As an administrator, I want to evaluate Azure Firewall so I can determine if I want to use it.
To learn about Firewall Premium features, see [Azure Firewall Premium features](
## Azure Firewall Basic
-Azure Firewall Basic is intended for small and medium size (SMB) customers to secure their Azure cloud.
+Azure Firewall Basic is intended for small and medium-sized business (SMB) customers to secure their Azure cloud
environments. It provides the essential protection SMB customers need at an affordable price point. :::image type="content" source="media/overview/firewall-basic-diagram.png" alt-text="Diagram showing Firewall Basic.":::
To compare the all Firewall SKU features, see [Choose the right Azure Firewall S
You can use Azure Firewall Manager to centrally manage Azure Firewalls across multiple subscriptions. Firewall Manager uses firewall policy to apply a common set of network/application rules and configuration to the firewalls in your tenant.
-Firewall Manager supports firewalls in both VNet and Virtual WANs (Secure Virtual Hub) environments. Secure Virtual Hubs use the Virtual WAN route automation solution to simplify routing traffic to the firewall with just a few steps.
+Firewall Manager supports firewalls in both virtual network and Virtual WANs (Secure Virtual Hub) environments. Secure Virtual Hubs use the Virtual WAN route automation solution to simplify routing traffic to the firewall with just a few steps.
To learn more about Azure Firewall Manager, see [Azure Firewall Manager](../firewall-manager/overview.md).
To learn what's new with Azure Firewall, see [Azure updates](https://azure.micro
## Known issues
-For Azure Firewall known issues, see [Azure Firewall known issues](firewall-known-issues.md)
+For Azure Firewall known issues, see [Azure Firewall known issues](firewall-known-issues.md).
## Next steps
iot-hub-device-update Device Update Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-diagnostics.md
Title: Understand Device Update for Azure IoT Hub diagnostic features description: Understand what diagnostic features Device Update for IoT Hub has, including deployment error codes in UX and remote log collection.--++ Last updated 9/2/2022
iot-hub-device-update Device Update Log Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-log-collection.md
Title: Device Update for Azure IoT Hub log collection | Microsoft Docs description: Device Update for IoT Hub enables remote collection of diagnostic logs from connected IoT devices.--++ Last updated 10/26/2022
iot-operations Howto Deploy Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/develop/howto-deploy-dapr.md
+
+ Title: Deploy Dapr pluggable components
+
+description: Deploy Dapr and the IoT MQ pluggable components to a cluster.
+++++ Last updated : 1/31/2024++
+# Deploy Dapr pluggable components
++
+The Distributed Application Runtime (Dapr) is a portable, serverless, event-driven runtime that simplifies the process of building distributed applications. Dapr lets you build stateful or stateless apps without worrying about how the building blocks function. Dapr provides several [building blocks](https://docs.dapr.io/developing-applications/building-blocks/): pub/sub, state management, service invocation, actors, and more.
+
+Azure IoT MQ Preview supports two of these building blocks, powered by [Azure IoT MQ MQTT broker](../manage-mqtt-connectivity/overview-iot-mq.md):
+
+- Publish and subscribe
+- State management
+
+To use the IoT MQ Dapr pluggable components, define the component spec for each of the APIs and then [register it with the cluster](https://docs.dapr.io/operations/components/pluggable-components-registration/). The Dapr components listen on a Unix domain socket placed on the shared volume. The Dapr runtime connects to each socket and discovers all services from a given building block API that the component implements.
+
+## Install Dapr runtime
+
+To install the Dapr runtime, use the following Helm command:
+
+> [!NOTE]
+> If you completed the provided Azure IoT Operations Preview [quickstart](../get-started/quickstart-deploy.md), you already installed the Dapr runtime and the following steps are not required.
+
+```bash
+helm repo add dapr https://dapr.github.io/helm-charts/
+helm repo update
+helm upgrade --install dapr dapr/dapr --version=1.11 --namespace dapr-system --create-namespace --wait
+```
+
+> [!IMPORTANT]
+> **Dapr v1.12** is currently not supported.
+
+## Register MQ pluggable components
+
+To register MQ's pluggable pub/sub and state management components, create the component manifest yaml, and apply it to your cluster.
+
+To create the yaml file, use the following component definitions:
+
+> [!div class="mx-tdBreakAll"]
+> | Component | Description |
+> |-|-|
+> | `metadata.name` | The component name is important and is how a Dapr application references the component. |
+> | `spec.type` | [The type of the component](https://docs.dapr.io/operations/components/pluggable-components-registration/#define-the-component), which must be declared exactly as shown. It tells Dapr what kind of component (`pubsub` or `state`) it is and which Unix socket to use. |
+> | `spec.metadata.url` | The URL tells the component where the local MQ endpoint is. Defaults to port `8883`, which is MQ's default MQTT port with TLS enabled. |
+> | `spec.metadata.satTokenPath` | The Service Account Token (SAT) used to authenticate the Dapr components with the MQTT broker. |
+> | `spec.metadata.tlsEnabled` | Defines whether TLS is used by the MQTT broker. Defaults to `true`. |
+> | `spec.metadata.caCertPath` | The certificate chain path for validating the broker. Required if `tlsEnabled` is `true`. |
+> | `spec.metadata.logLevel` | The logging level of the component: 'Debug', 'Info', 'Warn', or 'Error'. |
+
+1. Save the following yaml, which contains the component definitions, to a file named `components.yaml`:
+
+ ```yml
+ # Pub/sub component
+ apiVersion: dapr.io/v1alpha1
+ kind: Component
+ metadata:
+ name: aio-mq-pubsub
+ namespace: azure-iot-operations
+ spec:
+ type: pubsub.aio-mq-pubsub-pluggable # DO NOT CHANGE
+ version: v1
+ metadata:
+ - name: url
+ value: "aio-mq-dmqtt-frontend:8883"
+ - name: satTokenPath
+ value: "/var/run/secrets/tokens/mqtt-client-token"
+ - name: tlsEnabled
+ value: true
+ - name: caCertPath
+ value: "/var/run/certs/aio-mq-ca-cert/ca.crt"
+ - name: logLevel
+ value: "Info"
+
+ # State Management component
+ apiVersion: dapr.io/v1alpha1
+ kind: Component
+ metadata:
+ name: aio-mq-statestore
+ namespace: azure-iot-operations
+ spec:
+ type: state.aio-mq-statestore-pluggable # DO NOT CHANGE
+ version: v1
+ metadata:
+ - name: url
+ value: "aio-mq-dmqtt-frontend:8883"
+ - name: satTokenPath
+ value: "/var/run/secrets/tokens/mqtt-client-token"
+ - name: tlsEnabled
+ value: true
+ - name: caCertPath
+ value: "/var/run/certs/aio-mq-ca-cert/ca.crt"
+ - name: logLevel
+ value: "Info"
+ ```
+
+1. Apply the component yaml to your cluster by running the following command:
+
+ ```bash
+ kubectl apply -f components.yaml
+ ```
+
+ Verify the following output:
+
+ ```output
+ component.dapr.io/aio-mq-pubsub created
+ component.dapr.io/aio-mq-statestore created
+ ```
+
+## Create authorization policy for IoT MQ
+
+To configure authorization policies to Azure IoT MQ, first you create a [BrokerAuthorization](../manage-mqtt-connectivity/howto-configure-authorization.md) resource.
+
+> [!NOTE]
+> If Broker Authorization is not enabled on this cluster, you can skip this section as the applications will have access to all MQTT topics, including those needed to access the IoT MQ State Store.
+
+1. Save the following yaml, which contains a BrokerAuthorization definition, to a file named `aio-dapr-authz.yaml`:
+
+ ```yml
+ apiVersion: mq.iotoperations.azure.com/v1beta1
+ kind: BrokerAuthorization
+ metadata:
+ name: my-dapr-authz-policies
+ namespace: azure-iot-operations
+ spec:
+ listenerRef:
+ - my-listener # change to match your listener name as needed
+ authorizationPolicies:
+ enableCache: false
+ rules:
+ - principals:
+ attributes:
+ - group: dapr-workload # match to the attribute annotated to the service account
+ brokerResources:
+ - method: Connect
+ - method: Publish
+ topics:
+ - "$services/statestore/#"
+ - method: Subscribe
+ topics:
+ - "clients/{principal.clientId}/services/statestore/#"
+ ```
+
+1. Apply the BrokerAuthorization definition to the cluster:
+
+ ```bash
+ kubectl apply -f aio-dapr-authz.yaml
+ ```
+
+## Next steps
+
+Now that you have deployed the Dapr components, you can [Use Dapr to develop distributed applications](howto-develop-dapr-apps.md).
iot-operations Howto Develop Dapr Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/develop/howto-develop-dapr-apps.md
- ignite-2023 Last updated 11/14/2023
-# CustomerIntent: As an developer, I want to understand how to use Dapr to develop distributed apps that talk with Azure IoT MQ.
+# CustomerIntent: As a developer, I want to understand how to use Dapr to develop distributed apps that talk with Azure IoT MQ.
# Use Dapr to develop distributed application workloads [!INCLUDE [public-preview-note](../includes/public-preview-note.md)]
-The Distributed Application Runtime (Dapr) is a portable, serverless, event-driven runtime that simplifies the process of building distributed application. Dapr enables developers to build stateful or stateless apps without worrying about how the building blocks function. Dapr provides several [building blocks](https://docs.dapr.io/developing-applications/building-blocks/): state management, service invocation, actors, pub/sub, and more. Azure IoT MQ Preview supports two of these building blocks:
+To use the IoT MQ Dapr pluggable components, deploy both the pub/sub and state store components in your application deployment along with your Dapr application. This guide shows you how to deploy an application using the Dapr SDK and IoT MQ pluggable components.
-- Publish and Subscribe, powered by [Azure IoT MQ MQTT broker](../manage-mqtt-connectivity/overview-iot-mq.md)-- State Management
+## Prerequisites
-To use Dapr pluggable components, define all the components, then add pluggable component containers to your [deployments](https://docs.dapr.io/operations/components/pluggable-components-registration/). The Dapr component listens to a Unix Domain Socket placed on the shared volume, and Dapr runtime connects with each socket and discovers all services from a given building block API that the component implements. Each deployment must have its own pluggable component defined. This guide shows you how to deploy an application using the Dapr SDK and IoT MQ pluggable components.
-
-## Install Dapr runtime
-
-To install the Dapr runtime, use the following Helm command. If you completed the provided Azure IoT Operations Preview [quickstart](../get-started/quickstart-deploy.md), you already installed the runtime.
-
-```bash
-helm repo add dapr https://dapr.github.io/helm-charts/
-helm repo update
-helm upgrade --install dapr dapr/dapr --version=1.11 --namespace dapr-system --create-namespace --wait
-```
-
-> [!IMPORTANT]
-> **Dapr v1.12** is currently not supported.
-
-## Register MQ's pluggable components
-
-To register MQ's pluggable Pub/sub and State Management components, create the component manifest yaml, and apply it to your cluster.
-
-To create the yaml file, use the following component definitions:
-
-> [!div class="mx-tdBreakAll"]
-> | Component | Description |
-> |-|-|
-> | `metadata.name` | The component name is important and is how a Dapr application references the component. |
-> | `spec.type` | [The type of the component](https://docs.dapr.io/operations/components/pluggable-components-registration/#define-the-component), which must be declared exactly as shown. It tells Dapr what kind of component (`pubsub` or `state`) it is and which Unix socket to use. |
-> | `spec.metadata.url` | The URL tells the component where the local MQ endpoint is. Defaults to `8883` is MQ's default MQTT port with TLS enabled. |
-> | `spec.metadata.satTokenPath` | The Service Account Token is used to authenticate the Dapr components with the MQTT broker |
-> | `spec.metadata.tlsEnabled` | Define if TLS is used by the MQTT broker. Defaults to `true` |
-> | `spec.metadata.caCertPath` | The certificate chain path for validating the broker, required if `tlsEnabled` is `true` |
-> | `spec.metadata.logLevel` | The logging level of the component. 'Debug', 'Info', 'Warn' and 'Error' |
-
-1. Save the following yaml, which contains the component definitions, to a file named `components.yaml`:
-
- ```yml
- # Pub/sub component
- apiVersion: dapr.io/v1alpha1
- kind: Component
- metadata:
- name: aio-mq-pubsub
- namespace: azure-iot-operations
- spec:
- type: pubsub.aio-mq-pubsub-pluggable # DO NOT CHANGE
- version: v1
- metadata:
- - name: url
- value: "aio-mq-dmqtt-frontend:8883"
- - name: satTokenPath
- value: "/var/run/secrets/tokens/mqtt-client-token"
- - name: tlsEnabled
- value: true
- - name: caCertPath
- value: "/var/run/certs/aio-mq-ca-cert/ca.crt"
- - name: logLevel
- value: "Info"
-
- # State Management component
- apiVersion: dapr.io/v1alpha1
- kind: Component
- metadata:
- name: aio-mq-statestore
- namespace: azure-iot-operations
- spec:
- type: state.aio-mq-statestore-pluggable # DO NOT CHANGE
- version: v1
- metadata:
- - name: url
- value: "aio-mq-dmqtt-frontend:8883"
- - name: satTokenPath
- value: "/var/run/secrets/tokens/mqtt-client-token"
- - name: tlsEnabled
- value: true
- - name: caCertPath
- value: "/var/run/certs/aio-mq-ca-cert/ca.crt"
- - name: logLevel
- value: "Info"
- ```
-
-1. Apply the component yaml to your cluster by running the following command:
-
- ```bash
- kubectl apply -f components.yaml
- ```
-
- Verify the following output:
-
- ```output
- component.dapr.io/aio-mq-pubsub created
- component.dapr.io/aio-mq-statestore created
- ```
-
-## Set up authorization policy between the application and MQ
-
-To configure authorization policies to Azure IoT MQ, first you create a [BrokerAuthorization resource](../manage-mqtt-connectivity/howto-configure-authorization.md).
-
-> [!NOTE]
-> If Broker Authorization is not enabled on this cluster, you can skip this section as the applications will have access to all MQTT topics.
-
-1. Annotate the service account `mqtt-client` with an [authorization attribute](../manage-mqtt-connectivity/howto-configure-authentication.md#create-a-service-account):
-
- ```bash
- kubectl annotate serviceaccount mqtt-client aio-mq-broker-auth/group=dapr-workload -n azure-iot-operations
- ```
-
-1. Save the following yaml, which contains the BrokerAuthorization definition, to a file named `aio-mq-authz.yaml`.
-
- Use the following definitions:
-
- > [!div class="mx-tdBreakAll"]
- > | Item | Description |
- > |-|-|
- > | `dapr-workload` | The Dapr application authorization attribute assigned to the service account |
- > | `topics` | Describe the topics required to communicate with the MQ State Store |
-
- ```yml
- apiVersion: mq.iotoperations.azure.com/v1beta1
- kind: BrokerAuthorization
- metadata:
- name: my-authz-policies
- namespace: azure-iot-operations
- spec:
- listenerRef:
- - my-listener # change to match your listener name as needed
- authorizationPolicies:
- enableCache: false
- rules:
- - principals:
- attributes:
- - group: dapr-workload
- brokerResources:
- - method: Connect
- - method: Publish
- topics:
- - "$services/statestore/#"
- - method: Subscribe
- topics:
- - "clients/{principal.clientId}/services/statestore/#"
- ```
-
-1. Apply the BrokerAuthorizaion definition to the cluster:
-
- ```bash
- kubectl apply -f aio-mq-authz.yaml
- ```
+* Azure IoT Operations deployed - [Deploy Azure IoT Operations](../get-started/quickstart-deploy.md)
+* IoT MQ Dapr Components deployed - [Deploy IoT MQ Dapr Components](./howto-deploy-dapr.md)
## Creating a Dapr application
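As an illustration, here's a minimal sketch of a Dapr application that exercises both building blocks through the components registered earlier. It assumes the Dapr Python SDK (`dapr` package); the key and payload values are placeholders:

```python
import json

from dapr.clients import DaprClient

# These names must match metadata.name in the registered component specs.
PUBSUB_COMPONENT = "aio-mq-pubsub"
STATESTORE_COMPONENT = "aio-mq-statestore"

with DaprClient() as client:
    # Cache a value in the IoT MQ state store.
    client.save_state(
        store_name=STATESTORE_COMPONENT,
        key="sensor-1",
        value=json.dumps({"temperature": 21.5}),
    )

    # Read the cached value back.
    state = client.get_state(store_name=STATESTORE_COMPONENT, key="sensor-1")
    print(state.data)

    # Publish a message through the IoT MQ MQTT broker.
    client.publish_event(
        pubsub_name=PUBSUB_COMPONENT,
        topic_name="sensor/data",
        data=json.dumps({"temperature": 21.5}),
        data_content_type="application/json",
    )
```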
After you finish writing the Dapr application, build the container:
## Deploy a Dapr application
-To deploy the Dapr application to your cluster, you can use either a Kubernetes [Pod](https://kubernetes.io/docs/concepts/workloads/pods/) or [Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/)
+The following [Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) definition specifies the volumes required to deploy the application, along with the required containers.
-The following Pod definition defines the different volumes required to deploy the application along with the required containers.
-
-To start, you create a yaml file that uses the following definitions:
+To start, create a yaml file with the following definitions:
> | Component | Description |
> |-|-|
To start, you create a yaml file that uses the following definitions:
1. Save the following yaml to a file named `dapr-app.yaml`: ```yml
+ apiVersion: v1
+ kind: ServiceAccount
+ metadata:
+ name: dapr-client
+ namespace: azure-iot-operations
+ annotations:
+ aio-mq-broker-auth/group: dapr-workload
+
apiVersion: apps/v1 kind: Deployment metadata:
To start, you create a yaml file that uses the following definitions:
dapr.io/app-port: "6001" dapr.io/app-protocol: "grpc" spec:
+ serviceAccountName: dapr-client
+ volumes: - name: dapr-unix-domain-socket emptyDir: {}
To start, you create a yaml file that uses the following definitions:
name: aio-ca-trust-bundle-test-only containers:
- # Container for the dapr quickstart application
+ # Container for the Dapr application
- name: mq-dapr-app
- image: <YOUR DAPR APPLICATION>
+ image: <YOUR_DAPR_APPLICATION>
- # Container for the Pub/sub component
+ # Container for the Dapr Pub/sub component
- name: aio-mq-pubsub-pluggable image: ghcr.io/azure/iot-mq-dapr-components/pubsub:latest volumeMounts:
To start, you create a yaml file that uses the following definitions:
- name: aio-ca-trust-bundle mountPath: /var/run/certs/aio-mq-ca-cert/
- # Container for the State Management component
+ # Container for the Dapr State store component
- name: aio-mq-statestore-pluggable image: ghcr.io/azure/iot-mq-dapr-components/statestore:latest volumeMounts:
Run the following command to view the logs:
kubectl logs dapr-workload daprd ```
-## Related content
+## Next steps
-- [Develop highly available applications](concept-about-distributed-apps.md)
+Now that you know how to develop a Dapr application, you can run through the tutorial to [Build an event-driven app with Dapr](tutorial-event-driven-with-dapr.md).
iot-operations Tutorial Event Driven With Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/develop/tutorial-event-driven-with-dapr.md
Title: Build event-driven apps with Dapr
+ Title: Build an event-driven app with Dapr
-description: Learn how to create a Dapr application that aggregates data and publishing on another topic
+description: Learn how to create a Dapr application that aggregates data and publishes it on another topic.
Last updated 11/13/2023
#CustomerIntent: As an operator, I want to configure IoT MQ to bridge to Azure Event Grid MQTT broker PaaS so that I can process my IoT data at the edge and in the cloud.
-# Build event-driven apps with Dapr
+# Build an event-driven app with Dapr
[!INCLUDE [public-preview-note](../includes/public-preview-note.md)]
-In this walkthrough, you deploy a Dapr application to the cluster. The Dapr application will consume simulated MQTT data published to Azure IoT MQ, apply a windowing function and then publish the result back to IoT MQ. This represents how high volume data can be aggregated on the edge to reduce message frequency and size. The Dapr application is stateless, and uses the IoT MQ state store to cache past values needed for the window calculations.
+In this walkthrough, you deploy a Dapr application to the cluster. The Dapr application consumes simulated MQTT data published to Azure IoT MQ, applies a windowing function, and then publishes the result back to IoT MQ. The published output represents how high-volume data can be aggregated on the edge to reduce message frequency and size. The Dapr application is stateless, and uses the IoT MQ state store to cache past values needed for the window calculations.
The Dapr application performs the following steps: 1. Subscribes to the `sensor/data` topic for sensor data.
-1. When receiving data on this topic, it's pushed to the Azure IoT MQ state store.
-1. Every **10 seconds**, it fetches the data from the state store, and calculates the *min*, *max*, *mean*, *median* and *75th percentile* values on any sensor data timestamped in the last **30 seconds**.
+1. When data is received on the topic, it's forwarded to the Azure IoT MQ state store.
+1. Every **10 seconds**, it fetches the data from the state store and calculates the *min*, *max*, *mean*, *median*, and *75th percentile* values on any sensor data timestamped in the last **30 seconds** (a sketch of this calculation follows the list).
1. Data older than **30 seconds** is expired from the state store. 1. The result is published to the `sensor/window_data` topic in JSON format.
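As a rough illustration of the windowing step, the following Python sketch computes the same aggregates over readings from the last 30 seconds. The reading structure (`timestamp` and `value` fields) is an assumption for illustration, not the application's actual schema:

```python
import statistics
import time

def window_stats(readings, window_seconds=30):
    """Aggregate readings timestamped within the last `window_seconds` seconds."""
    cutoff = time.time() - window_seconds
    values = sorted(r["value"] for r in readings if r["timestamp"] >= cutoff)
    if not values:
        return None
    return {
        "min": values[0],
        "max": values[-1],
        "mean": statistics.mean(values),
        "median": statistics.median(values),
        # Approximate 75th percentile: the value at the 75% position of the sorted list.
        "75_per": values[int(0.75 * (len(values) - 1))],
        "count": len(values),
    }
```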
The Dapr application performs the following steps:
## Prerequisites * Azure IoT Operations installed - [Deploy Azure IoT Operations](../get-started/quickstart-deploy.md)
-* Dapr runtime and MQ's pluggable components installed - [Use Dapr to develop distributed application workloads](../develop/howto-develop-dapr-apps.md)
-
+* IoT MQ Dapr components installed - [Install IoT MQ Dapr Components](./howto-deploy-dapr.md)
+
## Deploy the Dapr application
-At this point, you can deploy the Dapr application. When you register the components, that doesn't deploy the associated binary that is packaged in a container. To deploy the binary along with your application, you can use a Deployment to group the containerized Dapr application and the two components together.
+At this point, you can deploy the Dapr application. Registering the components doesn't deploy the associated binary that is packaged in a container. To deploy the binary along with your application, you can use a Deployment to group the containerized Dapr application and the two components together.
To start, create a yaml file that uses the following definitions:
To start, create a yaml file that uses the following definitions:
| `volumes.dapr-unix-domain-socket` | The socket file used to communicate with the Dapr sidecar |
| `volumes.mqtt-client-token` | The SAT used for authenticating the Dapr pluggable components with the MQ broker and State Store |
| `volumes.aio-mq-ca-cert-chain` | The chain of trust to validate the MQTT broker TLS cert |
-| `containers.mq-event-driven` | The prebuilt dapr application container. |
+| `containers.mq-event-driven` | The prebuilt Dapr application container. |
1. Save the following deployment yaml to a file named `app.yaml`: ```yml
+ apiVersion: v1
+ kind: ServiceAccount
+ metadata:
+ name: dapr-client
+ namespace: azure-iot-operations
+ annotations:
+ aio-mq-broker-auth/group: dapr-workload
+
apiVersion: apps/v1 kind: Deployment metadata:
To start, create a yaml file that uses the following definitions:
dapr.io/app-port: "6001" dapr.io/app-protocol: "grpc" spec:
- serviceAccountName: mqtt-client
+ serviceAccountName: dapr-client
volumes: - name: dapr-unix-domain-socket
To start, create a yaml file that uses the following definitions:
1. Confirm that the application deployed successfully. The pod should report all containers are ready after a short interval, as shown with the following command: ```bash
- kubectl get pods -w
+ kubectl get pods -n azure-iot-operations
``` With the following output: ```output
- pod/dapr-workload created
NAME READY STATUS RESTARTS AGE ...
- dapr-workload 4/4 Running 0 30s
+ mq-event-driven-dapr 4/4 Running 0 30s
```
Simulate test data by deploying a Kubernetes workload. It simulates a sensor by
1. Confirm the simulator is running: ```bash
- kubectl logs deployment/mqtt-publisher-deployment -f
+ kubectl logs deployment/mqtt-publisher-deployment -n azure-iot-operations -f
``` With the following output:
To verify the MQTT bridge is working, deploy an MQTT client to the cluster.
## Verify the Dapr application output
-1. Start a shell in the mosquitto client pod:
+1. Open a shell to the mosquitto client pod:
```bash kubectl exec --stdin --tty mqtt-client -n azure-iot-operations -- sh ```
-1. Subscribe to the `sensor/window_data` topic to see the publish output from the Dapr application:
+1. Subscribe to the `sensor/window_data` topic to observe the published output from the Dapr application:
```bash mosquitto_sub -L mqtts://aio-mq-dmqtt-frontend/sensor/window_data -u '$sat' -P $(cat /var/run/secrets/tokens/mqtt-client-token) --cafile /var/run/certs/aio-mq-ca-cert/ca.crt ```
-1. Verify the application is outputting a sliding windows calculation for the various sensors:
+1. Verify the application is outputting a sliding window calculation for the various sensors every 10 seconds:
```json {
The above tutorial uses a prebuilt container of the Dapr application. If you wou
### Build the application
-1. Check out the Explore IoT Operations repository:
+1. Clone the **Explore IoT Operations** repository:
```bash git clone https://github.com/Azure-Samples/explore-iot-operations
kubectl logs dapr-workload daprd
## Next steps
-* [Bridge MQTT data between IoT MQ and Azure Event Grid](../connect-to-cloud/tutorial-connect-event-grid.md)
+* [Bridge MQTT data between IoT MQ and Azure Event Grid](../connect-to-cloud/tutorial-connect-event-grid.md)
machine-learning How To Create Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-compute-instance.md
A compute instance won't be considered idle if any custom application is running
Also, if a compute instance has already been idle for a certain amount of time, if idle shutdown settings are updated to an amount of time shorter than the current idle duration, the idle time clock is reset to 0. For example, if the compute instance has already been idle for 20 minutes, and the shutdown settings are updated to 15 minutes, the idle time clock is reset to 0.
+> [!IMPORTANT]
+> If the compute instance is also configured with a [managed identity](#assign-managed-identity), the compute instance won't shut down due to inactivity unless the managed identity has *contributor* access to the Azure Machine Learning workspace. For more information on assigning permissions, see [Manage access to Azure Machine Learning workspaces](how-to-assign-roles.md).
+ The setting can be configured during compute instance creation or for existing compute instances via the following interfaces: # [Python SDK](#tab/python)
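For example, here's a minimal Python SDK v2 sketch that sets the idle shutdown time at creation. The workspace details are placeholders, and the `idle_time_before_shutdown_minutes` parameter on `ComputeInstance` is an assumption to verify against your SDK version:

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import ComputeInstance
from azure.identity import DefaultAzureCredential

# Placeholder workspace details; substitute your own values.
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<WORKSPACE>",
)

# Shut the instance down after 15 minutes of inactivity.
ci = ComputeInstance(
    name="my-instance",
    size="STANDARD_DS3_v2",
    idle_time_before_shutdown_minutes=15,
)
ml_client.compute.begin_create_or_update(ci).result()
```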
As an administrator, you can create a compute instance on behalf of a data scien
You can assign a system- or user-assigned [managed identity](../active-directory/managed-identities-azure-resources/overview.md) to a compute instance, to authenticate against other Azure resources such as storage. Using managed identities for authentication helps improve workspace security and management. For example, you can allow users to access training data only when logged in to a compute instance. Or use a common user-assigned managed identity to permit access to a specific storage account.
+> [!IMPORTANT]
+> If the compute instance is also configured for [idle shutdown](#configure-idle-shutdown), the compute instance won't shut down due to inactivity unless the managed identity has *contributor* access to the Azure Machine Learning workspace. For more information on assigning permissions, see [Manage access to Azure Machine Learning workspaces](how-to-assign-roles.md).
+ # [Python SDK](#tab/python) Use SDK V2 to create a compute instance and assign a system-assigned managed identity:
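For example, a minimal sketch assuming an `MLClient` configured as in the idle shutdown example above; the `"SystemAssigned"` identity type string is an assumption to verify against your SDK version:

```python
from azure.ai.ml.entities import ComputeInstance, IdentityConfiguration

# Create a compute instance with a system-assigned managed identity.
# The "SystemAssigned" type value is an assumption; check your SDK version.
ci = ComputeInstance(
    name="my-instance-msi",
    size="STANDARD_DS3_v2",
    identity=IdentityConfiguration(type="SystemAssigned"),
)
ml_client.compute.begin_create_or_update(ci).result()
```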
machine-learning How To Deploy Models Mistral https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-models-mistral.md
+
+ Title: How to deploy Mistral family of models with Azure Machine Learning studio
+
+description: Learn how to deploy Mistral Large with Azure Machine Learning studio.
++++ Last updated : 02/23/2024+
+reviewer: shubhirajMsft
++++
+#This functionality is also available in Azure AI Studio: /azure/ai-studio/how-to/deploy-models-mistral.md
+
+# How to deploy Mistral models with Azure Machine Learning studio
+Mistral AI offers two categories of models in Azure Machine Learning studio:
+
+- Premium models: Mistral Large. These models are available with pay-as-you-go token-based billing with Models as a Service in the studio model catalog.
+- Open models: Mixtral-8x7B-Instruct-v01, Mixtral-8x7B-v01, Mistral-7B-Instruct-v01, and Mistral-7B-v01. These models are also available in the Azure Machine Learning studio model catalog and can be deployed to dedicated VM instances in your own Azure subscription with managed online endpoints.
+
+You can browse the Mistral family of models in the model catalog by filtering on the Mistral collection.
+
+## Mistral Large
+
+In this article, you learn how to use Azure Machine Learning studio to deploy the Mistral Large model as a service with pay-as-you-go billing.
+
+Mistral Large is Mistral AI's most advanced Large Language Model (LLM). It can be used on any language-based task thanks to its state-of-the-art reasoning and knowledge capabilities.
+
+Additionally, Mistral Large is:
+
+- Specialized in RAG. Crucial information isn't lost in the middle of long context windows (up to 32K tokens).
+- Strong in coding. Code generation, review, and comments. Supports all mainstream coding languages.
+- Multi-lingual by design. Best-in-class performance in French, German, Spanish, and Italian - in addition to English. Dozens of other languages are supported.
+- Responsible AI. Efficient guardrails are baked into the model, with an additional safety layer available through the safe_mode option.
++
+## Deploy Mistral Large with pay-as-you-go
+
+Certain models in the model catalog can be deployed as a service with pay-as-you-go, providing a way to consume them as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need. This deployment option doesn't require quota from your subscription.
+
+Mistral Large can be deployed as a service with pay-as-you-go, and is offered by Mistral AI through the Microsoft Azure Marketplace. Note that Mistral AI can change or update the terms of use and pricing of this model.
+
+### Azure Marketplace model offerings
+
+The following models are available in Azure Marketplace for Mistral AI when deployed as a service with pay-as-you-go:
+
+* Mistral Large (preview)
+
+### Prerequisites
+
+- An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.
+- An Azure Machine Learning workspace. If you don't have one, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create it.
+
+ > [!IMPORTANT]
+ > The pay-as-you-go model deployment offering is available only in workspaces created in the **East US 2** and **France Central** regions.
+
+- Azure role-based access control (Azure RBAC) is used to grant access to operations. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the Resource Group.
+
+ For more information on permissions, see [Manage access to an Azure Machine Learning workspace](how-to-assign-roles.md).
+
+### Create a new deployment
+
+To create a deployment:
+
+1. Go to [Azure Machine Learning studio](https://ml.azure.com/home).
+1. Select the workspace in which you want to deploy your models. To use the pay-as-you-go model deployment offering, your workspace must belong to the **East US 2** or **France Central** region.
+1. Choose the model (Mistral-large) you want to deploy from the [model catalog](https://ml.azure.com/model/catalog).
+
+ Alternatively, you can initiate deployment by going to your workspace and selecting **Endpoints** > **Serverless endpoints** > **Create**.
+
+1. On the model's overview page in the model catalog, select **Deploy** and then **Pay-as-you-go**.
+
+ :::image type="content" source="media/how-to-deploy-models-mistral/mistral-deploy-pay-as-you-go.png" alt-text="A screenshot showing how to deploy a model with the pay-as-you-go option." lightbox="media/how-to-deploy-models-mistral/mistral-deploy-pay-as-you-go.png":::
+
+1. In the deployment wizard, select the link to **Azure Marketplace Terms** to learn more about the terms of use.
+1. You can also select the **Marketplace offer details** tab to learn about pricing for the selected model.
+1. If this is your first time deploying the model in the workspace, you have to subscribe your workspace for the particular offering (for example, Mistral-large). This step requires that your account has the **Azure AI Developer role** permissions on the Resource Group, as listed in the prerequisites. Each workspace has its own subscription to the particular Azure Marketplace offering, which allows you to control and monitor spending. Select **Subscribe and Deploy**. Currently you can have only one deployment for each model within a workspace.
+
+ :::image type="content" source="media/how-to-deploy-models-mistral/mistral-deploy-marketplace-terms.png" alt-text="A screenshot showing the terms and conditions of a given model." lightbox="media/how-to-deploy-models-mistral/mistral-deploy-marketplace-terms.png":::
+
+1. Once you subscribe the workspace for the particular Azure Marketplace offering, subsequent deployments of the _same_ offering in the _same_ workspace don't require subscribing again. If this scenario applies to you, you will see a **Continue to deploy** option to select.
+
+ :::image type="content" source="media/how-to-deploy-models-mistral/mistral-deploy-pay-as-you-go-project.png" alt-text="A screenshot showing a project that is already subscribed to the offering." lightbox="media/how-to-deploy-models-mistral/mistral-deploy-pay-as-you-go-project.png":::
+
+1. Give the deployment a name. This name becomes part of the deployment API URL. This URL must be unique in each Azure region.
+
+ :::image type="content" source="media/how-to-deploy-models-mistral/mistral-deployment-name.png" alt-text="A screenshot showing how to indicate the name of the deployment you want to create." lightbox="media/how-to-deploy-models-mistral/mistral-deployment-name.png":::
+
+1. Select **Deploy**. Wait until the deployment is finished and you're redirected to the serverless endpoints page.
+1. Select the endpoint to open its Details page.
+1. Select the **Test** tab to start interacting with the model.
+1. You can always find the endpoint's details, URL, and access keys by navigating to **Workspace** > **Endpoints** > **Serverless endpoints**.
+1. Take note of the **Target** URL and the **Secret Key** to call the deployment and generate chat completions using the [`<target_url>/v1/chat/completions`](#chat-api) API.
+
+To learn about billing for Mistral models deployed with pay-as-you-go, see [Cost and quota considerations for Mistral models deployed as a service](#cost-and-quota-considerations-for-mistral-large-deployed-as-a-service).
+
+### Consume the Mistral Large model as a service
+
+Mistral Large can be consumed using the chat API.
+
+1. In the **workspace**, select **Endpoints** > **Serverless endpoints**.
+1. Find and select the deployment you created.
+1. Copy the **Target** URL and the **Key** token values.
+1. Make an API request using the [`<target_url>/v1/chat/completions`](#chat-api) API.
+
+ For more information on using the APIs, see the [reference](#reference-for-mistral-large-deployed-as-a-service) section.
+
+### Reference for Mistral Large deployed as a service
+
+#### Chat API
+
+Use the method `POST` to send the request to the `/v1/chat/completions` route:
+
+__Request__
+
+```rest
+POST /v1/chat/completions HTTP/1.1
+Host: <DEPLOYMENT_URI>
+Authorization: Bearer <TOKEN>
+Content-type: application/json
+```
+
+#### Request schema
+
+The payload is a JSON-formatted string containing the following parameters:
+
+| Key | Type | Default | Description |
+|--|--|--|--|
+| `messages` | `array` | No default. This value must be specified. | The message or history of messages to use to prompt the model. |
+| `stream` | `boolean` | `False` | Streaming allows the generated tokens to be sent as data-only server-sent events whenever they become available. |
+| `max_tokens` | `integer` | `8192` | The maximum number of tokens to generate in the completion. The token count of your prompt plus `max_tokens` can't exceed the model's context length. |
+| `top_p` | `float` | `1` | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with `top_p` probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering `top_p` or `temperature`, but not both. |
+| `temperature` | `float` | `1` | The sampling temperature to use, between 0 and 2. Higher values mean the model samples more broadly across the distribution of tokens. Zero means greedy sampling. We recommend altering this or `top_p`, but not both. |
+| `ignore_eos` | `boolean` | `False` | Whether to ignore the EOS token and continue generating tokens after the EOS token is generated. |
+| `safe_prompt` | `boolean` | `False` | Whether to inject a safety prompt before all conversations. |
+
+The `messages` object has the following fields:
+
+| Key | Type | Value |
+|--|--||
+| `content` | `string` | The contents of the message. Content is required for all messages. |
+| `role` | `string` | The role of the message's author. One of `system`, `user`, or `assistant`. |
++
+#### Example
+
+__Body__
+
+```json
+{
+ "messages":
+ [
+ {
+ "role": "system",
+ "content": "You are a helpful assistant that translates English to Italian."
+ },
+ {
+ "role": "user",
+ "content": "Translate the following sentence from English to Italian: I love programming."
+ }
+ ],
+ "temperature": 0.8,
+    "max_tokens": 512
+}
+```
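+
+For illustration, the following minimal Python sketch sends this body to a serverless endpoint with the `requests` library. The endpoint URL and key values are placeholders; substitute the **Target** URL and **Secret Key** noted when you created the deployment.
+
+```python
+# Minimal sketch: call the chat completions API on a serverless endpoint.
+# TARGET_URL and SECRET_KEY are placeholders for the values shown on the
+# endpoint's details page.
+import requests
+
+TARGET_URL = "https://<your-deployment>.<region>.inference.ml.azure.com"  # placeholder
+SECRET_KEY = "<SECRET_KEY>"  # placeholder
+
+payload = {
+    "messages": [
+        {"role": "system", "content": "You are a helpful assistant that translates English to Italian."},
+        {"role": "user", "content": "Translate the following sentence from English to Italian: I love programming."},
+    ],
+    "temperature": 0.8,
+    "max_tokens": 512,
+}
+
+response = requests.post(
+    f"{TARGET_URL}/v1/chat/completions",
+    headers={"Authorization": f"Bearer {SECRET_KEY}", "Content-Type": "application/json"},
+    json=payload,
+    timeout=60,
+)
+response.raise_for_status()
+print(response.json()["choices"][0]["message"]["content"])
+```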
+
+#### Response schema
+
+The response payload is a dictionary with the following fields.
+
+| Key | Type | Description |
+|--|--|-|
+| `id` | `string` | A unique identifier for the completion. |
+| `choices` | `array` | The list of completion choices the model generated for the input messages. |
+| `created` | `integer` | The Unix timestamp (in seconds) of when the completion was created. |
+| `model` | `string` | The `model_id` used for the completion. |
+| `object` | `string` | The object type, which is always `chat.completion`. |
+| `usage` | `object` | Usage statistics for the completion request. |
+
+> [!TIP]
+> In streaming mode, for each chunk of the response, `finish_reason` is always `null`, except for the last chunk, which is terminated by a `[DONE]` payload. In each `choices` object, the `messages` key is replaced by `delta`.
++
+The `choices` object is a dictionary with the following fields.
+
+| Key | Type | Description |
+||--|--|
+| `index` | `integer` | Choice index. When `best_of` > 1, the index in this array might not be in order and might not be `0` to `n-1`. |
+| `messages` or `delta` | `string` | Chat completion result in the `messages` object. When streaming mode is used, the `delta` key is used instead. |
+| `finish_reason` | `string` | The reason the model stopped generating tokens: <br>- `stop`: The model hit a natural stop point or a provided stop sequence. <br>- `length`: The maximum number of tokens was reached. <br>- `content_filter`: RAI moderation flagged the content and forced moderation. <br>- `content_filter_error`: An error occurred during moderation, and a decision couldn't be made on the response. <br>- `null`: The API response is still in progress or incomplete. |
+| `logprobs` | `object` | The log probabilities of the generated tokens in the output text. |
++
+The `usage` object is a dictionary with the following fields.
+
+| Key | Type | Value |
+||--|--|
+| `prompt_tokens` | `integer` | Number of tokens in the prompt. |
+| `completion_tokens` | `integer` | Number of tokens generated in the completion. |
+| `total_tokens` | `integer` | Total tokens. |
+
+The `logprobs` object is a dictionary with the following fields:
+
+| Key | Type | Value |
+||-||
+| `text_offsets` | `array` of `integers` | The position or index of each token in the completion output. |
+| `token_logprobs` | `array` of `float` | Selected `logprobs` from the dictionaries in the `top_logprobs` array. |
+| `tokens` | `array` of `string` | Selected tokens. |
+| `top_logprobs` | `array` of `dictionary` | Array of dictionaries. In each dictionary, the key is the token and the value is its log probability. |
+
+#### Example
+
+The following is an example response:
+
+```json
+{
+ "id": "12345678-1234-1234-1234-abcdefghijkl",
+ "object": "chat.completion",
+ "created": 2012359,
+ "model": "",
+ "choices": [
+ {
+ "index": 0,
+ "finish_reason": "stop",
+ "message": {
+ "role": "assistant",
+        "content": "Sure, I'd be happy to help! The translation of \"I love programming\" from English to Italian is:\n\n\"Amo la programmazione.\"\n\nHere's a breakdown of the translation:\n\n* \"I love\" in English becomes \"Amo\" in Italian.\n* \"programming\" in English becomes \"la programmazione\" in Italian.\n\nI hope that helps! Let me know if you have any other sentences you'd like me to translate."
+ }
+ }
+ ],
+ "usage": {
+ "prompt_tokens": 10,
+ "total_tokens": 40,
+ "completion_tokens": 30
+ }
+}
+```
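+
+As noted in the tip earlier, when `stream` is `true` the response arrives as data-only server-sent events whose `choices` entries carry a `delta` instead of a message, terminated by a `[DONE]` payload. The following hedged Python sketch consumes such a stream; it assumes standard `data:` SSE framing and an OpenAI-style `delta` object, and reuses placeholder URL and key values.
+
+```python
+# Hedged streaming sketch: read incremental `delta` content until the
+# terminating [DONE] payload. TARGET_URL and SECRET_KEY are placeholders.
+import json
+
+import requests
+
+TARGET_URL = "https://<your-deployment>.<region>.inference.ml.azure.com"  # placeholder
+SECRET_KEY = "<SECRET_KEY>"  # placeholder
+
+response = requests.post(
+    f"{TARGET_URL}/v1/chat/completions",
+    headers={"Authorization": f"Bearer {SECRET_KEY}"},
+    json={"messages": [{"role": "user", "content": "Say hello."}], "stream": True},
+    stream=True,
+    timeout=60,
+)
+for raw_line in response.iter_lines():
+    if not raw_line:
+        continue  # skip SSE keep-alive blank lines
+    data = raw_line.decode("utf-8").removeprefix("data: ").strip()
+    if data == "[DONE]":
+        break
+    chunk = json.loads(data)
+    delta = chunk["choices"][0].get("delta", {})
+    print(delta.get("content", ""), end="", flush=True)
+```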
+
+#### Additional inference examples
+
+| **Sample Type** | **Sample Notebook** |
+|-|-|
+| CLI using CURL and Python web requests | [webrequests.ipynb](https://aka.ms/mistral-large/webrequests-sample)|
+| OpenAI SDK (experimental) | [openaisdk.ipynb](https://aka.ms/mistral-large/openaisdk) |
+| LangChain | [langchain.ipynb](https://aka.ms/mistral-large/langchain-sample) |
+| Mistral AI | [mistralai.ipynb](https://aka.ms/mistral-large/mistralai-sample) |
+| LiteLLM | [litellm.ipynb](https://aka.ms/mistral-large/litellm-sample) |
+
+## Cost and quotas
+
+### Cost and quota considerations for Mistral Large deployed as a service
+
+Mistral models deployed as a service are offered by Mistral AI through Azure Marketplace and integrated with Azure Machine Learning studio for use. You can find Azure Marketplace pricing when deploying the models.
+
+Each time a workspace subscribes to a given model offering from Azure Marketplace, a new resource is created to track the costs associated with its consumption. The same resource is used to track costs associated with inference; however, multiple meters are available to track each scenario independently.
+
+For more information on how to track costs, see [Monitor costs for models offered through the Azure Marketplace](../ai-studio/how-to/costs-plan-manage.md#monitor-costs-for-models-offered-through-the-azure-marketplace).
+
+Quota is managed per deployment. Each deployment has a rate limit of 200,000 tokens per minute and 1,000 API requests per minute. However, we currently limit one deployment per model per project. Contact Microsoft Azure Support if the current rate limits aren't sufficient for your scenarios.
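+
+If a deployment does hit these per-minute limits, requests are throttled. A common client-side mitigation is exponential backoff; the sketch below assumes throttling surfaces as HTTP 429, which is typical for REST APIs but isn't confirmed by this article.
+
+```python
+# Hedged sketch: retry a request with exponential backoff when the
+# endpoint signals throttling (assumed here to be HTTP 429).
+import time
+
+import requests
+
+def post_with_backoff(url, headers, payload, max_retries=5):
+    delay = 1.0
+    for _ in range(max_retries):
+        response = requests.post(url, headers=headers, json=payload, timeout=60)
+        if response.status_code != 429:
+            response.raise_for_status()
+            return response.json()
+        time.sleep(delay)  # back off before retrying
+        delay *= 2
+    raise RuntimeError("Still rate limited after retries; reduce request rate.")
+```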
+
+## Content filtering
+
+Models deployed as a service with pay-as-you-go are protected by Azure AI content safety. With Azure AI content safety enabled, both the prompt and completion pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions. Learn more about [Azure AI Content Safety](/azure/ai-services/content-safety/overview).
+
+## Related content
+
+- [Model Catalog and Collections](concept-model-catalog.md)
+- [Deploy and score a machine learning model by using an online endpoint](how-to-deploy-online-endpoints.md)
+- [Plan and manage costs for Azure AI Studio](concept-plan-manage-cost.md)
machine-learning How To Identity Based Service Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-identity-based-service-authentication.md
Certain machine learning scenarios involve working with private data. In such ca
To enable authentication with compute managed identity:

* Create compute with managed identity enabled. See the [compute cluster](#compute-cluster) section, or for compute instance, the [Assign managed identity](how-to-create-compute-instance.md#assign-managed-identity) section.
+ > [!IMPORTANT]
+ > If the compute instance is also configured for [idle shutdown](how-to-create-compute-instance.md#configure-idle-shutdown), the compute instance won't shut down due to inactivity unless the managed identity has *contributor* access to the Azure Machine Learning workspace. For more information on assigning permissions, see [Manage access to Azure Machine Learning workspaces](how-to-assign-roles.md).
* Grant the compute managed identity at least the Storage Blob Data Reader role on the storage account.
* Create any datastores with identity-based authentication enabled. See [Create datastores](how-to-datastore.md).
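
To sanity-check that the compute managed identity can actually read from the storage account, you can run a short script on the compute itself. The following is a minimal sketch using the Azure SDK for Python; the account and container names are placeholders.

```python
# Minimal sketch: verify the compute's managed identity can read blobs.
# <account> and <container> are placeholders for your storage account and
# container names. Run this on the compute that has the managed identity.
from azure.identity import ManagedIdentityCredential
from azure.storage.blob import BlobServiceClient

credential = ManagedIdentityCredential()
service = BlobServiceClient(
    account_url="https://<account>.blob.core.windows.net",
    credential=credential,
)
container = service.get_container_client("<container>")
for blob in container.list_blobs():
    print(blob.name)  # succeeds only if the identity has blob read access
```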
managed-instance-apache-cassandra Best Practice Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/best-practice-performance.md
Title: Best practices for optimal performance in Azure Managed Instance for Apache Cassandra
-description: Learn about best practices to ensure optimal performance from Azure Managed Instance for Apache Cassandra
-
+description: Learn about best practices to ensure optimal performance from Azure Managed Instance for Apache Cassandra.
+ Last updated 04/05/2023-+ keywords: azure performance cassandra
Transactional workloads typically need a data center optimized for low latency,
### Optimizing for analytical workloads
-We recommend customers apply the following `cassandra.yaml` settings for analytical workloads (see [here](create-cluster-portal.md#update-cassandra-configuration) on how to apply)
+We recommend customers apply the following `cassandra.yaml` settings for analytical workloads (see [here](create-cluster-portal.md#update-cassandra-configuration) on how to apply).
We recommend boosting Cassandra client driver timeouts in accordance with the ti
### Optimizing for low latency
-Our default settings are already suitable for low latency workloads. To ensure best performance for tail latencies we highly recommend using a client driver that supports [speculative execution](https://docs.datastax.com/en/developer/java-driver/4.10/manual/core/speculative_execution/) and configuring your client accordingly. For Java V4 driver, you can find a demo illustrating how this works and how to enable the policy [here](https://github.com/Azure-Samples/azure-cassandra-mi-java-v4-speculative-execution).
+Our default settings are already suitable for low latency workloads. To ensure best performance for tail latencies, we highly recommend using a client driver that supports [speculative execution](https://docs.datastax.com/en/developer/java-driver/4.10/manual/core/speculative_execution/) and configuring your client accordingly. For the Java V4 driver, you can find a demo illustrating how this works and how to enable the policy [here](https://github.com/Azure-Samples/azure-cassandra-mi-java-v4-speculative-execution).
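+
+The linked demo uses the Java V4 driver. For reference, here's a hedged sketch of an equivalent configuration with the DataStax Python driver; the contact point and keyspace are placeholders, and statements must be marked idempotent for speculative execution to apply.
+
+```python
+# Hedged sketch: constant speculative execution with the DataStax Python
+# driver. <contact-point> and my_keyspace are placeholders.
+from cassandra.cluster import EXEC_PROFILE_DEFAULT, Cluster, ExecutionProfile
+from cassandra.policies import ConstantSpeculativeExecutionPolicy
+from cassandra.query import SimpleStatement
+
+profile = ExecutionProfile(
+    # Send an extra speculative attempt every 100 ms, up to 2 attempts.
+    speculative_execution_policy=ConstantSpeculativeExecutionPolicy(0.1, 2),
+)
+cluster = Cluster(["<contact-point>"], execution_profiles={EXEC_PROFILE_DEFAULT: profile})
+session = cluster.connect("my_keyspace")
+
+# Speculative execution only applies to statements marked idempotent.
+query = SimpleStatement("SELECT * FROM my_table WHERE id = %s", is_idempotent=True)
+row = session.execute(query, ("some-id",)).one()
+```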
Like every database system, Cassandra works best if the CPU utilization is aroun
:::image type="content" source="./media/best-practice-performance/metrics.png" alt-text="Screenshot of CPU metrics." lightbox="./media/best-practice-performance/metrics.png" border="true":::
-If the CPU is permanently above 80% for most nodes the database will become overloaded manifesting in multiple client timeouts. In this scenario, we recommend taking the following actions:
+If the CPU is permanently above 80% for most nodes, the database becomes overloaded, manifesting in multiple client timeouts. In this scenario, we recommend taking the following actions:
* Vertically scale up to a SKU with more CPU cores (especially if there are only 8 cores or fewer).
* Horizontally scale by adding more nodes (as mentioned earlier, the number of nodes should be a multiple of the replication factor).
If the CPU is only high for a few nodes, but low for the others, it indicates a
> [!NOTE]
-> Currently changing SKU is only supported via ARM template deployment. You can deploy/edit ARM template, and replace SKU with one of the following.
+> Changing the SKU is supported via the Azure portal, Azure CLI, and ARM template deployment. You can deploy/edit the ARM template and replace the SKU with one of the following.
> > - Standard_E8s_v4 > - Standard_E16s_v4
If the CPU is only high for a few nodes, but low for the others, it indicates a
> - Standard_L8as_v3 > - Standard_L16as_v3 > - Standard_L32as_v3
+>
+> Currently, we don't support transitioning across SKU families. For instance, if you currently have a `Standard_DS13_v2` and want to upgrade to a larger SKU such as `Standard_DS14_v2`, this option isn't available. However, you can open a support ticket to request an upgrade to the higher SKU.
If your IOPS max out what your SKU supports, you can:
* [Scale up the data center(s)](create-cluster-portal.md#scale-a-datacenter) by adding more nodes.
-For more information refer to [Virtual Machine and disk performance](../virtual-machines/disks-performance.md).
+For more information, refer to [Virtual Machine and disk performance](../virtual-machines/disks-performance.md).
### Network performance
In most cases network performance is sufficient. However, if you're frequently s
:::image type="content" source="./media/best-practice-performance/metrics-network.png" alt-text="Screenshot of network metrics." lightbox="./media/best-practice-performance/metrics-network.png" border="true":::
-If you only see the network elevated for a small number of nodes, you might have a hot partition and need to review your data distribution and/or access patterns for a potential skew.
+If you only see the network elevated for a few nodes, you might have a hot partition and need to review your data distribution and/or access patterns for a potential skew.
* Vertically scale up to a different SKU supporting more network I/O.
* Horizontally scale up the cluster by adding more nodes.
Deployments should be planned and provisioned to support the maximum number of p
### Disk space
-In most cases, there's sufficient disk space as default deployments are optimized for IOPS, which leads to low utilization of the disk. Nevertheless, we advise occasionally reviewing disk space metrics. Cassandra accumulates a lot of disk and then reduces it when compaction is triggered. Hence it is important to review disk usage over longer periods to establish trends - like compaction unable to recoup space.
+In most cases, there's sufficient disk space, as default deployments are optimized for IOPS, which leads to low utilization of the disk. Nevertheless, we advise occasionally reviewing disk space metrics. Cassandra accumulates a lot of disk space and then reduces it when compaction is triggered. Hence it's important to review disk usage over longer periods to establish trends, like compaction being unable to recoup space.
> [!NOTE] > In order to ensure available space for compaction, disk utilization should be kept to around 50%.
If you only see this behavior for a few nodes, you might have a hot partition an
### JVM memory
-Our default formula assigns half the VM's memory to the JVM with an upper limit of 31 GB - which in most cases is a good balance between performance and memory. Some workloads, especially ones which have frequent cross-partition reads or range scans might be memory challenged.
+Our default formula assigns half the VM's memory to the JVM with an upper limit of 31 GB - which in most cases is a good balance between performance and memory. Some workloads, especially ones that have frequent cross-partition reads or range scans might be memory challenged.
In most cases, memory is reclaimed effectively by the Java garbage collector, but if the CPU is often above 80%, there aren't enough CPU cycles left for the garbage collector. So any CPU performance problems should be addressed before memory problems.
-If the CPU hovers below 70%, and the garbage collection isn't able to reclaim memory, you might need more JVM memory. This is especially the case if you're on a SKU with limited memory. In most cases, you'll need to review your queries and client settings and reduce `fetch_size` along with what is chosen in `limit` within your CQL query.
+If the CPU hovers below 70% and garbage collection isn't able to reclaim memory, you might need more JVM memory. This is especially the case if you're on a SKU with limited memory. In most cases, you need to review your queries and client settings and reduce `fetch_size` along with the value chosen for `limit` in your CQL query.
If you indeed need more memory, you can:
If you indeed need more memory, you can:
### Tombstones
-We run repairs every seven days with reaper which removes rows whose TTL has expired (called "tombstone"). Some workloads have more frequent deletes and see warnings like `Read 96 live rows and 5035 tombstone cells for query SELECT ...; token <token> (see tombstone_warn_threshold)` in the Cassandra logs, or even errors indicating that a query couldn't be fulfilled due to excessive tombstones.
+We run repairs every seven days with Reaper, which removes rows whose TTL has expired (called "tombstones"). Some workloads have more frequent deletes and see warnings like `Read 96 live rows and 5035 tombstone cells for query SELECT ...; token <token> (see tombstone_warn_threshold)` in the Cassandra logs, or even errors indicating that a query couldn't be fulfilled due to excessive tombstones.
A short-term mitigation, if queries don't get fulfilled, is to increase the `tombstone_failure_threshold` in the [Cassandra config](create-cluster-portal.md#update-cassandra-configuration) from the default 100,000 to a higher value.
This indicates a problem in the data model. Here's a [stack overflow article](ht
## Specialized optimizations

### Compression
-Cassandra allows the selection of an appropriate compression algorithm when a table is created (see [Compression](https://cassandra.apache.org/doc/latest/cassandra/operating/compression.html)) The default is LZ4 which is excellent
-for throughput and CPU but consumes more space on disk. Using Zstd (Cassandra 4.0 and up) saves about ~12% space with
+Cassandra allows the selection of an appropriate compression algorithm when a table is created (see [Compression](https://cassandra.apache.org/doc/latest/cassandra/operating/compression.html)). The default is LZ4, which is excellent for throughput and CPU but consumes more space on disk. Using Zstd (Cassandra 4.0 and up) saves about 12% space with
minimal CPU overhead.

### Optimizing memtable heap space

Our default is to use 1/4 of the JVM heap for [memtable_heap_space](https://cassandra.apache.org/doc/latest/cassandra/configuration/cass_yaml_file.html#memtable_heap_space) in the cassandra.yaml. For write-oriented applications and/or on SKUs with small memory, this can lead to frequent flushing and fragmented sstables, thus requiring more compaction.
-In such cases increasing it to at least 4048 might be beneficial but requires careful benchmarking
-to make sure other operations (e.g. reads) aren't affected.
+In such cases, increasing it to at least 4048 might be beneficial but requires careful benchmarking
+to make sure other operations (for example, reads) aren't affected.
## Next steps
managed-instance-apache-cassandra Manage Resources Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/manage-resources-cli.md
az managed-cassandra datacenter create \
> - Standard_DS14_v2 > - Standard_D8s_v4 > - Standard_D16s_v4
-> - Standard_D32s_v4
+> - Standard_D32s_v4
+> - Standard_L8s_v3
+> - Standard_L16s_v3
+> - Standard_L32s_v3
+> - Standard_L8as_v3
+> - Standard_L16as_v3
+> - Standard_L32as_v3
>
+> Currently, we don't support transitioning across SKU families. For instance, if you currently have a `Standard_DS13_v2` and want to upgrade to a larger SKU such as `Standard_DS14_v2`, this option isn't available. However, you can open a support ticket to request an upgrade to the higher SKU.
+>
> Note also that `--availability-zone` is set to `false`. To enable availability zones, set this to `true`. Availability zones increase the availability SLA of the service. For more details, review the full SLA details [here](https://azure.microsoft.com/support/legal/sla/managed-instance-apache-cassandra/v1_0/). > [!WARNING]
mariadb Concepts Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-connectivity-architecture.md
As part of ongoing service maintenance, we'll periodically refresh compute hardw
The following table lists the gateway IP addresses of the Azure Database for MariaDB gateway for all data regions. The most up-to-date information about the gateway IP addresses for each region is maintained in this table. The columns represent the following:
-* **Region Name:** This column lists the name of Azure region where Azure Database for PostgreSQL - Single Server is offered.
* **Gateway IP address subnets:** This column lists the IP address subnets of the gateway rings located in the particular region. As we retire older gateway hardware, we recommend that you open the client-side firewall to allow outbound traffic for the IP address subnets in the region you're operating.
-* **Gateway IP addresses (decommissioning):** This column lists the IP addresses of the gateways hosted on an older generation of hardware that is being decommissioned right now. If you're provisioning a new server, you can ignore these IP addresses. If you have an existing server, continue to retain the outbound rule for the firewall for these IP addresses as we haven't decommissioned it yet. If you drop the firewall rules for these IP addresses, you may get connectivity errors. Instead, you're expected to proactively add the new IP addresses listed in Gateway IP addresses column to the outbound firewall rule as soon as you receive the notification for decommissioning. This will ensure when your server is migrated to latest gateway hardware, there's no interruptions in connectivity to your server.
-* **Gateway IP addresses (decommissioned):** This column lists the IP addresses of the gateway rings, which are decommissioned and are no longer in operations. You can safely remove these IP addresses from your outbound firewall rule.
--
-| **Region name** | **Gateway IP address subnets** | **Gateway IP addresses (decommissioning)** | **Gateway IP addresses (decommissioned)** |
-|:--|:--|:|:|
-| Australia Central | 20.36.105.32/29 | | |
-| Australia Central 2 | 20.36.113.32/29 | | |
-| Australia East | 13.70.112.32/29, 40.79.160.32/29, 40.79.168.32/29 | 13.75.149.87 | |
-| Australia South East | 13.77.49.32/29 | 13.73.109.251 | |
-| Brazil South | 191.233.200.32/29, 191.234.144.32/29 | | 104.41.11.5 |
-| Canada Central | 13.71.168.32/29, 20.38.144.32/29, 52.246.152.32/29 | | |
-| Canada East | 40.69.105.32/29 | 40.86.226.166 | |
-| Central US | 104.208.21.192/29, 13.89.168.192/29, 52.182.136.192/29 | 13.67.215.62 | |
-| China East | 52.130.112.136/29 | | |
-| China East 2 | 52.130.120.88/29 | | |
-| China East 3 | 52.130.128.88/29 | | |
-| China North | 52.130.128.88/29 | | |
-| China North 2 | 52.130.40.64/29 | | |
-| China North 3 | 13.75.32.192/29, 13.75.33.192/29 | | |
-| East Asia | 13.75.32.192/29, 13.75.33.192/29 | | |
-| East US | 20.42.65.64/29, 20.42.73.0/29, 52.168.116.64/29 | 40.121.158.30 | 191.238.6.43 |
-| East US 2 | 104.208.150.192/29, 40.70.144.192/29, 52.167.104.192/29 | 52.177.185.181 | |
-| France Central | 40.79.136.32/29, 40.79.144.32/29 | | |
-| France South | 40.79.176.40/29, 40.79.177.32/29 | | |
-| Germany West Central | 51.116.152.32/29, 51.116.240.32/29, 51.116.248.32/29 | | |
-| India Central | 104.211.86.32/29, 20.192.96.32/29 | | |
-| India South | 40.78.192.32/29, 40.78.193.32/29 | | |
-| India West | 104.211.144.32/29, 104.211.145.32/29 | 104.211.160.80 | |
-| Japan East | 13.78.104.32/29, 40.79.184.32/29, 40.79.192.32/29 | 13.78.61.196 | |
-| Japan West | 40.74.96.32/29 | 104.214.148.156 | |
-| Korea Central | 20.194.64.32/29, 20.44.24.32/29, 52.231.16.32/29 | 52.231.32.42 | |
-| Korea South | 52.231.145.0/29 | 52.231.200.86 | |
-| North Central US | 52.162.105.192/29 | 23.96.178.199 | |
-| North Europe | 13.69.233.136/29, 13.74.105.192/29, 52.138.229.72/29 | 40.113.93.91 | 191.235.193.75 |
-| South Africa North | 102.133.120.32/29, 102.133.152.32/29, 102.133.248.32/29 | | |
-| South Africa West | 102.133.25.32/29 | | |
-| South Central US | 20.45.121.32/29, 20.49.88.32/29, 20.49.89.32/29, 40.124.64.136/29 | 13.66.62.124 | 23.98.162.75 |
-| South East Asia | 13.67.16.192/29, 23.98.80.192/29, 40.78.232.192/29 | 104.43.15.0 | |
-| Switzerland North | 51.107.56.32/29, 51.103.203.192/29, 20.208.19.192/29, 51.107.242.32/27 | | |
-| Switzerland West | 51.107.153.32/29 | | |
-| UAE Central | 20.37.72.96/29, 20.37.73.96/29 | | |
-| UAE North | 40.120.72.32/29, 65.52.248.32/29 | | |
-| UK South | 51.105.64.32/29, 51.105.72.32/29, 51.140.144.32/29 | | |
-| UK West | 51.140.208.96/29, 51.140.209.32/29 | | |
-| West Central US | 13.71.193.32/29 | 13.78.145.25 | |
-| West Europe | 104.40.169.32/29, 13.69.112.168/29, 52.236.184.32/29 | 40.68.37.158 | 191.237.232.75 |
-| West US | 13.86.217.224/29 | 104.42.238.205 | 23.99.34.75 |
-| West US 2 | 13.66.136.192/29, 40.78.240.192/29, 40.78.248.192/29 | | |
-| West US 3 | 20.150.168.32/29, 20.150.176.32/29, 20.150.184.32/29 | | |
-
+* **Gateway IP addresses**: Periodically, individual **Gateway IP addresses** will be retired and traffic will be migrated to corresponding **Gateway IP address subnets**.
+
+We strongly encourage customers to move away from relying on any individual Gateway IP address (since these will be retired in the future). Instead, allow network traffic to reach both the individual Gateway IP addresses and the Gateway IP address subnets in a region.
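+
+When auditing client-side firewall rules, it can help to confirm that the IP address your client resolves for its server falls inside the published subnets for its region. The following is a minimal sketch using Python's standard `ipaddress` module; the server FQDN is a placeholder, and the subnets shown are the East US entries from the table below.
+
+```python
+# Minimal sketch: check whether the IP resolved for a server falls within
+# the published gateway subnets for its region. The FQDN is a placeholder.
+import ipaddress
+import socket
+
+server_fqdn = "<servername>.mariadb.database.azure.com"  # placeholder
+# Example: East US subnets from the table below.
+region_subnets = ["20.42.65.64/29", "20.42.73.0/29", "52.168.116.64/29", "20.62.132.160/27"]
+
+resolved = ipaddress.ip_address(socket.gethostbyname(server_fqdn))
+allowed = any(resolved in ipaddress.ip_network(subnet) for subnet in region_subnets)
+print(f"{resolved} {'is' if allowed else 'is NOT'} within the published gateway subnets")
+```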
++
+| **Region name** | **Gateway IP address(es)** | **Gateway IP address subnets** |
+|:-|:-|:--|
+| Australia Central | 20.36.105.32 | 20.36.105.32/29, 20.53.48.96/27 |
+| Australia Central 2 | 20.36.113.32 | 20.36.113.32/29, 20.53.56.32/27 |
+| Australia East | 13.70.112.32, 40.79.160.32, 40.79.168.32 | 13.70.112.32/29, 40.79.160.32/29, 40.79.168.32/29, 20.53.46.128/27 |
+| Australia Southeast | 13.77.49.33 | 13.77.49.32/29, 104.46.179.160/27 |
+| Brazil South | 191.233.201.8, 191.233.200.16 | 191.234.153.32/27, 191.234.152.32/27, 191.234.157.136/29, 191.233.200.32/29, 191.234.144.32/29, 191.234.142.160/27 |
+| Canada Central | 13.71.168.32 | 13.71.168.32/29, 20.38.144.32/29, 52.246.152.32/29, 20.48.196.32/27 |
+| Canada East | 40.69.105.32 | 40.69.105.32/29, 52.139.106.192/27 |
+| Central US | 104.208.21.192, 13.89.168.192, 52.182.136.192 | 104.208.21.192/29, 13.89.168.192/29, 52.182.136.192/29, 20.40.228.128/27 |
+| China East | 52.130.112.139 | 52.130.112.136/29, 52.130.13.96/27 |
+| China East 2 | 40.73.82.1, 52.130.120.89 | 52.130.120.88/29, 52.130.7.0/27 |
+| China East 3 | 52.130.128.89 | 52.130.128.88/29, 40.72.77.128/27 |
+| China North | 52.130.128.89 | 52.130.128.88/29, 40.72.77.128/27 |
+| China North 2 | 40.73.50.0 | 52.130.40.64/29, 52.130.21.160/27 |
+| China North 3 | 13.75.32.192, 13.75.33.192 | 13.75.32.192/29, 13.75.33.192/29 |
+| East Asia | 13.75.33.20, 13.75.33.21 | 20.205.77.176/29, 20.205.83.224/29, 20.205.77.200/29, 13.75.32.192/29, 13.75.33.192/29, 20.195.72.32/27 |
+| East US | 20.42.65.64, 20.42.73.0, 52.168.116.64 | 20.42.65.64/29, 20.42.73.0/29, 52.168.116.64/29, 20.62.132.160/27 |
+| East US 2 | 104.208.150.192, 40.70.144.192, 52.167.104.192 | 104.208.150.192/29, 40.70.144.192/29, 52.167.104.192/29, 20.62.58.128/27 |
+| France Central | 40.79.129.1 | 40.79.128.32/29, 40.79.136.32/29, 40.79.144.32/29, 20.43.47.192/27 |
+| France South | 40.79.176.40 | 40.79.176.40/29, 40.79.177.32/29, 52.136.185.0/27 |
+| Germany West Central | 51.116.152.0 | 51.116.152.32/29, 51.116.240.32/29, 51.116.248.32/29, 51.116.149.32/27 |
+| India Central | 104.211.86.32, 20.192.96.33 | 40.80.48.32/29, 104.211.86.32/29, 20.192.96.32/29, 20.192.43.160/27 |
+| India South | 40.78.192.32 | 40.78.192.32/29, 40.78.193.32/29, 52.172.113.96/27 |
+| India West | 104.211.144.32 | 104.211.144.32/29, 104.211.145.32/29, 52.136.53.160/27 |
+| Japan East | 40.79.184.8, 40.79.192.23 | 13.78.104.32/29, 40.79.184.32/29, 40.79.192.32/29, 20.191.165.160/27 |
+| Japan West | 40.74.96.6 | 20.18.179.192/29, 40.74.96.32/29, 20.189.225.160/27 |
+| Korea Central | 52.231.17.13 | 20.194.64.32/29, 20.44.24.32/29, 52.231.16.32/29, 20.194.73.64/27 |
+| Korea South | 52.231.145.3 | 52.231.151.96/27, 52.231.151.88/29, 52.231.145.0/29, 52.147.112.160/27 |
+| North Central US | 52.162.104.35, 52.162.104.36 | 52.162.105.200/29, 20.125.171.192/29, 52.162.105.192/29, 20.49.119.32/27 |
+| North Europe | 52.138.224.6, 52.138.224.7 | 13.69.233.136/29, 13.74.105.192/29, 52.138.229.72/29, 52.146.133.128/27 |
+| Norway East | 51.120.96.0 | 51.120.208.32/29, 51.120.104.32/29, 51.120.96.32/29, 51.120.232.192/27 |
+| Norway West | 51.120.216.0 | 51.120.217.32/29, 51.13.136.224/27 |
+| South Africa North | 102.133.152.0 | 102.133.120.32/29, 102.133.152.32/29, 102.133.248.32/29, 102.133.221.224/27 |
+| South Africa West | 102.133.24.0 | 102.133.25.32/29, 102.37.80.96/27 |
+| South Central US | 20.45.120.0 | 20.45.121.32/29, 20.49.88.32/29, 20.49.89.32/29, 40.124.64.136/29, 20.65.132.160/27 |
+| Southeast Asia | 23.98.80.12, 40.78.233.2 | 13.67.16.192/29, 23.98.80.192/29, 40.78.232.192/29, 20.195.65.32/27 |
+| Sweden Central | 51.12.96.32 | 51.12.96.32/29, 51.12.232.32/29, 51.12.224.32/29, 51.12.46.32/27 |
+| Sweden South | 51.12.200.32 | 51.12.201.32/29, 51.12.200.32/29, 51.12.198.32/27 |
+| Switzerland North | 51.107.56.0 | 51.107.56.32/29, 51.103.203.192/29, 20.208.19.192/29, 51.107.242.32/27 |
+| Switzerland West | 51.107.152.0 | 51.107.153.32/29, 51.107.250.64/27 |
+| UAE Central | 20.37.72.64 | 20.37.72.96/29, 20.37.73.96/29, 20.37.71.64/27 |
+| UAE North | 65.52.248.0 | 20.38.152.24/29, 40.120.72.32/29, 65.52.248.32/29, 20.38.143.64/27 |
+| UK South | 51.105.64.0 | 51.105.64.32/29, 51.105.72.32/29, 51.140.144.32/29, 51.143.209.224/27 |
+| UK West | 51.140.208.98 | 51.140.208.96/29, 51.140.209.32/29, 20.58.66.128/27 |
+| West Central US | 13.71.193.34 | 13.71.193.32/29, 20.69.0.32/27 |
+| West Europe | 13.69.105.208, 104.40.169.187 | 104.40.169.32/29, 13.69.112.168/29, 52.236.184.32/29, 20.61.99.192/27 |
+| West US | 13.86.216.212, 13.86.217.212 | 20.168.163.192/29, 13.86.217.224/29, 20.66.3.64/27 |
+| West US 2 | 13.66.136.192 | 13.66.136.192/29, 40.78.240.192/29, 40.78.248.192/29, 20.51.9.128/27 |
+| West US 3 | 20.150.184.2 | 20.150.168.32/29, 20.150.176.32/29, 20.150.184.32/29, 20.150.241.128/27 |
## Connection redirection
This indicates that your applications connect to server using static IP address
### Is there any impact for my application connections?
-This maintenance is just a DNS change, so it's transparent to the client. Once the DNS cache is refreshed in the client (automatically done by operation system), all the new connections connect to the new IP address and all the existing connections will still work fine until the old IP address is fully decommissioned, which happens several weeks later. And the retry logic isn't required for this case, but it's good to see the application have retry logic configured. Either use FQDN to connect to the database server or enable list the new 'Gateway IP addresses' in your application connection string.
-This maintenance operation won't drop the existing connections. It only makes the new connection requests go to new gateway ring.
+This maintenance is just a DNS change, so it's transparent to the client. Once the DNS cache is refreshed in the client (done automatically by the operating system), all new connections connect to the new IP address, and all existing connections still work fine until the old IP address is fully decommissioned, which happens several weeks later. Retry logic isn't required for this case, but it's good for the application to have retry logic configured. Use the FQDN to connect to the database server in your application connection string. This maintenance operation won't drop existing connections; it only makes new connection requests go to the new gateway ring.
### Can I request for a specific time window for the maintenance?
mysql Tutorial Deploy Wordpress On Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-deploy-wordpress-on-aks.md
Open a web browser to the external IP address of your service to see your WordPr
> [!NOTE] >
-> - Currently the WordPress site is not using HTTPS. It is recommended to [ENABLE TLS with your own certificates](../../aks/ingress-own-tls.md).
-> - You can enable [HTTP routing](../../aks/http-application-routing.md) for your cluster.
+> - The WordPress site isn't configured to use HTTPS. For more information about HTTPS and how to configure application routing for AKS, see [Managed NGINX ingress with the application routing add-on](../../aks/app-routing.md).
## Clean up the resources
mysql Concepts Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-connectivity-architecture.md
As part of ongoing service maintenance, we'll periodically refresh compute hardw
The following table lists the gateway IP addresses of the Azure Database for MySQL gateway for all data regions. The most up-to-date information about the gateway IP addresses for each region is maintained in this table. The columns represent the following:
-* **Region Name:** This column lists the name of Azure region where Azure Database for PostgreSQL - Single Server is offered.
* **Gateway IP address subnets:** This column lists the IP address subnets of the gateway rings located in the particular region. As we retire older gateway hardware, we recommend that you open the client-side firewall to allow outbound traffic for the IP address subnets in the region you're operating.
-* **Gateway IP addresses (decommissioning):** This column lists the IP addresses of the gateways hosted on an older generation of hardware that is being decommissioned right now. If you're provisioning a new server, you can ignore these IP addresses. If you have an existing server, continue to retain the outbound rule for the firewall for these IP addresses as we haven't decommissioned it yet. If you drop the firewall rules for these IP addresses, you may get connectivity errors. Instead, you're expected to proactively add the new IP addresses listed in Gateway IP addresses column to the outbound firewall rule as soon as you receive the notification for decommissioning. This will ensure when your server is migrated to latest gateway hardware, there's no interruptions in connectivity to your server.
-* **Gateway IP addresses (decommissioned):** This column lists the IP addresses of the gateway rings, which are decommissioned and are no longer in operations. You can safely remove these IP addresses from your outbound firewall rule.
--
-| **Region name** | **Gateway IP address subnets** | **Gateway IP addresses (decommissioning)** | **Gateway IP addresses (decommissioned)** |
-|:--|:--|:|:|
-| Australia Central | 20.36.105.32/29 | | |
-| Australia Central 2 | 20.36.113.32/29 | | |
-| Australia East | 13.70.112.32/29, 40.79.160.32/29, 40.79.168.32/29 | 13.75.149.87 | |
-| Australia South East | 13.77.49.32/29 | 13.73.109.251 | |
-| Brazil South | 191.233.200.32/29, 191.234.144.32/29 | | 104.41.11.5 |
-| Canada Central | 13.71.168.32/29, 20.38.144.32/29, 52.246.152.32/29 | | |
-| Canada East | 40.69.105.32/29 | 40.86.226.166 | |
-| Central US | 104.208.21.192/29, 13.89.168.192/29, 52.182.136.192/29 | 13.67.215.62 | |
-| China East | 52.130.112.136/29 | | |
-| China East 2 | 52.130.120.88/29 | | |
-| China East 3 | 52.130.128.88/29 | | |
-| China North | 52.130.128.88/29 | | |
-| China North 2 | 52.130.40.64/29 | | |
-| China North 3 | 13.75.32.192/29, 13.75.33.192/29 | | |
-| East Asia | 13.75.32.192/29, 13.75.33.192/29 | | |
-| East US | 20.42.65.64/29, 20.42.73.0/29, 52.168.116.64/29 | 40.121.158.30 | 191.238.6.43 |
-| East US 2 | 104.208.150.192/29, 40.70.144.192/29, 52.167.104.192/29 | 52.177.185.181 | |
-| France Central | 40.79.136.32/29, 40.79.144.32/29 | | |
-| France South | 40.79.176.40/29, 40.79.177.32/29 | | |
-| Germany West Central | 51.116.152.32/29, 51.116.240.32/29, 51.116.248.32/29 | | |
-| India Central | 104.211.86.32/29, 20.192.96.32/29 | | |
-| India South | 40.78.192.32/29, 40.78.193.32/29 | | |
-| India West | 104.211.144.32/29, 104.211.145.32/29 | 104.211.160.80 | |
-| Japan East | 13.78.104.32/29, 40.79.184.32/29, 40.79.192.32/29 | 13.78.61.196 | |
-| Japan West | 40.74.96.32/29 | 104.214.148.156 | |
-| Korea Central | 20.194.64.32/29, 20.44.24.32/29, 52.231.16.32/29 | 52.231.32.42 | |
-| Korea South | 52.231.145.0/29 | 52.231.200.86 | |
-| North Central US | 52.162.105.192/29 | 23.96.178.199 | |
-| North Europe | 13.69.233.136/29, 13.74.105.192/29, 52.138.229.72/29 | 40.113.93.91 | 191.235.193.75 |
-| South Africa North | 102.133.120.32/29, 102.133.152.32/29, 102.133.248.32/29 | | |
-| South Africa West | 102.133.25.32/29 | | |
-| South Central US | 20.45.121.32/29, 20.49.88.32/29, 20.49.89.32/29, 40.124.64.136/29 | 13.66.62.124 | 23.98.162.75 |
-| South East Asia | 13.67.16.192/29, 23.98.80.192/29, 40.78.232.192/29 | 104.43.15.0 | |
-| Switzerland North | 51.107.56.32/29, 51.103.203.192/29, 20.208.19.192/29, 51.107.242.32/27 | | |
-| Switzerland West | 51.107.153.32/29 | | |
-| UAE Central | 20.37.72.96/29, 20.37.73.96/29 | | |
-| UAE North | 40.120.72.32/29, 65.52.248.32/29 | | |
-| UK South | 51.105.64.32/29, 51.105.72.32/29, 51.140.144.32/29 | | |
-| UK West | 51.140.208.96/29, 51.140.209.32/29 | | |
-| West Central US | 13.71.193.32/29 | 13.78.145.25 | |
-| West Europe | 104.40.169.32/29, 13.69.112.168/29, 52.236.184.32/29 | 40.68.37.158 | 191.237.232.75 |
-| West US | 13.86.217.224/29 | 104.42.238.205 | 23.99.34.75 |
-| West US 2 | 13.66.136.192/29, 40.78.240.192/29, 40.78.248.192/29 | | |
-| West US 3 | 20.150.168.32/29, 20.150.176.32/29, 20.150.184.32/29 | | |
+* **Gateway IP addresses**: Periodically, individual **Gateway IP addresses** will be retired and traffic will be migrated to corresponding **Gateway IP address subnets**.
+
+We strongly encourage customers to move away from relying on any individual Gateway IP address (since these will be retired in the future). Instead, allow network traffic to reach both the individual Gateway IP addresses and the Gateway IP address subnets in a region.
+
+| **Region name** | **Gateway IP address(es)** | **Gateway IP address subnets** |
+|:-|:-|:--|
+| Australia Central | 20.36.105.32 | 20.36.105.32/29, 20.53.48.96/27 |
+| Australia Central 2 | 20.36.113.32 | 20.36.113.32/29, 20.53.56.32/27 |
+| Australia East | 13.70.112.32, 40.79.160.32, 40.79.168.32 | 13.70.112.32/29, 40.79.160.32/29, 40.79.168.32/29, 20.53.46.128/27 |
+| Australia Southeast | 13.77.49.33 | 13.77.49.32/29, 104.46.179.160/27 |
+| Brazil South | 191.233.201.8, 191.233.200.16 | 191.234.153.32/27, 191.234.152.32/27, 191.234.157.136/29, 191.233.200.32/29, 191.234.144.32/29, 191.234.142.160/27 |
+| Canada Central | 13.71.168.32 | 13.71.168.32/29, 20.38.144.32/29, 52.246.152.32/29, 20.48.196.32/27 |
+| Canada East | 40.69.105.32 | 40.69.105.32/29, 52.139.106.192/27 |
+| Central US | 104.208.21.192, 13.89.168.192, 52.182.136.192 | 104.208.21.192/29, 13.89.168.192/29, 52.182.136.192/29, 20.40.228.128/27 |
+| China East | 52.130.112.139 | 52.130.112.136/29, 52.130.13.96/27 |
+| China East 2 | 40.73.82.1, 52.130.120.89 | 52.130.120.88/29, 52.130.7.0/27 |
+| China East 3 | 52.130.128.89 | 52.130.128.88/29, 40.72.77.128/27 |
+| China North | 52.130.128.89 | 52.130.128.88/29, 40.72.77.128/27 |
+| China North 2 | 40.73.50.0 | 52.130.40.64/29, 52.130.21.160/27 |
+| China North 3 | 13.75.32.192, 13.75.33.192 | 13.75.32.192/29, 13.75.33.192/29 |
+| East Asia | 13.75.33.20, 13.75.33.21 | 20.205.77.176/29, 20.205.83.224/29, 20.205.77.200/29, 13.75.32.192/29, 13.75.33.192/29, 20.195.72.32/27 |
+| East US | 20.42.65.64, 20.42.73.0, 52.168.116.64 | 20.42.65.64/29, 20.42.73.0/29, 52.168.116.64/29, 20.62.132.160/27 |
+| East US 2 | 104.208.150.192, 40.70.144.192, 52.167.104.192 | 104.208.150.192/29, 40.70.144.192/29, 52.167.104.192/29, 20.62.58.128/27 |
+| France Central | 40.79.129.1 | 40.79.128.32/29, 40.79.136.32/29, 40.79.144.32/29, 20.43.47.192/27 |
+| France South | 40.79.176.40 | 40.79.176.40/29, 40.79.177.32/29, 52.136.185.0/27 |
+| Germany West Central | 51.116.152.0 | 51.116.152.32/29, 51.116.240.32/29, 51.116.248.32/29, 51.116.149.32/27 |
+| India Central | 104.211.86.32, 20.192.96.33 | 40.80.48.32/29, 104.211.86.32/29, 20.192.96.32/29, 20.192.43.160/27 |
+| India South | 40.78.192.32 | 40.78.192.32/29, 40.78.193.32/29, 52.172.113.96/27 |
+| India West | 104.211.144.32 | 104.211.144.32/29, 104.211.145.32/29, 52.136.53.160/27 |
+| Japan East | 40.79.184.8, 40.79.192.23 | 13.78.104.32/29, 40.79.184.32/29, 40.79.192.32/29, 20.191.165.160/27 |
+| Japan West | 40.74.96.6 | 20.18.179.192/29, 40.74.96.32/29, 20.189.225.160/27 |
+| Korea Central | 52.231.17.13 | 20.194.64.32/29, 20.44.24.32/29, 52.231.16.32/29, 20.194.73.64/27 |
+| Korea South | 52.231.145.3 | 52.231.151.96/27, 52.231.151.88/29, 52.231.145.0/29, 52.147.112.160/27 |
+| North Central US | 52.162.104.35, 52.162.104.36 | 52.162.105.200/29, 20.125.171.192/29, 52.162.105.192/29, 20.49.119.32/27 |
+| North Europe | 52.138.224.6, 52.138.224.7 | 13.69.233.136/29, 13.74.105.192/29, 52.138.229.72/29, 52.146.133.128/27 |
+| Norway East | 51.120.96.0 | 51.120.208.32/29, 51.120.104.32/29, 51.120.96.32/29, 51.120.232.192/27 |
+| Norway West | 51.120.216.0 | 51.120.217.32/29, 51.13.136.224/27 |
+| South Africa North | 102.133.152.0 | 102.133.120.32/29, 102.133.152.32/29, 102.133.248.32/29, 102.133.221.224/27 |
+| South Africa West | 102.133.24.0 | 102.133.25.32/29, 102.37.80.96/27 |
+| South Central US | 20.45.120.0 | 20.45.121.32/29, 20.49.88.32/29, 20.49.89.32/29, 40.124.64.136/29, 20.65.132.160/27 |
+| Southeast Asia | 23.98.80.12, 40.78.233.2 | 13.67.16.192/29, 23.98.80.192/29, 40.78.232.192/29, 20.195.65.32/27 |
+| Sweden Central | 51.12.96.32 | 51.12.96.32/29, 51.12.232.32/29, 51.12.224.32/29, 51.12.46.32/27 |
+| Sweden South | 51.12.200.32 | 51.12.201.32/29, 51.12.200.32/29, 51.12.198.32/27 |
+| Switzerland North | 51.107.56.0 | 51.107.56.32/29, 51.103.203.192/29, 20.208.19.192/29, 51.107.242.32/27 |
+| Switzerland West | 51.107.152.0 | 51.107.153.32/29, 51.107.250.64/27 |
+| UAE Central | 20.37.72.64 | 20.37.72.96/29, 20.37.73.96/29, 20.37.71.64/27 |
+| UAE North | 65.52.248.0 | 20.38.152.24/29, 40.120.72.32/29, 65.52.248.32/29, 20.38.143.64/27 |
+| UK South | 51.105.64.0 | 51.105.64.32/29, 51.105.72.32/29, 51.140.144.32/29, 51.143.209.224/27 |
+| UK West | 51.140.208.98 | 51.140.208.96/29, 51.140.209.32/29, 20.58.66.128/27 |
+| West Central US | 13.71.193.34 | 13.71.193.32/29, 20.69.0.32/27 |
+| West Europe | 13.69.105.208, 104.40.169.187 | 104.40.169.32/29, 13.69.112.168/29, 52.236.184.32/29, 20.61.99.192/27 |
+| West US | 13.86.216.212, 13.86.217.212 | 20.168.163.192/29, 13.86.217.224/29, 20.66.3.64/27 |
+| West US 2 | 13.66.136.192 | 13.66.136.192/29, 40.78.240.192/29, 40.78.248.192/29, 20.51.9.128/27 |
+| West US 3 | 20.150.184.2 | 20.150.168.32/29, 20.150.176.32/29, 20.150.184.32/29, 20.150.241.128/27 |
You receive an email to inform you when we start the maintenance work. The maint
This indicates that your applications connect to the server using a static IP address instead of the FQDN. Review connection strings, connection pooling settings, AKS settings, or even the source code.

### Is there any impact for my application connections?
-This maintenance is just a DNS change, so it's transparent to the client. Once the DNS cache is refreshed in the client (automatically done by operation system), all the new connections connect to the new IP address and all the existing connections will still work fine until the old IP address fully get decommissioned, which happens several weeks later. And the retry logic isn't required for this case, but it's good to see the application have retry logic configured. Either use FQDN to connect to the database server or enable list the new 'Gateway IP addresses' in your application connection string.
-This maintenance operation won't drop the existing connections. It only makes the new connection requests go to new gateway ring.
+This maintenance is just a DNS change, so it's transparent to the client. Once the DNS cache is refreshed in the client (done automatically by the operating system), all new connections connect to the new IP address, and all existing connections still work fine until the old IP address is fully decommissioned, which happens several weeks later. Retry logic isn't required for this case, but it's good for the application to have retry logic configured. Use the FQDN to connect to the database server in your application connection string. This maintenance operation won't drop existing connections; it only makes new connection requests go to the new gateway ring.
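+
+To stay resilient to this kind of maintenance, connect by FQDN rather than by a cached IP address. The following is a minimal sketch using the PyMySQL driver; the server name, credentials, and CA certificate path are placeholders.
+
+```python
+# Minimal sketch: connect by FQDN so that DNS changes during gateway
+# maintenance are picked up transparently. All values are placeholders.
+import pymysql
+
+connection = pymysql.connect(
+    host="<servername>.mysql.database.azure.com",  # FQDN, not an IP address
+    user="<username>@<servername>",
+    password="<password>",
+    database="<database>",
+    ssl={"ca": "/path/to/ca-certificate.crt.pem"},  # placeholder CA path
+)
+with connection.cursor() as cursor:
+    cursor.execute("SELECT VERSION()")
+    print(cursor.fetchone())
+connection.close()
+```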
### Can I request for a specific time window for the maintenance?
-As the migration should be transparent and no impact to customer's connectivity, we expect there will be no issue for Most users. Review your application proactively and ensure that you either use FQDN to connect to the database server or enable list the new 'Gateway IP addresses' in your application connection string.
+As the migration should be transparent and have no impact on customers' connectivity, we expect no issues for most users. Review your application proactively and ensure that you either use the FQDN to connect to the database server or allowlist the new 'Gateway IP addresses' in your application connection string.
### I'm using private link, will my connections get affected?

No, this is a gateway hardware decommission and has no relation to private link or private IP addresses. It only affects the public IP addresses mentioned under the decommissioning IP addresses.
operator-insights Change Ingestion Agent Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/change-ingestion-agent-configuration.md
+
+ Title: Change configuration for ingestion agents for Azure Operator Insights
+description: Learn how to make and roll back configuration changes for Azure Operator Insights ingestion agents.
+++++ Last updated : 02/29/2024+
+#CustomerIntent: As someone managing an agent that has already been set up, I want to update its configuration so that data products in Azure Operator Insights receive the correct data.
++
+# Change configuration for Azure Operator Insights ingestion agents
+
+The ingestion agent is a software package that is installed onto a Linux Virtual Machine (VM) owned and managed by you. You might need to change the agent configuration.
+
+In this article, you'll change your ingestion agent configuration and roll back a configuration change.
+
+## Prerequisites
+
+- Using the documentation for your data product, check for required or recommended configuration for the ingestion agent.
+- See [Configuration reference for Azure Operator Insights ingestion agent](ingestion-agent-configuration-reference.md) for full details of the configuration options.
+
+## Update agent configuration
+
+> [!WARNING]
+> Changing the configuration requires restarting the agent. For the MCC EDR source, a small number of EDRs being handled might be dropped. It's not possible to restart gracefully without dropping any data. For safety, update agents one at a time, only updating the next when you're sure the previous update was successful.
+
+> [!WARNING]
+> If you change the pipeline ID for an SFTP pull source, the agent treats it as a new source and might upload duplicate files with the new pipeline ID. To avoid this, add the `exclude_before_time` parameter to the file source configuration. For example, if you configure `exclude_before_time: "2024-01-01T00:00:00-00:00"` then any files last modified before midnight on January 1, 2024 UTC will be ignored by the agent.
+
+If you need to change the agent's configuration, carry out the following steps.
+
+1. Save a copy of the existing */etc/az-aoi-ingestion/config.yaml* configuration file.
+1. Edit the configuration file to change the config values.
+1. Restart the agent.
+    ```bash
+ sudo systemctl restart az-aoi-ingestion.service
+ ```
+
+## Roll back configuration changes
+
+If a configuration change fails:
+
+1. Copy the backed-up configuration file from before the change to the */etc/az-aoi-ingestion/config.yaml* file.
+1. Restart the agent.
+    ```bash
+ sudo systemctl restart az-aoi-ingestion.service
+ ```
+
+## Related content
+
+Learn how to:
+
+- [Monitor and troubleshoot ingestion agents](monitor-troubleshoot-ingestion-agent.md).
+- [Upgrade ingestion agents](upgrade-ingestion-agent.md).
+- [Rotate secrets for ingestion agents](rotate-secrets-for-ingestion-agent.md).
operator-insights Concept Mcc Data Product https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/concept-mcc-data-product.md
The data produced by the MCC varies according to the functionality. This variati
The following data types are provided for all Quality of Experience - Affirmed MCC Data Products.

-- `edr`: This data type handles EDRs from the MCC.
-- `edr-sanitized`: This data type contains the same information as `edr` but with personal data suppressed to support operators' compliance with privacy legislation.
+- `edr` contains data from the Event Data Records (EDRs) written by the MCC network elements. EDRs record each significant event arising during calls or sessions handled by the MCC. They provide a comprehensive record of what happened, allowing operators to explore both individual problems and more general patterns.
+- `edr-sanitized` contains data from the `edr` data type but with personal data suppressed. Sanitized data types can be used to support data analysis while also enforcing subscriber privacy.
- `edr-validation`: This data type contains a subset of performance management statistics and provides you with the ability to optionally ingest a minimum number of PMstats tables for a data quality check.
- `device`: This optional data type contains device data (for example, device model, make and capabilities) that the Data Product can use to enrich the MCC Event Data Records. To use this data type, you must upload the device reference data in a CSV file. The CSV must conform to the [Device reference schema for the Quality of Experience Affirmed MCC Data Product](device-reference-schema.md).
- `enrichment`: This data type holds the enriched Event Data Records and covers multiple sub data types for precomputed aggregations targeted to accelerate specific dashboards, granularities, and queries. These multiple sub data types include:
The following data types are provided for all Quality of Experience - Affirmed M
To use the Quality of Experience - Affirmed MCC Data Product:

1. Deploy the Data Product by following [Create an Azure Operator Insights Data Product](data-product-create.md).
-1. Configure your network to provide data by setting up an MCC EDR Ingestion Agent. The MCC EDR Ingestion Agent uploads EDRs from your network to Azure Operator Insights. See [Create and configure MCC EDR Ingestion Agents for Azure Operator Insights](how-to-install-mcc-edr-agent.md). Alternatively, you can provide your own ingestion agent.
+1. Configure your network to provide data by setting up an Azure Operator Insights ingestion agent on a virtual machine (VM).
+
+ 1. Read [Requirements for the Azure Operator Insights ingestion agent](#requirements-for-the-azure-operator-insights-ingestion-agent).
+ 1. [Install the Azure Operator Insights ingestion agent and configure it to upload data](set-up-ingestion-agent.md).
+
+ Alternatively, you can provide your own ingestion agent.
+
+## Requirements for the Azure Operator Insights ingestion agent
+
+Use the VM requirements to set up a suitable VM for the ingestion agent. Use the example configuration to configure the ingestion agent to upload data to the Data Product, as part of following [Install the Azure Operator Insights ingestion agent and configure it to upload data](set-up-ingestion-agent.md).
+
+### VM requirements
+
+Each agent instance must run on its own Linux VM. The number of VMs needed depends on the scale and redundancy characteristics of your deployment. The recommended specification below can achieve a throughput of 1.5 Gbps on a standard D4s_v3 Azure VM. For any other VM spec, we recommend that you measure throughput at the network design stage.
+
+Latency on the MCC to agent connection can negatively affect throughput. Latency should usually be low if the MCC and agent are colocated or the agent runs in an Azure region close to the MCC.
+
+Talk to the Affirmed Support Team to determine your requirements.
+
+Each VM running the agent must meet the following minimum specifications.
+
+| Resource | Requirements |
+|-||
+| OS | Red Hat Enterprise Linux 8.6 or later, or Oracle Linux 8.8 or later |
+| vCPUs | 4 |
+| Memory | 32 GB |
+| Disk | 64 GB |
+| Network | Connectivity from MCCs and to Azure |
+| Software | systemd, logrotate, and zip installed |
+| Other | SSH or alternative access to run shell commands |
+| DNS | (Preferable) Ability to resolve Microsoft hostnames. If not, you need to perform extra configuration when you set up the agent, as described in [Map Microsoft hostnames to IP addresses for ingestion agents that can't resolve public hostnames](map-hostnames-ip-addresses.md). |
+
+#### Deploying multiple VMs for fault tolerance
+
+The ingestion agent is designed to be highly reliable and resilient to low levels of network disruption. If an unexpected error occurs, the agent restarts and provides service again as soon as it's running.
+
+The agent doesn't buffer data, so if a persistent error or extended connectivity problems occur, EDRs are dropped.
+
+For extra fault tolerance, you can deploy multiple instances of the ingestion agent and configure the MCC to switch to a different instance if the original instance becomes unresponsive, or to share EDR traffic across a pool of agents. For more information, see the [Affirmed Networks Active Intelligent vProbe System Administration Guide](https://manuals.metaswitch.com/vProbe/latest/vProbe_System_Admin/Content/02%20AI-vProbe%20Configuration/Generating_SESSION__BEARER__FLOW__and_HTTP_Transac.htm) (only available to customers with Affirmed support) or speak to the Affirmed Networks Support Team.
+
+### Required agent configuration
+
+Use the information in this section when [setting up the agent and configuring the agent software](set-up-ingestion-agent.md#configure-the-agent-software).
+
+The ingestion agent must use MCC EDRs as a data source.
+
+|Information | Configuration setting for Azure Operator Insights ingestion agent | Value |
+|---|---|---|
+|Container in the Data Product input storage account |`sink.container_name` | `edr` |
+
+> [!IMPORTANT]
+> `sink.container_name` must be set exactly as specified here. You can change other configuration to meet your requirements.
+
+For more information about all the configuration options, see [Configuration reference for Azure Operator Insights ingestion agent](ingestion-agent-configuration-reference.md).
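+
+For example, the sink section of the agent configuration for this Data Product might look like the following sketch, based on the [Configuration reference for Azure Operator Insights ingestion agent](ingestion-agent-configuration-reference.md). The `secret_provider` name is illustrative and must match a secret provider defined in your own configuration.
+
+```
+sink:
+  # Must be exactly `edr` for the Quality of Experience - Affirmed MCC Data Product.
+  container_name: edr
+  auth:
+    type: sas_token
+    # Illustrative name; must match a secret provider in your configuration.
+    secret_provider: data_product_keyvault
+    # Secret created by the Data Product; don't change it.
+    secret_name: input-storage-sas
+```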
+
+### Configure Affirmed MCCs
+
+Once the agents are installed and running, configure the MCCs to send EDRs to them.
+
+1. Follow the steps under "Generating SESSION, BEARER, FLOW, and HTTP Transaction EDRs" in the [Affirmed Networks Active Intelligent vProbe System Administration Guide](https://manuals.metaswitch.com/vProbe/latest/vProbe_System_Admin/Content/02%20AI-vProbe%20Configuration/Generating_SESSION__BEARER__FLOW__and_HTTP_Transac.htm) (only available to customers with Affirmed support), making the following changes:
+
+ - Replace the IP addresses of the MSFs in MCC configuration with the IP addresses of the VMs running the ingestion agents.
+
+  - Confirm that the following EDR server parameters are set. (A sketch of the matching agent-side settings follows this list.)
+
+ - port: 36001
+ - encoding: protobuf
+ - keep-alive: 2 seconds
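+
+On the agent side, these parameters correspond to the MCC EDR source settings described in the [Configuration reference for Azure Operator Insights ingestion agent](ingestion-agent-configuration-reference.md). A minimal sketch, assuming the default port:
+
+```
+source:
+  mcc_edrs:
+    listener:
+      # Must match the port that the MCC EDR server is configured to send to.
+      port: 36001
+```
+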
## Related content
operator-insights Concept Monitoring Mcc Data Product https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/concept-monitoring-mcc-data-product.md
Title: Monitoring - Affirmed MCC Data Product - Azure Operator Insights
-description: This article gives an overview of the Monitoring - Affirmed MCC Data Product provided by Azure Operator Insights
+description: This article gives an overview of the Monitoring - Affirmed MCC Data Product provided by Azure Operator Insights.
Last updated 12/06/2023
# Monitoring - Affirmed MCC Data Product overview
-The Monitoring - Affirmed MCC Data Product supports data analysis and insight for operators of the Affirmed Networks Mobile Content Cloud (MCC). It ingests performance management data (performance statistics) from MCC network elements. It then digests and enriches this data to provide visualizations for the operator and to provide access to the underlying enriched data for operator data scientists.
+The Monitoring - Affirmed MCC Data Product supports data analysis and insight for operators of the Affirmed Networks Mobile Content Cloud (MCC). It ingests performance management data (performance statistics) from MCC network elements. It then digests and enriches this data to provide visualizations for the operator and to provide access to the underlying enriched data for operator data scientists.
## Background
The Monitoring - Affirmed MCC Data Product supports all of the MCC variants desc
The following data type is provided as part of the Monitoring - Affirmed MCC Data Product.
-- *pmstats* contains performance management data reported by the MCC management node, giving insight into the performance characteristics of the MCC network elements.
+- `pmstats` contains performance management data reported by the MCC management node, giving insight into the performance characteristics of the MCC network elements.
## Setup

To use the Monitoring - Affirmed MCC Data Product:

1. Deploy the Data Product by following [Create an Azure Operator Insights Data Product](data-product-create.md).
-1. Configure your network to provide data by setting up an SFTP Ingestion Agent. The agent collects data from an SFTP server in your network and uploads it to Azure Operator Insights. See [SFTP Ingestion Agent overview](sftp-agent-overview.md) and [Create and configure SFTP Ingestion Agents for Azure Operator Insights](how-to-install-sftp-agent.md). Alternatively, you can provide your own ingestion agent.
+1. Configure your network to provide data by setting up an Azure Operator Insights ingestion agent on a virtual machine (VM).
+
+ 1. Read [Requirements for the Azure Operator Insights ingestion agent](#requirements-for-the-azure-operator-insights-ingestion-agent).
+ 1. [Install the Azure Operator Insights ingestion agent and configure it to upload data](set-up-ingestion-agent.md).
+
+ Alternatively, you can provide your own ingestion agent.
+
+## Requirements for the Azure Operator Insights ingestion agent
+
+Use the VM requirements to set up a suitable VM for the ingestion agent. Use the example configuration to configure the ingestion agent to upload data to the Data Product, as part of following [Install the Azure Operator Insights ingestion agent and configure it to upload data](set-up-ingestion-agent.md).
+
+### Choosing agents and VMs
+
+An ingestion agent collects files from _ingestion pipelines_ that you configure on it. Ingestion pipelines include the details of the SFTP server, the files to collect from it and how to manage those files.
+
+You must choose how to set up your agents, pipelines, and VMs using the following rules.
+
+- Pipelines must not overlap, meaning that they must not collect the same files from the same servers.
+- You must configure each pipeline on exactly one agent. If you configure a pipeline on multiple agents, Azure Operator Insights receives duplicate data.
+- Each agent must run on a separate VM.
+- The number of agents and therefore VMs also depends on:
+ - The scale and redundancy characteristics of your deployment.
+ - The number and size of the files, and how frequently the files are copied.
+
+As a guide, this table documents the throughput that the recommended specification on a standard D4s_v3 Azure VM can achieve.
+
+| File count | File size (KiB) | Time (seconds) | Throughput (Mbps) |
+|---|---|---|---|
+| 64 | 16,384 | 6 | 1,350 |
+| 1,024 | 1,024 | 10 | 910 |
+| 16,384 | 64 | 80 | 100 |
+| 65,536 | 16 | 300 | 25 |
+
+For example, if you need to collect from two file sources, you could do either of the following (a configuration sketch for the first option follows this list):
+
+- Deploy one VM with one agent that collects from both file sources.
+- Deploy two VMs, each with one agent. Each agent (and therefore each VM) collects from one file source.
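+
+In configuration terms, the first option corresponds to a single agent whose configuration defines two pipelines, one per file source, following the skeleton in the [Configuration reference for Azure Operator Insights ingestion agent](ingestion-agent-configuration-reference.md). The pipeline IDs here are illustrative:
+
+```
+pipelines:
+  - id: file-source-1
+    source:
+      sftp_pull:
+        <sftp pull source configuration>
+    sink:
+      <sink configuration>
+  - id: file-source-2
+    source:
+      sftp_pull:
+        <sftp pull source configuration>
+    sink:
+      <sink configuration>
+```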
+
+### VM requirements
+
+Each Linux VM running the agent must meet the following minimum specifications.
+
+| Resource | Requirements |
+|----------|--------------|
+| OS | Red Hat Enterprise Linux 8.6 or later, or Oracle Linux 8.8 or later |
+| vCPUs | Minimum 4, recommended 8 |
+| Memory | Minimum 32 GB |
+| Disk | 30 GB |
+| Network | Connectivity to the SFTP server and to Azure |
+| Software | systemd, logrotate, and zip installed |
+| Other | SSH or alternative access to run shell commands |
+| DNS | (Preferable) Ability to resolve Microsoft hostnames. If not, you need to perform extra configuration when you set up the agent (described in [Map Microsoft hostnames to IP addresses for ingestion agents that can't resolve public hostnames](map-hostnames-ip-addresses.md).) |
+
+### Required agent configuration
+
+Use the information in this section when [setting up the agent and configuring the agent software](set-up-ingestion-agent.md#configure-the-agent-software).
+
+The ingestion agent must use SFTP pull as a data source.
+
+|Information | Configuration setting for Azure Operator Insights ingestion agent | Value |
+|---|---|---|
+|Container in the Data Product input storage account |`sink.container_name` | `pmstats` |
+| [Settling time](ingestion-agent-overview.md#processing-files) | `source.sftp_pull.filtering.settling_time` | `60s` (upload files that haven't been modified in the last 60 seconds) |
+| Schedule for checking for new files | `source.sftp_pull.scheduling.cron` | `0 */5 * * * * *` (every 5 minutes) |
+
+> [!IMPORTANT]
+> `sink.container_name` must be set exactly as specified here. You can change other configuration to meet your requirements.
+
+For more information about all the configuration options, see [Configuration reference for Azure Operator Insights ingestion agent](ingestion-agent-configuration-reference.md).
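+
+For example, the source and sink settings in the table might appear in a pipeline as in the following sketch; the rest of the pipeline configuration is up to you.
+
+```
+source:
+  sftp_pull:
+    filtering:
+      # Upload files that haven't been modified in the last 60 seconds.
+      settling_time: 60s
+    scheduling:
+      # Check for new files every 5 minutes.
+      cron: "0 */5 * * * * *"
+sink:
+  # Must be exactly `pmstats` for the Monitoring - Affirmed MCC Data Product.
+  container_name: pmstats
+```
+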
## Related content
To use the Monitoring - Affirmed MCC Data Product:
- [Azure Operator Insights Data Types](concept-data-types.md) - [Affirmed Networks MCC documentation](https://manuals.metaswitch.com/MCC)
- > [!NOTE]
- > Affirmed Networks login credentials are required to access the MCC product documentation.
+> [!NOTE]
+> Affirmed Networks login credentials are required to access the MCC product documentation.
operator-insights Data Product Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/data-product-create.md
The consumption URL also allows you to write your own Kusto query to get insight
| render linechart
```
-## Delete Azure resources
+## Optionally, delete Azure resources
-When you have finished exploring Azure Operator Insights Data Product, you should delete the resources you've created to avoid unnecessary Azure costs.
+If you're using this data product to explore Azure Operator Insights, you should delete the resources you've created to avoid unnecessary Azure costs.
# [Portal](#tab/azure-portal)
When you have finished exploring Azure Operator Insights Data Product, you shoul
az group delete --name "ResourceGroup"
```
+
+## Next step
+
+Upload data to your data product. If you're planning to upload that data with the Azure Operator Insights ingestion agent:
+
+1. Read the documentation for your data product to determine the requirements.
+1. [Install the Azure Operator Insights ingestion agent and configure it to upload data](set-up-ingestion-agent.md).
operator-insights How To Install Mcc Edr Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/how-to-install-mcc-edr-agent.md
This process assumes that you're connecting to Azure over ExpressRoute and are u
Repeat these steps for each VM onto which you want to install the agent: 1. In an SSH session, change to the directory where the RPM was copied.
-1. Install the RPM:  `sudo dnf install /*.rpm`.  Answer 'y' when prompted.  If there are any missing dependencies, the RPM isn't installed.
+1. Install the RPM:  `sudo dnf install ./*.rpm`.  Answer 'y' when prompted.  If there are any missing dependencies, the RPM isn't installed.
1. Change to the configuration directory: `cd /etc/az-mcc-edr-uploader`
1. Make a copy of the default configuration file: `sudo cp example_config.yaml config.yaml`
1. Edit the *config.yaml* and fill out the fields. Most of them are set to default values and don't need to be changed. The full reference for each parameter is described in [MCC EDR Ingestion Agents configuration reference](mcc-edr-agent-configuration.md). The following parameters must be set:
Repeat these steps for each VM onto which you want to install the agent:
1. **tenant\_id** as your Microsoft Entra ID tenant.
- 2. **identity\_name** as the application ID of the service principle that you created in [Create a service principle](#create-a-service-principal).
+ 2. **identity\_name** as the application ID of the service principal that you created in [Create a service principal](#create-a-service-principal).
3. **cert\_path** as the file path of the base64-encoded pkcs12 certificate for the service principal to authenticate with.
operator-insights How To Install Sftp Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/how-to-install-sftp-agent.md
This process assumes that you're connecting to Azure over ExpressRoute and are u
Repeat these steps for each VM onto which you want to install the agent: 1. In an SSH session, change to the directory where the RPM was copied.
-1. Install the RPM:  `sudo dnf install /*.rpm`.  Answer 'y' when prompted.  If there are any missing dependencies, the RPM won't be installed.
+1. Install the RPM:  `sudo dnf install ./*.rpm`.  Answer 'y' when prompted.  If there are any missing dependencies, the RPM won't be installed.
1. Change to the configuration directory: `cd /etc/az-sftp-uploader`
1. Make a copy of the default configuration file: `sudo cp example_config.yaml config.yaml`
1. Edit the *config.yaml* file and fill out the fields. Start by filling out the parameters that don't depend on the type of Data Product. Many parameters are set to default values and don't need to be changed. The full reference for each parameter is described in [SFTP Ingestion Agents configuration reference](sftp-agent-configuration.md). The following parameters must be set:
operator-insights How To Manage Mcc Edr Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/how-to-manage-mcc-edr-agent.md
To upgrade to a new release of the agent, repeat the following steps on each VM
1. Save a copy of the existing */etc/az-mcc-edr-uploader/config.yaml* configuration file.
-1. Upgrade the RPM: `sudo dnf install \*.rpm`.  Answer 'y' when prompted.  
+1. Upgrade the RPM: `sudo dnf install ./*.rpm`.  Answer 'y' when prompted.  
1. Create a new config file based on the new sample, keeping values from the original. Follow specific instructions in the release notes for the upgrade to ensure the new configuration is generated correctly.
operator-insights How To Manage Sftp Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/how-to-manage-sftp-agent.md
To upgrade to a new release of the agent, repeat the following steps on each VM
2. Save a copy of the existing */etc/az-sftp-uploader/config.yaml* configuration file.
-3. Upgrade the RPM: `sudo dnf install \*.rpm`.  Answer 'y' when prompted.  
+3. Upgrade the RPM: `sudo dnf install ./*.rpm`.  Answer 'y' when prompted.
4. Create a new config file based on the new sample, keeping values from the original. Follow specific instructions in the release notes for the upgrade to ensure the new configuration is generated correctly.
operator-insights Ingestion Agent Configuration Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/ingestion-agent-configuration-reference.md
+
+ Title: Configuration reference for Azure Operator Insights ingestion agent
+description: This article documents the complete set of configuration for the Azure Operator Insights ingestion agent.
+++++ Last updated : 12/06/2023+
+# Configuration reference for Azure Operator Insights ingestion agent
+
+This reference provides the complete set of configuration for the [Azure Operator Insights ingestion agent](ingestion-agent-overview.md), listing all fields with explanatory comments.
+
+Configuration comprises three parts:
+
+- Agent ID.
+- Secrets providers.
+- A list of one or more pipelines, where each pipeline defines an ID, a source, and a sink.
+
+This reference shows two pipelines: one with an MCC EDR source and one with an SFTP pull source.
+
+```
+# A unique identifier for this agent instance. Reserved URL characters must be percent-encoded. It's included in the upload path to the Data Product's input storage account.
+agent_id: agent01
+# Config for secrets providers. We support reading secrets from Azure Key Vault and from the VM's local filesystem.
+# Multiple secret providers can be defined and each must be given a unique name, which is referenced later in the config.
+# A secret provider of type `key_vault`, which contains details required to connect to the Azure Key Vault and allow connection to the Data Product's input storage account. This is always required.
+# A secret provider of type `file_system`, which specifies a directory on the VM where secrets are stored. For example, for an SFTP pull source, this can store the credentials for connecting to the SFTP server.
+secret_providers:
+ - name: data_product_keyvault
+ provider:
+ type: key_vault
+ vault_name: contoso-dp-kv
+ auth:
+ tenant_id: ad5421f5-99e4-44a9-8a46-cc30f34e8dc7
+ identity_name: 98f3263d-218e-4adf-b939-eacce6a590d2
+ cert_path: /path/to/local/certkey.pkcs
+ - name: local_file_system
+ provider:
+ # The file system provider specifies a folder in which secrets are stored.
+ # Each secret must be an individual file without a file extension, where the secret name is the file name, and the file contains the secret only.
+ type: file_system
+ # The absolute path to the secrets directory
+ secrets_directory: /path/to/secrets/directory
+pipelines:
+ # Pipeline IDs must be unique for a given agent instance. Any URL reserved characters must be percent-encoded.
+ - id: mcc-edrs
+ source:
+ mcc_edrs:
+ <mcc edrs source configuration>
+ sink:
+ <sink configuration>
+ - id: contoso-logs
+ source:
+ sftp_pull:
+ <sftp pull source configuration>
+ sink:
+ <sink configuration>
+```
+
+## Sink configuration
+
+All pipelines require sink config, which covers upload of files to the Data Product's input storage account.
+
+```
+sink:
+ # The container within the Data Product's input storage account. This *must* be exactly the name of the container that Azure Operator Insights expects. See the Data Product documentation for what value is required.
+ container_name: example-container
+  # Optional. A string giving a base path to use in the container in the Data Product's input storage account. Reserved URL characters must be percent-encoded. See the Data Product documentation for what value, if any, is required.
+ base_path: base-path
+ # Optional. How often the sink should refresh its SAS token for the Data Product's input storage account. Defaults to 1h. Examples: 30s, 10m, 1h, 1d.
+ sas_token_cache_period: 1h
+ auth:
+ type: sas_token
+ # This must reference a secret provider configured above.
+ secret_provider: data_product_keyvault
+ # The name of a secret in the corresponding provider.
+ # This will be the name of a secret in the Key Vault.
+ # This is created by the Data Product and should not be changed.
+ secret_name: input-storage-sas
+ # Optional. The maximum number of blobs that can be uploaded to the Data Product's input storage account in parallel. Further blobs will be queued in memory until an upload completes. Defaults to 10.
+ # Note: This value is also the maximum number of concurrent SFTP reads for the SFTP pull source. Ensure your SFTP server can handle this many concurrent connections. If you set this to a value greater than 10 and are using an OpenSSH server, you may need to increase `MaxSessions` and/or `MaxStartups` in `sshd_config`.
+ maximum_parallel_uploads: 10
+ # Optional. The maximum size of each block that is uploaded to the Data Product's input storage account.
+ # Each blob is composed of one or more blocks. Defaults to 32 MiB. Units are B, KiB, MiB, GiB, etc.
+ block_size: 32 MiB
+```
+
+## Source configuration
+
+All pipelines require source config, which covers how the ingestion agent ingests files and where from. There are two supported source types: MCC EDRs and SFTP pull.
+
+Combining different types of source in one agent instance isn't recommended in production, but is supported for lab trials and testing.
+
+### MCC EDR source configuration
+
+```
+source:
+ mcc_edrs:
+ # The maximum amount of data to buffer in memory before uploading. Units are B, KiB, MiB, GiB, etc.
+ message_queue_capacity: 32 MiB
+ # Quick check on the maximum RAM that the agent should use.
+ # This is a guide to check the other tuning parameters, rather than a hard limit.
+ maximum_overall_capacity: 1216 MiB
+ listener:
+ # The TCP port to listen on. Must match the port MCC is configured to send to. Defaults to 36001.
+ port: 36001
+ # EDRs greater than this size are dropped. Subsequent EDRs continue to be processed.
+ # This condition likely indicates MCC sending larger than expected EDRs. MCC is not normally expected
+ # to send EDRs larger than the default size. If EDRs are being dropped because of this limit,
+ # investigate and confirm that the EDRs are valid, and then increase this value. Units are B, KiB, MiB, GiB, etc.
+ soft_maximum_message_size: 20480 B
+ # EDRs greater than this size are dropped and the connection from MCC is closed. This condition
+ # likely indicates an MCC bug or MCC sending corrupt data. It prevents the agent from uploading
+ # corrupt EDRs to Azure. You should not need to change this value. Units are B, KiB, MiB, GiB, etc.
+ hard_maximum_message_size: 100000 B
+ batching:
+      # The maximum size of a single blob (file) to store in the Data Product's input storage account. Units are B, KiB, MiB, GiB, etc.
+      maximum_blob_size: 128 MiB
+ # The maximum time to wait when no data is received before uploading pending batched data to the Data Product's input storage account. Examples: 30s, 10m, 1h, 1d.
+ blob_rollover_period: 5m
+```
+
+### SFTP pull source configuration
+
+This configuration specifies which files are ingested from the SFTP server.
+
+Multiple SFTP pull sources can be defined for one agent instance, where they can reference either different SFTP servers, or different folders on the same SFTP server.
+
+```
+source:
+ sftp_pull:
+    # Information relating to the SFTP session.
+    server:
+ # The IP address or hostname of the SFTP server.
+ host: 192.0.2.0
+ # Optional. The port to connect to on the SFTP server. Defaults to 22.
+ port: 22
+ # The path on the VM to the 'known_hosts' file for the SFTP server. This file must be in SSH format and contain details of any public SSH keys used by the SFTP server. This is required by the agent to verify it is connecting to the correct SFTP server.
+ known_hosts_file: /path/to/known_hosts
+ # The name of the user on the SFTP server which the agent will use to connect.
+ user: sftp-user
+ auth:
+ # The name of the secret provider configured above which contains the secret for the SFTP user.
+ secret_provider: local_file_system
+ # The form of authentication to the SFTP server. This can take the values 'password' or 'ssh_key'. The appropriate field(s) must be configured below depending on which type is specified.
+ type: password
+ # Only for use with 'type: password'. The name of the file containing the password in the secrets_directory folder
+ secret_name: sftp-user-password
+ # Only for use with 'type: ssh_key'. The name of the file containing the SSH key in the secrets_directory folder
+ key_secret: sftp-user-ssh-key
+ # Optional. Only for use with 'type: ssh_key'. The passphrase for the SSH key. This can be omitted if the key is not protected by a passphrase.
+ passphrase_secret_name: sftp-user-ssh-key-passphrase
+ filtering:
+ # The path to a folder on the SFTP server that files will be uploaded to Azure Operator Insights from.
+ base_path: /path/to/sftp/folder
+ # Optional. A regular expression to specify which files in the base_path folder should be ingested. If not specified, the agent will attempt to ingest all files in the base_path folder (subject to exclude_pattern, settling_time and exclude_before_time).
+      include_pattern: '.*\.csv$'
+ # Optional. A regular expression to specify any files in the base_path folder which should not be ingested. Takes priority over include_pattern, so files which match both regular expressions will not be ingested.
+ exclude_pattern: '\.backup$'
+      # A duration, such as "10s", "5m", "1h". During an upload run, any files last modified within the settling time are not selected for upload, as they may still be being modified.
+ settling_time: 1m
+ # Optional. A datetime that adheres to the RFC 3339 format. Any files last modified before this datetime will be ignored.
+ exclude_before_time: "2022-12-31T21:07:14-05:00"
+ scheduling:
+ # An expression in cron format, specifying when upload runs are scheduled for this source. All times refer to UTC. The cron schedule should include fields for: second, minute, hour, day of month, month, day of week, and year. E.g.:
+      # `0 */3 * * * * *` for once every 3 minutes
+ # `0 30 5 * * * *` for 05:30 every day
+ # `0 15 3 * * Fri,Sat *` for 03:15 every Friday and Saturday
+ cron: "*/30 * * * Apr-Jul Fri,Sat,Sun 2025"
+```
operator-insights Ingestion Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/ingestion-agent-overview.md
+
+ Title: Overview of the Azure Operator Insights ingestion agent
+description: Understand how ingestion agents for Azure Operator Insights collect and upload data about your network to Azure.
+++++ Last updated : 12/8/2023+
+#CustomerIntent: As someone deploying Azure Operator Insights, I want to understand how ingestion agents work so that I can set one up and configure it for my network.
++
+# Ingestion agent overview
+
+An _ingestion agent_ uploads data to an Azure Operator Insights data product. We provide an ingestion agent called the Azure Operator Insights ingestion agent that you can install on a Linux virtual machine to upload data from your network. This ingestion agent supports uploading:
+
+- Affirmed Mobile Content Cloud (MCC) Event Data Record (EDR) data streams.
+- Files stored on an SFTP server.
+
+Combining different types of source in one agent instance isn't recommended in production, but is supported for lab trials and testing.
+
+## MCC EDR source overview
+
+An ingestion agent configured with an MCC EDR source is designed for use with an Affirmed Networks Mobile Content Cloud (MCC). It ingests Event Data Records (EDRs) from MCC network elements, and uploads them to Azure Operator Insights. To learn more, see [Quality of Experience - Affirmed MCC Data Product](concept-mcc-data-product.md).
+
+## SFTP pull source overview
+
+An ingestion agent configured with an SFTP pull source collects files from one or more SFTP servers, and uploads them to Azure Operator Insights.
+
+### File sources
+
+An ingestion agent collects files from _ingestion pipelines_ that you configure on it. A pipeline includes the details of the SFTP server, the files to collect from it and how to manage those files.
+
+For example, a single SFTP server might have logs, CSV files and text files. You could configure each type of file as a separate ingestion pipeline. For each ingestion pipeline, you can specify the directory to collect files from (optionally including or excluding specific files based on file paths), how often to collect files and other options. For full details of the available options, see [Configuration reference for Azure Operator Insights ingestion agent](ingestion-agent-configuration-reference.md).
+
+Ingestion pipelines have the following restrictions:
+
+- They must not overlap, meaning that they must not collect the same files from the same servers.
+- You must configure each pipeline on exactly one agent. If you configure a pipeline on multiple agents, Azure Operator Insights receives duplicate data.
+
+### Processing files
+
+The ingestion agent uploads files to Azure Operator Insights during scheduled _upload runs_. The frequency of these runs is defined in the pipeline's configuration. Each upload run uploads files according to the pipeline's configuration (a sketch of these settings follows this list):
+
+- File paths and regular expressions for including and excluding files specify the files to upload.
+- The _settling time_ excludes files last modified within this period from any upload. For example, if the upload run starts at 05:30 and the settling time is 60 seconds (one minute), the upload run only uploads files modified before 05:29.
+- The _exclude before time_ (if set) excludes files last modified before the specified date and time.
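+
+A minimal sketch of these filtering settings, using values from the configuration reference:
+
+```
+filtering:
+  base_path: /path/to/sftp/folder
+  # Don't upload files modified within the last minute.
+  settling_time: 1m
+  # Ignore files last modified before this datetime.
+  exclude_before_time: "2022-12-31T21:07:14-05:00"
+```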
+
+The ingestion agent records when it last completed an upload run for a file source. It uses this record to determine which files to upload during the next upload run, using the following process:
+
+1. The agent checks the last recorded time.
+1. The agent uploads any files modified since that time. It assumes that it processed older files during a previous upload run.
+1. At the end of the upload run:
+ - If the agent uploaded all the files or the only errors were nonretryable errors, the agent updates the record. The new time is based on the time the upload run started, minus the settling time.
+ - If the upload run had retryable errors (for example, if the connection to Azure was lost), the agent doesn't update the record. Not updating the record allows the agent to retry the upload for any files that didn't upload successfully. Retries don't duplicate any data previously uploaded.
+
+The ingestion agent is designed to be highly reliable and resilient to low levels of network disruption. If an unexpected error occurs, the agent restarts and provides service again as soon as it's running. After a restart, the agent carries out an immediate catch-up upload run for all configured file sources. It then returns to its configured schedule.
+
+## Authentication
+
+The ingestion agent authenticates to two separate systems, with separate credentials.
+
+- To authenticate to the ingestion endpoint of an Azure Operator Insights Data Product, the agent obtains a connection string from an Azure Key Vault. The agent authenticates to this Key Vault with a Microsoft Entra ID service principal and certificate that you set up when you created the agent.
+- To authenticate to your SFTP server, the agent can use password authentication or SSH key authentication.
+
+For configuration instructions, see [Set up authentication to Azure](set-up-ingestion-agent.md#set-up-authentication-to-azure), [Prepare the VMs](set-up-ingestion-agent.md#prepare-the-vms) and [Configure the agent software](set-up-ingestion-agent.md#configure-the-agent-software).
+
+## Next step
+
+> [!div class="nextstepaction"]
+> [Install the Azure Operator Insights ingestion agent and configure it to upload data](set-up-ingestion-agent.md)
operator-insights Map Hostnames Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/map-hostnames-ip-addresses.md
+
+ Title: Map hostnames to IP addresses for the Azure Operator Insights ingestion agent.
+description: Configure the Azure Operator Insights ingestion agent to use fixed IP addresses instead of hostnames.
+++++ Last updated : 02/29/2024+
+#CustomerIntent: As an admin in an operator network, I want to make the ingestion agent work without DNS, so that the ingestion agent can upload data to Azure Operator Insights.
++
+# Map Microsoft hostnames to IP addresses for ingestion agents that can't resolve public hostnames
+
+The Azure Operator Insights ingestion agent needs to resolve some Microsoft hostnames. If the VMs onto which you install the agent can't use DNS to resolve these hostnames, you need to add entries on each agent VM to map the hostnames to IP addresses.
+
+This process assumes that you're connecting to Azure over ExpressRoute and are using Private Links and/or Service Endpoints. If you're connecting over public IP addressing, you **cannot** use this workaround. Your VMs must be able to resolve public hostnames.
+
+## Prerequisites
+
+- Peer an Azure virtual network to your ingestion agent.
+- [Create the Data Product that you want to use with this ingestion agent](data-product-create.md).
+- [Set up authentication to Azure](set-up-ingestion-agent.md#set-up-authentication-to-azure) and [Prepare the VMs](set-up-ingestion-agent.md#prepare-the-vms) for the ingestion agent.
+
+## Create service endpoints and private links
+
+1. Create the following resources from a virtual network that is peered to your ingestion agents.
+ - A Service Endpoint to Azure Storage.
+ - A Private Link or Service Endpoint to the Key Vault created by your Data Product. The Key Vault is the same one that you found in [Grant permissions for the Data Product Key Vault](set-up-ingestion-agent.md#grant-permissions-for-the-data-product-key-vault) when you started setting up the ingestion agent.
+1. Note the IP addresses of these two connections.
+
+## Find URLs for your Data Product
+
+1. Note the ingestion URL for your Data Product. You can find the ingestion URL on your Data Product overview page in the Azure portal, in the form *`<account-name>.blob.core.windows.net`*.
+1. Note the URL of the Data Product Key Vault. The URL appears as *`<vault-name>.vault.azure.net`*.
+
+## Look up a public IP address for login.microsoftonline.com
+
+Use a DNS lookup tool to find a public IP address for `login.microsoftonline.com`. For example:
+
+- On Windows:
+ ```
+ nslookup login.microsoftonline.com
+ ```
+- On Linux:
+ ```
+ dig login.microsoftonline.com
+ ```
+
+You can use any of the IP addresses.
++
+## Configure the ingestion agent to map between the IP addresses and the hostnames
+
+1. Add a line to */etc/hosts* on the VM for each of the storage account and the Key Vault, linking the two values in the following format.
+ ```
+ <Storage private IP>   <ingestion URL>
+ <Key Vault private IP>  <Key Vault URL>
+    ```
+1. Add the public IP address for `login.microsoftonline.com` to */etc/hosts*.
+ ```
+ <Public IP>   login.microsoftonline.com
+    ```
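+
+For example, the finished */etc/hosts* entries might look like the following sketch. All IP addresses and names here are illustrative; use the values from your own environment.
+
+```
+10.0.0.4       contosodpinput.blob.core.windows.net
+10.0.0.5       contoso-dp-kv.vault.azure.net
+20.190.151.68  login.microsoftonline.com
+```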
+
+## Next step
+
+> [!div class="nextstepaction"]
+> [Continue setting up the ingestion agent](set-up-ingestion-agent.md#install-the-agent-software)
operator-insights Monitor Operator Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/monitor-operator-insights.md
For a list of common queries for Azure Operator Insights, see the [Log Analytics
Azure Operator Insights also requires ingestion agents deployed in your network.
-Ingestion agents that we provide automatically collect metrics and logs for troubleshooting. Metrics and logs are stored on the VM on which you installed the agent, and aren't uploaded to Azure Monitor. For details, see the troubleshooting guidance for [MCC EDR Ingestion Agents](troubleshoot-mcc-edr-agent.md) or [SFTP Ingestion Agents](troubleshoot-sftp-agent.md).
+Ingestion agents that we provide automatically collect metrics and logs for troubleshooting. Metrics and logs are stored on the VM on which you installed the agent, and aren't uploaded to Azure Monitor. For details, see [Monitor and troubleshoot ingestion agents for Azure Operator Insights](monitor-troubleshoot-ingestion-agent.md).
## Next steps
operator-insights Monitor Troubleshoot Ingestion Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/monitor-troubleshoot-ingestion-agent.md
+
+ Title: Monitor and troubleshoot ingestion agents for Azure Operator Insights
+description: Learn how to detect, troubleshoot, and fix problems with Azure Operator Insights ingestion agents.
+++++ Last updated : 02/29/2024+
+#CustomerIntent: As someone managing an agent that has already been set up, I want to monitor and troubleshoot it so that data products in Azure Operator Insights receive the correct data.
+++
+# Monitor and troubleshoot Azure Operator Insights ingestion agents
+
+For an overview of ingestion agents, see [Ingestion agent overview](ingestion-agent-overview.md).
+
+If you notice problems with data collection from your ingestion agents, use the information in this section to fix common problems or create a diagnostics package. You can upload the diagnostics package to support tickets that you create in the Azure portal.
+
+The ingestion agent is a software package, so the diagnostics are limited to the functioning of the application. We don't provide OS or resource monitoring. You're encouraged to use standard tooling such as snmpd, Prometheus node exporter, or other tools to send OS-level data, logs, and metrics to your own monitoring systems. [Monitor virtual machines with Azure Monitor](../azure-monitor/vm/monitor-virtual-machine.md) describes tools you can use if your ingestion agents are running on Azure VMs.
+
+The agent writes logs and metrics to files under */var/log/az-aoi-ingestion/*. If the agent is failing to start for any reason, such as misconfiguration, the *stdout.log* file contains human-readable logs explaining the issue.
+
+Metrics are reported in a simple human-friendly form.
+
+## Prerequisites
+
+- For most of these troubleshooting techniques, you need an SSH connection to the VM running the agent.
+
+## Collect diagnostics
+
+Microsoft Support might request diagnostic packages when investigating an issue.
+
+To collect a diagnostics package, SSH to the Virtual Machine and run the command `/usr/bin/microsoft/az-aoi-ingestion-gather-diags`. This command generates a date-stamped zip file in the current directory that you can copy from the system.
+
+> [!NOTE]
+> Diagnostics packages don't contain any customer data or the value of any credentials.
+
+## Problems common to all sources
+
+Problems broadly fall into four categories.
+
+- An agent misconfiguration, which prevents the agent from starting.
+- A problem with receiving data from the source, typically misconfiguration, or network connectivity.
+- A problem with uploading files to the Data Product's input storage account, typically network connectivity.
+- A problem with the VM on which the agent is running.
+
+### Agent fails to start
+
+Symptoms: `sudo systemctl status az-aoi-ingestion` shows that the service is in a failed state.
+
+- Ensure the service is running.
+ ```
+ sudo systemctl start az-aoi-ingestion
+ ```
+- Look at the */var/log/az-aoi-ingestion/stdout.log* file and check for any reported errors. Fix any issues with the configuration file and start the agent again.
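+
+You can also check the service's recent state transitions and any startup output captured by systemd. This is standard systemd tooling rather than anything specific to the ingestion agent:
+
+```
+sudo journalctl -u az-aoi-ingestion --since "1 hour ago"
+```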
+
+### No data appearing in AOI
+
+Symptoms: no data appears in Azure Data Explorer.
+
+- Check the network connectivity and firewall configuration between the ingestion agent VM and the Data Product's input storage account.
+- Check the logs from the ingestion agent for errors uploading to Azure. If the logs point to authentication issues, check that the agent configuration has the correct sink settings and authentication for your data product. Then restart the agent.
+- Check that the ingestion agent is receiving data from its source. Check the network connectivity and firewall configuration between your network and the ingestion agent.
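+
+To test connectivity from the agent VM to the Data Product's input storage account, you can make a simple HTTPS request to the ingestion URL shown on the Data Product overview page. Any HTTP response, even an error status, shows that DNS resolution and the network path are working:
+
+```
+curl -sSI https://<account-name>.blob.core.windows.net
+```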
+
+## Problems with the MCC EDR source
+
+This section covers problems specific to the MCC EDR source.
+
+You can also use the diagnostics provided by the MCCs, or by Azure Operator Insights itself in Azure Monitor, to help identify and debug ingestion issues.
+
+### MCC can't connect
+
+Symptoms: MCC reports alarms about MSFs being unavailable.
+
+- Check that the agent is running.
+- Ensure that MCC is configured with the correct IP and port.
+- Check the logs from the agent and see if it's reporting connections. If not, check the network connectivity to the agent VM and verify that the firewalls aren't blocking traffic to port 36001.
+- Collect a packet capture to see where the connection is failing.
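+
+For example, you could capture traffic arriving on the agent's listening port with the standard `tcpdump` tool, assuming it's installed on the VM, and inspect the capture to see where the connection is failing:
+
+```
+sudo tcpdump -i any -w mcc-connection.pcap port 36001
+```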
+
+### No EDRs appearing in AOI
+
+Symptoms: no data appears in Azure Data Explorer.
+
+- Check that the MCC is healthy and ingestion agents are running.
+- Check the logs from the ingestion agent for errors uploading to Azure. If the logs point to an invalid connection string, or connectivity issues, fix the configuration, connection string, or SAS token, and restart the agent.
+- Check the network connectivity and firewall configuration on the storage account.
+
+### Data missing or incomplete
+
+Symptoms: Azure Monitor shows a lower incoming EDR rate in ADX than expected.
+
+- Check that the agent is running on all VMs and isn't reporting errors in logs.
+- Verify that the agent VMs aren't being sent more than the rated load.
+- Check agent metrics for dropped bytes/dropped EDRs. If the metrics don't show any dropped data, then MCC isn't sending the data to the agent. Check the "received bytes" metrics to see how much data is being received from MCC.
+- Check that the agent VM isn't overloaded – monitor CPU and memory usage. In particular, ensure no other process is taking resources from the VM.
+
+## Problems with the SFTP pull source
+
+This section covers problems specific to the SFTP pull source.
+
+You can also use the diagnostics provided by Azure Operator Insights itself in Azure Monitor to help identify and debug ingestion issues.
+
+### Agent can't connect to SFTP server
+
+Symptoms: No files are uploaded to AOI. The agent log file, */var/log/az-aoi-ingestion/stdout.log*, contains errors about connecting to the SFTP server.
+
+- Verify the SFTP user and credentials used by the agent are valid for the SFTP server.
+- Check network connectivity and firewall configuration between the agent and the SFTP server. By default, the SFTP server must have port 22 open to accept SFTP connections.
+- Check that the `known_hosts` file on the agent VM contains a valid public SSH key for the SFTP server:
+  - On the agent VM, run `ssh-keygen -l -F <sftp-server-IP-or-hostname>`.
+  - If there's no output, then `known_hosts` doesn't contain a matching entry. Follow the instructions in [Set up the Azure Operator Insights ingestion agent](set-up-ingestion-agent.md) to add a `known_hosts` entry for the SFTP server.
+
+### No files are uploaded to Azure Operator Insights
+
+Symptoms: No data appears in Azure Data Explorer. The AOI *Data Ingested* metric for the relevant data type is zero.
+
+- Check that the agent is running on all VMs and isn't reporting errors in logs.
+- Check that files exist in the correct location on the SFTP server, and that they aren't being excluded due to file source config (see [Files are missing](#files-are-missing)).
+- Check the network connectivity and firewall configuration between the ingestion agent VM and the Data Product's input storage account.
+
+### Files are missing
+
+Symptoms: Data is missing from Azure Data Explorer. The AOI *Data Ingested* and *Processed File Count* metrics for the relevant data type are lower than expected.
+
+- Check that the agent is running on all VMs and isn't reporting errors in logs. Search the logs for the name of the missing file to find errors related to that file.
+- Check that the files exist on the SFTP server and that they aren't being excluded due to file source config. Check the file source config and confirm that:
+ - The files exist on the SFTP server under the path defined in `base_path`. Ensure that there are no symbolic links in the file paths of the files to upload: the ingestion agent ignores symbolic links.
+ - The "last modified" time of the files is at least `settling_time` seconds earlier than the time of the most recent upload run for this file source.
+ - The "last modified" time of the files is later than `exclude_before_time` (if specified).
+ - The file path relative to `base_path` matches the regular expression given by `include_pattern` (if specified).
+ - The file path relative to `base_path` *doesn't* match the regular expression given by `exclude_pattern` (if specified).
+- If recent files are missing, check the agent logs to confirm that the ingestion agent performed an upload run for the source at the expected time. The `cron` parameter in the source config gives the expected schedule.
+- Check that the agent VM isn't overloaded – monitor CPU and memory usage. In particular, ensure no other process is taking resources from the VM.
+
+### Files are uploaded more than once
+
+Symptoms: Duplicate data appears in Azure Operator Insights.
+
+- Check whether the ingestion agent encountered a retryable error on a previous upload and then retried that upload more than 24 hours after the last successful upload. In that case, the agent might upload duplicate data during the retry attempt. The duplication of data should affect only the retry attempt.
+- Check that the file sources defined in the config file refer to nonoverlapping sets of files. If multiple file sources are configured to pull files from the same location on the SFTP server, use the `include_pattern` and `exclude_pattern` config fields to specify distinct sets of files that each file source should consider.
+- If you're running multiple instances of the SFTP ingestion agent, check that the file sources configured for each agent don't overlap with file sources on any other agent. In particular, look out for file source config that was accidentally copied from another agent's config.
+- If you recently changed the pipeline `id` for a configured file source, use the `exclude_before_time` field to avoid files being reuploaded with the new pipeline `id`. For instructions, see [Change configuration for ingestion agents for Azure Operator Insights](change-ingestion-agent-configuration.md).
+
+## Related content
+
+Learn how to:
+
+- [Change configuration for ingestion agents](change-ingestion-agent-configuration.md).
+- [Upgrade ingestion agents](upgrade-ingestion-agent.md).
+- [Rotate secrets for ingestion agents](rotate-secrets-for-ingestion-agent.md).
operator-insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/overview.md
We provide the following Data Products.
|Data Product |Purpose |Supporting ingestion agent| ||||
-|[Quality of Experience - Affirmed MCC Data Product](concept-mcc-data-product.md) | Analysis and insight from EDRs provided by Affirmed Networks Mobile Content Cloud (MCC) network elements| [MCC EDR ingestion agent](how-to-install-mcc-edr-agent.md)|
-| [Monitoring - Affirmed MCC Data Product](concept-monitoring-mcc-data-product.md) | Analysis and insight from performance management data (performance statistics) from Affirmed Networks MCC network elements| [SFTP ingestion agent](sftp-agent-overview.md) |
+|[Quality of Experience - Affirmed MCC Data Product](concept-mcc-data-product.md) | Analysis and insight from EDRs provided by Affirmed Networks Mobile Content Cloud (MCC) network elements| [Azure Operator Insights ingestion agent](ingestion-agent-overview.md) configured to use EDRs as a source|
+| [Monitoring - Affirmed MCC Data Product](concept-monitoring-mcc-data-product.md) | Analysis and insight from performance management data (performance statistics) from Affirmed Networks MCC network elements| [Azure Operator Insights ingestion agent](ingestion-agent-overview.md) configured to use SFTP as a source |
If you prefer, you can provide your own ingestion agent to upload data to your chosen Data Product.
operator-insights Rotate Secrets For Ingestion Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/rotate-secrets-for-ingestion-agent.md
+
+ Title: Rotate secrets for ingestion agents for Azure Operator Insights
+description: Learn how to rotate secrets for Azure Operator Insights ingestion agents.
+++++ Last updated : 02/29/2024+
+#CustomerIntent: As someone managing an agent that has already been set up, I want to rotate its secrets so that data products in Azure Operator Insights continue to receive the correct data.
+
+# Rotate secrets for Azure Operator Insights ingestion agents
+
+The ingestion agent is a software package that is installed onto a Linux Virtual Machine (VM) owned and managed by you.
+
+It uses a service principal to obtain, from the Data Product's Azure Key Vault, the credentials needed to upload data to the Data Product's input storage account.
+
+You must refresh your service principal credentials before they expire. In this article, you'll rotate the service principal certificates on the ingestion agent.
+
+## Prerequisites
+
+None.
+
+## Rotate certificates
+
+1. Create a new certificate, and add it to the service principal. For instructions, refer to [Upload a trusted certificate issued by a certificate authority](/entra/identity-platform/howto-create-service-principal-portal).
+1. Obtain the new certificate and private key in the base64-encoded PKCS12 format, as described in [Set up Ingestion Agents for Azure Operator Insights](set-up-ingestion-agent.md). A sketch of the conversion commands follows these steps.
+1. Copy the certificate to the ingestion agent VM.
+1. Save a copy of the existing certificate file, then replace it with the new certificate file.
+1. Restart the agent.
+ ```
+ sudo systemctl restart az-aoi-ingestion.service
+ ```
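+
+As a reminder, the conversion commands from [Set up Ingestion Agents for Azure Operator Insights](set-up-ingestion-agent.md) are sketched here; the filenames are placeholders:
+
+```
+openssl pkcs12 -nodes -export -in <pem-certificate-filename> -inkey <pem-key-filename> -out <pkcs12-certificate-filename>
+base64 -w 0 <pkcs12-certificate-filename> > <base64-encoded-pkcs12-certificate-filename>
+```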
+
+## Related content
+
+Learn how to:
+
+- [Monitor and troubleshoot ingestion agents](monitor-troubleshoot-ingestion-agent.md).
+- [Change configuration for ingestion agents](change-ingestion-agent-configuration.md).
+- [Upgrade ingestion agents](upgrade-ingestion-agent.md).
operator-insights Set Up Ingestion Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/set-up-ingestion-agent.md
+
+ Title: Set up the Azure Operator Insights ingestion agent
+description: Set up the ingestion agent for Azure Operator Insights by installing it and configuring it to upload data to data products.
+++++ Last updated : 02/29/2024+
+#CustomerIntent: As an admin in an operator network, I want to upload data to Azure Operator Insights so that my organization can use Azure Operator Insights.
++
+# Install the Azure Operator Insights ingestion agent and configure it to upload data
+
+When you follow this article, you set up an Azure Operator Insights _ingestion agent_ on a virtual machine (VM) in your network and configure it to upload data to a data product. This ingestion agent supports uploading:
+
+- Files stored on an SFTP server.
+- Affirmed Mobile Content Cloud (MCC) Event Data Record (EDR) data streams.
+
+For an overview of ingestion agents, see [Ingestion agent overview](ingestion-agent-overview.md).
+
+## Prerequisites
+
+From the documentation for your data product, obtain the following:
+- Specifications for the VM on which you plan to install the ingestion agent.
+- Sample configuration for the ingestion agent.
+
+## VM security recommendations
+
+The VM used for the ingestion agent should be set up following best practice for security. For example:
+
+- Networking - Only allow network traffic on the ports that are required to run the agent and maintain the VM.
+- OS version - Keep the OS version up-to-date to avoid known vulnerabilities.
+- Access - Limit access to the VM to a minimal set of users, and set up audit logging for their actions. We recommend that you restrict the following.
+ - Admin access to the VM (for example, to stop/start/install the ingestion agent).
+ - Access to the directory where the logs are stored: */var/log/az-aoi-ingestion/*.
+ - Access to the certificate and private key for the service principal that you create during this procedure.
+ - Access to the directory for secrets that you create on the VM during this procedure.
+
+## Download the RPM for the agent
+
+Download the RPM for the ingestion agent using the details you received as part of the [Azure Operator Insights onboarding process](overview.md#how-do-i-get-access-to-azure-operator-insights) or from [https://go.microsoft.com/fwlink/?linkid=2260508](https://go.microsoft.com/fwlink/?linkid=2260508).
+
+## Set up authentication to Azure
+
+You must have a service principal with a certificate credential that can access the Azure Key Vault created by the Data Product to retrieve storage credentials. Each agent must also have a copy of a valid certificate and private key for the service principal stored on its virtual machine.
+
+### Create a service principal
+
+> [!IMPORTANT]
+> You may need a Microsoft Entra tenant administrator in your organization to perform this setup for you.
+
+1. Create or obtain a Microsoft Entra ID service principal. Follow the instructions detailed in [Create a Microsoft Entra app and service principal in the portal](/entra/identity-platform/howto-create-service-principal-portal). Leave the **Redirect URI** field empty.
+1. Note the Application (client) ID, and your Microsoft Entra Directory (tenant) ID (these IDs are UUIDs of the form xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx, where each character is a hexadecimal digit).
+
+### Prepare certificates
+
+The ingestion agent only supports certificate-based authentication for service principals. It's up to you whether you use the same certificate and key for each VM, or use a unique certificate and key for each. Using a certificate per VM provides better security and has a smaller impact if a key is leaked or the certificate expires. However, this method adds more maintenance effort and operational complexity.
+
+1. Obtain one or more certificates. We strongly recommend using trusted certificates from a certificate authority.
+2. Add the certificate or certificates as credentials to your service principal, following [Create a Microsoft Entra app and service principal in the portal](/entra/identity-platform/howto-create-service-principal-portal).
+3. We **strongly recommend** additionally storing the certificates in a secure location such as Azure Key Vault. Doing so allows you to configure expiry alerting and gives you time to regenerate new certificates and apply them to your ingestion agents before they expire. Once a certificate expires, the agent is unable to authenticate to Azure and no longer uploads data. For details of this approach, see [Renew your Azure Key Vault certificates](../key-vault/certificates/overview-renew-certificate.md). If you choose to use Azure Key Vault:
+ - This Azure Key Vault must be a different instance, either one you already control, or a new one. You can't use the Data Product's Azure Key Vault.
+ - You need the 'Key Vault Certificates Officer' role on this Azure Key Vault in order to add the certificate to the Key Vault. See [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md) for details of how to assign roles in Azure.
+
+4. Ensure the certificates are available in pkcs12 format, with no passphrase protecting them. On Linux, you can convert a certificate and key from PEM format using openssl.
+ ```
+ openssl pkcs12 -nodes -export -in <pem-certificate-filename> -inkey <pem-key-filename> -out <pkcs12-certificate-filename>
+ ```
+
+> [!IMPORTANT]
+> The pkcs12 file must not be protected with a passphrase. When OpenSSL prompts you for an export password, press <kbd>Enter</kbd> to supply an empty passphrase.
+
+5. Validate your pkcs12 file. This command displays information about the pkcs12 file, including the certificate and private key.
+ ```
+ openssl pkcs12 -nodes -in <pkcs12-certificate-filename> -info
+ ```
+
+6. Ensure the pkcs12 file is base64 encoded. On Linux, you can base64 encode a pkcs12-formatted certificate by using the `base64` command.
+ ```
+ base64 -w 0 <pkcs12-certificate-filename> > <base64-encoded-pkcs12-certificate-filename>
+ ```
+
+### Grant permissions for the Data Product Key Vault
+
+1. Find the Azure Key Vault that holds the storage credentials for the input storage account. This Key Vault is in a resource group named *`<data-product-name>-HostedResources-<unique-id>`*.
+1. Grant your service principal the 'Key Vault Secrets User' role on this Key Vault. You need Owner level permissions on your Azure subscription. See [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md) for details of how to assign roles in Azure.
+1. Note the name of the Key Vault.
+
+## Prepare the SFTP server
+
+This section is only required for the SFTP pull source.
+
+On the SFTP server:
+
+1. Ensure that port 22/TCP on the SFTP server is open to connections from the agent VM.
+1. Create a new user, or determine an existing user on the SFTP server that the ingestion agent should use to connect to the SFTP server.
+1. Determine the authentication method that the ingestion agent should use to connect to the SFTP server. The agent supports:
+ - Password authentication
+ - SSH key authentication
+1. Configure the SFTP server to remove files after a period of time (a _retention period_). Ensure the retention period is long enough that the agent should process the files before the SFTP server deletes them. The example configuration file contains configuration for checking for new files every five minutes.
+
+> [!IMPORTANT]
+> Your SFTP server must remove files after a suitable retention period so that it does not run out of disk space. The ingestion agent does not remove files automatically.
+>
+> A shorter retention time reduces disk usage, increases the speed of the agent and reduces the risk of duplicate uploads. However, a shorter retention period increases the risk that data is lost if data cannot be retrieved by the agent or uploaded to Azure Operator Insights.
+
+## Prepare the VMs
+
+Repeat these steps for each VM onto which you want to install the agent.
+
+1. Ensure you have an SSH session open to the VM, and that you have `sudo` permissions.
+1. Install systemd, logrotate, and zip on the VM, if not already present. For example:
+ ```
+ sudo dnf install systemd logrotate zip
+ ```
+1. Obtain the ingestion agent RPM and copy it to the VM.
+1. Copy the pkcs12-formatted base64-encoded certificate (created in the [Prepare certificates](#prepare-certificates) step) to the VM, in a location accessible to the ingestion agent.
+1. Configure the agent VM based on the type of ingestion source.
+
+ # [SFTP sources](#tab/sftp)
+
+ 1. Verify that the VM has the following ports open. These ports must be open both in cloud network security groups and in any firewall running on the VM itself (such as firewalld or iptables).
+ - Port 443/TCP outbound to Azure
+ - Port 22/TCP outbound to the SFTP server
+ 1. Create a directory to use for storing secrets for the agent. We call this directory the _secrets directory_. Note its path.
+ 1. Create a file in the secrets directory containing the password or private SSH key for the SFTP server.
+ - The file must not have a file extension.
+ - Choose an appropriate name for this file, and note it for later. This name is referenced in the agent configuration.
+ - The file must contain only the secret value (password or SSH key), with no extra whitespace.
+ 1. If you're authenticating with an SSH key that's protected by a passphrase, use the same method to create a separate file that contains the passphrase. A combined sketch of this secrets setup follows the tip below.
+ 1. Ensure the SFTP server's public SSH key is listed in the VM's global known_hosts file, located at */etc/ssh/ssh_known_hosts*.
+
+ > [!TIP]
+ > Use the Linux command `ssh-keyscan` to add a server's SSH public key to a VM's *known_hosts* file manually. For example, `ssh-keyscan -H <server-ip> | sudo tee -a /etc/ssh/ssh_known_hosts`.
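+
+ As a combined sketch of the secrets setup above (the directory path, file name, and password value are all hypothetical):
+ ```
+ # Create the secrets directory.
+ sudo mkdir -p /etc/az-aoi-ingestion/secrets
+ # printf avoids a trailing newline, so the file contains only the secret value.
+ printf '%s' 'example-password' | sudo tee /etc/az-aoi-ingestion/secrets/sftp-password > /dev/null
+ ```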
+
+ # [MCC EDR sources](#tab/edr)
+
+ Verify that the VM has the following ports open. These ports must be open both in cloud network security groups and in any firewall running on the VM itself (such as firewalld or iptables).
+ - Port 36001/TCP inbound from the MCCs
+ - Port 443/TCP outbound to Azure
+
+
+
+## Ensure that VM can resolve Microsoft hostnames
+
+Check that the VM can resolve public hostnames to IP addresses. For example, open an SSH session and use `dig login.microsoftonline.com` to check that the VM can resolve `login.microsoftonline.com` to an IP address.
+
+If the VM can't use DNS to resolve public Microsoft hostnames to IP addresses, [map the required hostnames to IP addresses](map-hostnames-ip-addresses.md). Return to this procedure when you have finished the configuration.
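+
+For example, the following check prints only the resolved addresses; empty output suggests a DNS problem.
+```
+# +short prints just the resolved IP addresses.
+dig +short login.microsoftonline.com
+```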
+
+## Install the agent software
+
+Repeat these steps for each VM onto which you want to install the agent:
+
+1. In an SSH session, change to the directory where the RPM was copied.
+1. Install the RPM.
+ ```
+ sudo dnf install ./*.rpm
+ ```
+ Answer `y` when prompted. If there are any missing dependencies, the RPM won't be installed.
+
+## Configure the agent software
+
+The configuration you need is specific to the type of source and your Data Product. Ensure you have access to your Data Product's documentation to see the required values. For example:
+- [Quality of Experience - Affirmed MCC Data Product - required agent configuration](concept-mcc-data-product.md#required-agent-configuration)
+- [Monitoring - Affirmed MCC Data Product - required agent configuration](concept-monitoring-mcc-data-product.md#required-agent-configuration)
+
+1. Connect to the VM over SSH.
+1. Change to the configuration directory.
+ ```
+ cd /etc/az-aoi-ingestion
+ ```
+1. Make a copy of the default configuration file.
+ ```
+ sudo cp example_config.yaml config.yaml
+ ```
+1. Set the `agent_id` field to a unique identifier for the agent instance – for example `london-sftp-1`. This name becomes searchable metadata in Operator Insights for all data ingested by this agent. Reserved URL characters must be percent-encoded.
+1. Configure the `secret_providers` section.
+ # [SFTP sources](#tab/sftp)
+
+ SFTP sources require two types of secret providers.
+
+ - A secret provider of type `key_vault`, which contains details required to connect to the Data Product's Azure Key Vault and allow connection to the Data Product's input storage account.
+ - A secret provider of type `file_system`, which specifies a directory on the VM for storing credentials for connecting to an SFTP server.
+
+ 1. For the secret provider with type `key_vault` and name `data_product_keyvault`, set the following fields.
+ - `provider.vault_name` must be the name of the Key Vault for your Data Product. You identified this name in [Grant permissions for the Data Product Key Vault](#grant-permissions-for-the-data-product-key-vault).  
+ - `provider.auth`, containing:
+ - `tenant_id`: your Microsoft Entra ID tenant.
+ - `identity_name`: the application ID of the service principal that you created in [Create a service principal](#create-a-service-principal).
+ - `cert_path`: the file path of the base64-encoded pkcs12 certificate for the service principal to authenticate with. This can be any path on the agent VM.
+
+ 1. For the secret provider with type `file_system` and name `local_file_system`, set the following fields.
+ - `provider.auth.secrets_directory`: the absolute path to the secrets directory on the agent VM, which was created in the [Prepare the VMs](#prepare-the-vms) step.
+
+ You can add more secret providers (for example, if you want to upload to multiple data products) or change the names of the default secret providers.
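+
+ For orientation, here's a rough sketch of how these secret providers might look in *config.yaml*. This layout is illustrative only; the authoritative structure is the example configuration file shipped with the agent and the [configuration reference](ingestion-agent-configuration-reference.md). Values in angle brackets are placeholders.
+ ```
+ secret_providers:
+   - name: data_product_keyvault
+     type: key_vault
+     provider:
+       vault_name: <data-product-key-vault-name>
+       auth:
+         tenant_id: <tenant-id>
+         identity_name: <service-principal-application-id>
+         cert_path: <path-to-base64-pkcs12-certificate>
+   - name: local_file_system
+     type: file_system
+     provider:
+       auth:
+         secrets_directory: <path-to-secrets-directory>
+ ```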
+
+ # [MCC EDR sources](#tab/edr)
+
+ Configure a secret provider with type `key_vault` and name `data_product_keyvault`, setting the following fields.
+
+ 1. `provider.vault_name`: the name of the Key Vault for your Data Product. You identified this name in [Grant permissions for the Data Product Key Vault](#grant-permissions-for-the-data-product-key-vault).  
+ 1. `provider.auth`, containing:
+ - `tenant_id`: your Microsoft Entra ID tenant.
+ - `identity_name`: the application ID of the service principal that you created in [Create a service principal](#create-a-service-principal).
+ - `cert_path`: the file path of the base64-encoded pkcs12 certificate for the service principal to authenticate with. This can be any path on the agent VM.
+
+ You can add more secret providers (for example, if you want to upload to multiple data products) or change the name of the default secret provider.
+
+
+1. Configure the `pipelines` section using the example configuration and your Data Product's documentation. Each `pipeline` has three configuration sections.
+ - `id`. The ID identifies the pipeline and must not be the same as any other pipeline ID for this ingestion agent. Any URL reserved characters must be percent-encoded. Refer to your Data Product's documentation for any recommendations.
+ - `source`. Source configuration controls which files are ingested. You can configure multiple sources.
+
+ # [SFTP sources](#tab/sftp)
+
+ Delete all pipelines in the example except the `contoso-logs` example, which contains `sftp_pull` source configuration.
+
+ Update the example to meet your requirements. The following fields are required for each source.
+
+ - `host`: the hostname or IP address of the SFTP server.
+ - `filtering.base_path`: the path to a folder on the SFTP server from which files are uploaded to Azure Operator Insights.
+ - `known_hosts_file`: the path on the VM to the global known_hosts file, located at `/etc/ssh/ssh_known_hosts`. This file should contain the public SSH keys of the SFTP host server as outlined in [Prepare the VMs](#prepare-the-vms).
+ - `user`: the name of the user on the SFTP server that the agent should use to connect.
+ - In `auth`, the `type` (`password` or `key`) you chose in [Prepare the VMs](#prepare-the-vms). For password authentication, set `secret_name` to the name of the file containing the password in the `secrets_directory` folder. For SSH key authentication, set `key_secret` to the name of the file containing the SSH key in the `secrets_directory` folder. If the key is protected with a passphrase, set `passphrase_secret_name`.
+
+ For required or recommended values for other fields, refer to the documentation for your Data Product.
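+
+ As with the secret providers, the following is an illustrative sketch of an `sftp_pull` source only; take the authoritative field layout from the example configuration file. Values in angle brackets are placeholders.
+ ```
+ source:
+   sftp_pull:
+     host: <sftp-server-hostname>
+     known_hosts_file: /etc/ssh/ssh_known_hosts
+     user: <sftp-username>
+     auth:
+       type: password
+       secret_name: <name-of-password-file-in-secrets-directory>
+     filtering:
+       base_path: <folder-to-upload-from>
+ ```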
+
+ > [!TIP]
+ > The agent supports additional optional configuration for the following:
+ > - Specifying a pattern of files in the `base_path` folder that will be uploaded (by default, all files in the folder are uploaded).
+ > - Specifying a pattern of files in the `base_path` folder that should not be uploaded.
+ > - A time and date before which files in the `base_path` folder aren't uploaded.
+ > - How often the ingestion agent uploads files (the value provided in the example configuration file corresponds to every hour).
+ > - A settling time: how long the agent waits after a file was last modified before uploading it (the example configuration file uses five minutes).
+ >
+ > For more information about these configuration options, see [Configuration reference for Azure Operator Insights ingestion agent](ingestion-agent-configuration-reference.md).
+
+ # [MCC EDR sources](#tab/edr)
+
+ Delete all pipelines in the example except `mcc_edrs`. Most of the fields in `mcc_edrs` are set to default values. You can leave them unchanged unless you need a specific value.
+
+
+ - `sink`. Sink configuration controls uploading data to the Data Product's input storage account.
+ - In the `auth` section, set the `secret_provider` to the appropriate `key_vault` secret provider for the Data Product, or use the default `data_product_keyvault` if you used the default name earlier. Leave `type` and `secret_name` unchanged.
+ - Refer to your Data Product's documentation for information on required values for other parameters.
+ > [!IMPORTANT]
+ > The `container_name` field must be set exactly as specified by your Data Product's documentation.
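+
+ For orientation, a completed sink section might look like the following sketch. It's illustrative only: keep `type` and `secret_name` exactly as they appear in the example configuration file, and take `container_name` from your Data Product's documentation.
+ ```
+ sink:
+   auth:
+     type: <as-in-example-config>
+     secret_provider: data_product_keyvault
+     secret_name: <as-in-example-config>
+   container_name: <as-specified-by-your-data-product>
+ ```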
+
+## Start the agent software
+
+1. Start the agent.
+ ```
+ sudo systemctl start az-aoi-ingestion
+ ```
+1. Check that the agent is running.
+ ```
+ sudo systemctl status az-aoi-ingestion
+ ```
+ 1. If you see any status other than `active (running)`, look at the logs as described in [Monitor and troubleshoot ingestion agents for Azure Operator Insights](monitor-troubleshoot-ingestion-agent.md) to understand the error; a quick log-tailing sketch follows these steps. It's likely that some configuration is incorrect.
+ 1. Once you resolve the issue, attempt to start the agent again.
+ 1. If issues persist, raise a support ticket.
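+
+ A quick way to tail the agent's logs, assuming the service writes to the systemd journal:
+ ```
+ sudo journalctl -u az-aoi-ingestion -f
+ ```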
+1. Once the agent is running, ensure it starts automatically after reboot.
+ ```
+ sudo systemctl enable az-aoi-ingestion.service
+ ```
+1. Save a copy of the delivered RPM – you need it to reinstall or to back out any future upgrades.
+
+## Related content
+
+Learn how to:
+
+- [View data in dashboards](dashboards-use.md).
+- [Query data](data-query.md).
+- [Monitor and troubleshoot ingestion agents](monitor-troubleshoot-ingestion-agent.md).
+- [Change configuration for ingestion agents](change-ingestion-agent-configuration.md).
+- [Upgrade ingestion agents](upgrade-ingestion-agent.md).
+- [Rotate secrets for ingestion agents](rotate-secrets-for-ingestion-agent.md).
operator-insights Upgrade Ingestion Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/upgrade-ingestion-agent.md
+
+ Title: Upgrade the Azure Operator Insights ingestion agent
+description: Learn how to upgrade the Azure Operator Insights ingestion agent to receive the latest new features or fixes.
+++++ Last updated : 02/29/2024+
+#CustomerIntent: As someone managing an agent that has already been set up, I want to upgrade it to receive the latest enhancements or fixes.
+
+# Upgrade Azure Operator Insights ingestion agents
+
+The ingestion agent is a software package that is installed onto a Linux Virtual Machine (VM) owned and managed by you. You might need to upgrade the agent.
+
+In this article, you'll learn how to upgrade your ingestion agent and how to roll back an upgrade.
+
+## Prerequisites
+
+Obtain the latest version of the ingestion agent RPM from [https://go.microsoft.com/fwlink/?linkid=2260508](https://go.microsoft.com/fwlink/?linkid=2260508).
+
+## Upgrade the agent software
+
+To upgrade to a new release of the agent, repeat the following steps on each VM that has the old agent.
+
+1. Ensure you have a copy of the currently running version of the RPM, in case you need to roll back the upgrade.
+1. Copy the new RPM to the VM.
+1. Connect to the VM over SSH, and change to the directory where the RPM was copied.
+1. Save a copy of the existing */etc/az-aoi-ingestion/config.yaml* configuration file.
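+ For example (the backup location is arbitrary):
+ ```
+ sudo cp /etc/az-aoi-ingestion/config.yaml ~/config.yaml.backup
+ ```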
+1. Upgrade the RPM.
+ ```
+ sudo dnf install ./*.rpm
+ ```
+ Answer `y` when prompted.
+1. Make any changes to the configuration file described by your support contact or the documentation for the new version. Most upgrades don't require any configuration changes.
+1. Restart the agent.
+ ```
+ sudo systemctl restart az-aoi-ingestion.service
+ ```
+1. Once the agent is running, configure the az-aoi-ingestion service to automatically start on a reboot.
+ ```
+ sudo systemctl enable az-aoi-ingestion.service
+ ```
+1. Verify that the agent is running and that it's copying files as described in [Monitor and troubleshoot Ingestion Agents for Azure Operator Insights](monitor-troubleshoot-ingestion-agent.md).
+
+## Roll back an upgrade
+
+If an upgrade or configuration change fails:
+
+1. Restore the configuration file that you backed up before the change, copying it to */etc/az-aoi-ingestion/config.yaml*.
+1. Downgrade back to the original RPM.
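+ For example, if the directory containing the original RPM is your current directory:
+ ```
+ sudo dnf downgrade ./*.rpm
+ ```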
+1. Restart the agent.
+ ```
+ sudo systemctl restart az-aoi-ingestion.service
+ ```
+1. When the agent is running, configure the az-aoi-ingestion service to automatically start on a reboot.
+ ```
+ sudo systemctl enable az-aoi-ingestion.service
+ ```
+
+## Related content
+
+Learn how to:
+
+- [Monitor and troubleshoot ingestion agents](monitor-troubleshoot-ingestion-agent.md).
+- [Change configuration for ingestion agents](change-ingestion-agent-configuration.md).
+- [Rotate secrets for ingestion agents](rotate-secrets-for-ingestion-agent.md).
postgresql How To Use Pgvector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-use-pgvector.md
Title: How to enable and use pgvector
-description: How to enable and use pgvector on Azure Database for PostgreSQL - Flexible Server.
+ Title: Vector search on Azure Database for PostgreSQL
+description: Vector search capabilities for retrieval augmented generation (RAG) on Azure Database for PostgreSQL.
sap Extensibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/extensibility.md
The Ansible playbooks must be named according to the following naming convention:
'Playbook name_pre' for playbooks to be run before the SDAF playbook and 'Playbook name_post' for playbooks to be run after the SDAF playbook.
-| Playbook name | Playbook name for 'pre' tasks | Playbook name for 'post' tasks | Description |
-| -- | | - | -- |
-| `playbook_01_os_base_config.yaml` | `playbook_01_os_base_config_pre.yaml` | `playbook_01_os_base_config_post.yaml` | Base operating system configuration |
-| `playbook_02_os_sap_specific_config.yaml` | `playbook_02_os_sap_specific_config_pre.yaml` | `playbook_02_os_sap_specific_config_post.yaml` | SAP specific configuration |
-| `playbook_03_bom_processing.yaml` | `playbook_03_bom_processing_pre.yaml` | `playbook_03_bom_processing_post.yaml` | Bill of Material processing |
-| `playbook_04_00_00_db_install.yaml` | `playbook_04_00_00_db_install_pre.yaml` | `playbook_04_00_00_db_install_post.yaml` | Database server installation |
-| `playbook_04_00_01_db_ha.yaml` | `playbook_04_00_01_db_ha_pre.yaml` | `playbook_04_00_01_db_ha_post.yaml` | Database High Availability configuration |
-| `playbook_05_00_00_sap_scs_install.yaml` | `playbook_05_00_00_sap_scs_install_pre.yaml` | `playbook_05_00_00_sap_scs_install_post.yaml` | Central Services Installation and High Availability configuration |
-| `playbook_05_01_sap_dbload.yaml` | `playbook_05_01_sap_dbload_pre.yaml` | `playbook_05_01_sap_dbload_post.yaml` | Database load |
-| `playbook_05_02_sap_pas_install.yaml` | `playbook_05_02_sap_pas_install_pre.yaml` | `playbook_05_02_sap_pas_install_post.yaml` | Primary Application Server installation |
-| `playbook_05_03_sap_app_install.yaml` | `playbook_05_03_sap_app_install_pre.yaml` | `playbook_05_03_sap_app_install_post.yaml` | Application Server installation |
-| `playbook_05_04_sap_web_install.yaml` | `playbook_05_04_sap_web_install_pre.yaml` | `playbook_05_04_sap_web_install.yaml` | Web dispatcher installation |
+| Playbook name | Playbook name for 'pre' tasks | Playbook name for 'post' tasks | Description |
+| - | - | -- | -- |
+| `playbook_01_os_base_config.yaml` | `playbook_01_os_base_config_pre.yaml` | `playbook_01_os_base_config_post.yaml` | Base operating system configuration |
+| `playbook_02_os_sap_specific_config.yaml` | `playbook_02_os_sap_specific_config_pre.yaml` | `playbook_02_os_sap_specific_config_post.yaml` | SAP specific configuration |
+| `playbook_03_bom_processing.yaml` | `playbook_03_bom_processing_pre.yaml` | `playbook_03_bom_processing_post.yaml` | Bill of Material processing |
+| `playbook_04_00_00_db_install.yaml` | `playbook_04_00_00_db_install_pre.yaml` | `playbook_04_00_00_db_install_post.yaml` | Database server installation |
+| `playbook_04_00_01_db_ha.yaml` | `playbook_04_00_01_db_ha_pre.yaml` | `playbook_04_00_01_db_ha_post.yaml` | Database High Availability configuration |
+| `playbook_05_00_00_sap_scs_install.yaml` | `playbook_05_00_00_sap_scs_install_pre.yaml` | `playbook_05_00_00_sap_scs_install_post.yaml` | Central Services Installation and High Availability configuration |
+| `playbook_05_01_sap_dbload.yaml` | `playbook_05_01_sap_dbload_pre.yaml` | `playbook_05_01_sap_dbload_post.yaml` | Database load |
+| `playbook_05_02_sap_pas_install.yaml` | `playbook_05_02_sap_pas_install_pre.yaml` | `playbook_05_02_sap_pas_install_post.yaml` | Primary Application Server installation |
+| `playbook_05_03_sap_app_install.yaml` | `playbook_05_03_sap_app_install_pre.yaml` | `playbook_05_03_sap_app_install_post.yaml` | Application Server installation |
+| `playbook_05_04_sap_web_install.yaml` | `playbook_05_04_sap_web_install_pre.yaml` | `playbook_05_04_sap_web_install_post.yaml` | Web dispatcher installation |
+| `playbook_08_00_00_post_configuration_actions.yaml` | `playbook_08_00_00_post_configuration_actions_pre.yml` | `playbook_08_00_00_post_configuration_actions_post.yml` | Post Configuration Actions |
+> [!NOTE]
+> The playbook_08_00_00_post_configuration_actions.yaml step has no SDAF-provided roles or tasks; it exists only to provide `_pre` and `_post` hooks after SDAF completes the installation and configuration.
### Sample Ansible playbook
search Retrieval Augmented Generation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/retrieval-augmented-generation-overview.md
Last updated 11/20/2023
# Retrieval Augmented Generation (RAG) in Azure AI Search
-Retrieval Augmentation Generation (RAG) is an architecture that augments the capabilities of a Large Language Model (LLM) like ChatGPT by adding an information retrieval system that provides grounding data. Adding an information retrieval system gives you control over grounding data used by an LLM when it formulates a response. For an enterprise solution, RAG architecture means that you can constrain generative AI to *your enterprise content* sourced from vectorized documents and images, and other data formats if you have embedding models for that content.
+Retrieval Augmented Generation (RAG) is an architecture that augments the capabilities of a Large Language Model (LLM) like ChatGPT by adding an information retrieval system that provides grounding data. Adding an information retrieval system gives you control over grounding data used by an LLM when it formulates a response. For an enterprise solution, RAG architecture means that you can constrain generative AI to *your enterprise content* sourced from vectorized documents and images, and other data formats if you have embedding models for that content.
The decision about which information retrieval system to use is critical because it determines the inputs to the LLM. The information retrieval system should provide:
search Search Howto Run Reset Indexers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-run-reset-indexers.md
- ignite-2023 Previously updated : 12/18/2023 Last updated : 02/26/2024 # Run or reset indexers, skills, or documents
This article explains how to run indexers on demand, with and without a reset. I
## Indexer execution
-You can run multiple indexers at one time assuming you sufficient replicas (one indexer job per replica), but each indexer itself is single-instance. Starting a new instance while the indexer is already in execution produces this error: `"Failed to run indexer "<indexer name>" error: "Another indexer invocation is currently in progress; concurrent invocations are not allowed."`
+A search service runs one indexer job per [search unit](search-capacity-planning.md#concepts-search-units-replicas-partitions-shards). Every search service starts with one search unit, but each new partition or replica increases the search units of your service. You can check the search unit count in the **Essentials** section of the portal's **Overview** page. If you need concurrent processing, make sure you have sufficient replicas. Indexers don't run in the background, so you might detect more query throttling than usual if the service is under pressure.
+
+The following screenshot shows the number of search units, which determines how many indexers can run at once.
++
+Once indexer execution starts, you can't pause or stop it. Indexer execution stops when there are no more documents to load or refresh, or when the [maximum running time limit](search-limits-quotas-capacity.md#indexer-limits) is reached.
+
+You can run multiple indexers at one time assuming sufficient capacity, but each indexer itself is single-instance. Starting a new instance while the indexer is already in execution produces this error: `"Failed to run indexer "<indexer name>" error: "Another indexer invocation is currently in progress; concurrent invocations are not allowed."`
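+
+For reference, running an indexer on demand is a single REST call. Here's a sketch using curl; the service name, indexer name, API version, and admin key are placeholders for your own values.
+```
+curl -X POST "https://<service-name>.search.windows.net/indexers/<indexer-name>/run?api-version=2023-11-01" \
+  -H "api-key: <admin-api-key>"
+```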
An indexer job runs in a managed execution environment. Currently, there are two environments. You can't control or configure which environment is used. Azure AI Search determines the environment based on job composition and the ability of the service to move an indexer job onto a content processor (some [security features](search-indexer-securing-resources.md#indexer-execution-environment) block the multitenant environment).
search Vector Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-store.md
Considerations for vector storage include the following points:
In Azure AI Search, there are two patterns for working with search results.
-+ Generative search. Language models formulate a response to the user's query using data from Azure AI Search. This pattern usually includes an orchestration layer to coordinate prompts and maintain context. In this pattern, results are fed into prompt flows, received by chat models like GPT and Text-Davinci. This approach is based on [**Retrieval augmented generation (RAG)**](retrieval-augmented-generation-overview.md) architecture, where the search index provides the grounding data.
++ Generative search. Language models formulate a response to the user's query using data from Azure AI Search. This pattern includes an orchestration layer to coordinate prompts and maintain context. In this pattern, search results are fed into prompt flows, received by chat models like GPT and Text-Davinci. This approach is based on [**Retrieval augmented generation (RAG)**](retrieval-augmented-generation-overview.md) architecture, where the search index provides the grounding data.
-+ Classic search. Search engine formulates a response based on content in your index, and you render those results in a client app. In a direct response from the search engine, results are returned in a flattened row set, and you can choose which fields are passed to the client app. It's expected that you would populate the vector store (search index) with nonvector content that's human readable in your response. The search engine matches on vectors, but can return nonvector values from the same search document. [**Vector queries**](vector-search-how-to-query.md) and [**hybrid queries**](hybrid-search-how-to-query.md) cover the types of requests.
++ Classic search using a search bar, query input string, and rendered results. The search engine accepts and executes the vector query, formulates a response, and you render those results in a client app. In Azure AI Search, results are returned in a flattened row set, and you can choose which fields to include in search results. Since there's no chat model, it's expected that you would populate the vector store (search index) with nonvector content that's human readable in your response. Although the search engine matches on vectors, you should use nonvector values to populate the search results. [**Vector queries**](vector-search-how-to-query.md) and [**hybrid queries**](hybrid-search-how-to-query.md) cover the types of query requests you can formulate for classic search scenarios. Your index schema should reflect your primary use case. The following section highlights the differences in field composition for solutions built for generative AI or classic search.
An index schema for a vector store requires a name, a key field (string), one or
### Basic vector field configuration
-A vector field, such as `"content_vector"` in the following example, is of type `Collection(Edm.Single)`. It must be searchable and retrievable. It can't be filterable, facetable, or sortable, and it can't have analyzers, normalizers, or synonym map assignments. It must have dimensions set to the number of embeddings generated by the embedding model. For instance, if you're using text-embedding-ada-002, it generates 1,536 embeddings. A vector search profile is specified in a separate [vector search configuration](vector-search-how-to-create-index.md) and assigned to a vector field using a profile name.
+Vector fields are distinguished by their data type and vector-specific properties. Here's what a vector field looks like in a fields collection:
```json {
A vector field, such as `"content_vector"` in the following example, is of type
} ```
-### Fields collection for basic vector workloads
+Vector fields are of type `Collection(Edm.Single)`.
+
+Vector fields must be searchable and retrievable, but they can't be filterable, facetable, or sortable, or have analyzers, normalizers, or synonym map assignments.
+
+Vector fields must have `dimensions` set to the number of dimensions in the embeddings generated by the embedding model. For example, text-embedding-ada-002 generates embeddings with 1,536 dimensions for each chunk of text.
+
+Vector fields are indexed using algorithms indicated by a *vector search profile*, which is defined elsewhere in the index and thus not shown in the example. For more information, see [vector search configuration](vector-search-how-to-create-index.md).
-Here's an example showing a vector field in context, with other fields in a collection.
+### Fields collection for basic vector workloads
-The key field (required) is `"id"` in this example. The `"content"` field is the human readable equivalent of the `"content_vector"` field. Although if you're using language models exclusively for response formulation, you can skip nonvector content fields. Metadata fields are useful for filters, especially if metadata includes origin information about the source document. You can't filter on a vector field directly, but you can set prefilter or postfilter modes to filter before or after vector query execution.
+Vector stores require more fields besides vector fields. For example, a key field (`"id"` in this example) is an index requirement.
```json "name": "example-basic-vector-idx",
The key field (required) is `"id"` in this example. The `"content"` field is the
] ```
+Other fields, such as the `"content"` field, provide the human readable equivalent of the `"content_vector"` field. If you're using language models exclusively for response formulation, you can omit nonvector content fields, but solutions that push search results directly to client apps should have nonvector content.
+
+Metadata fields are useful for filters, especially if metadata includes origin information about the source document. You can't filter on a vector field directly, but you can set prefilter or postfilter modes to filter before or after vector query execution.
+ ### Schema generated by the Import and vectorize data wizard We recommend the [Import and vectorize data wizard](search-get-started-portal-import-vectors.md) for evaluation and proof-of-concept testing. The wizard generates the example schema in this section.
All vector indexing and query requests target an index. Endpoints are usually on
| `<your-service>.search.windows.net/indexes` | Targets the indexes collection. Used when creating, listing, or deleting an index. Admin rights are required for these operations, available through admin [API keys](search-security-api-keys.md) or a [Search Contributor role](search-security-rbac.md#built-in-roles-used-in-search). | | `<your-service>.search.windows.net/indexes/<your-index>/docs` | Targets the documents collection of a single index. Used when querying an index or data refresh. For queries, read rights are sufficient, and available through query API keys or a data reader role. For data refresh, admin rights are required. |
-#### How to connect to Azure AI Search
+### How to connect to Azure AI Search
+
+1. [Make sure you have permissions](search-security-rbac.md) or an [API access key](search-security-api-keys.md). Unless you're querying an existing index, you need admin rights or a contributor role assignment to manage and view content on a search service.
-1. [Start with the Azure portal](https://portal.azure.com). Azure subscribers, or the person who created the search service, can manage the search service in the Azure portal. An Azure subscription requires Contributor or above permissions to create or delete services. This permission level is sufficient for fully managing a search service in the Azure portal.
+1. [Start with the Azure portal](https://portal.azure.com). The person who created the search service can view and manage the search service, including granting access to others through the **Access control (IAM)** page.
-1. Try other clients for programmatic access. We recommend the quickstarts and samples for first steps:
+1. Move on to other clients for programmatic access. We recommend the quickstarts and samples for first steps:
+ [Quickstart: REST](search-get-started-vector.md) + [Vector samples](https://github.com/Azure/azure-search-vector-samples/blob/main/README.md)
sentinel Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/resources.md
Title: Useful resources when working with Microsoft Sentinel
-description: This document provides you with a list of useful resources when working with Microsoft Sentinel.
-
+ Title: Compare playbooks, workbooks, and notebooks | Microsoft Sentinel
+description: Learn about the differences between playbooks, workbooks, and notebooks in Microsoft Sentinel.
+ Previously updated : 11/09/2021- Last updated : 02/26/2024+
-# Useful resources for working with Microsoft Sentinel
+# Compare playbooks, workbooks, and notebooks
-This article lists resources that can help you get more information about working with Microsoft Sentinel.
+This article describes the differences between playbooks, workbooks, and notebooks in Microsoft Sentinel.
-## Learn more about creating queries
+## Compare by persona
-Microsoft Sentinel uses Azure Monitor Log Analytics's Kusto Query Language (KQL) to build queries. For more information, see:
+The following table compares Microsoft Sentinel playbooks, workbooks, and notebooks by the user persona:
-- [Kusto Query Language in Microsoft Sentinel](kusto-overview.md)-- [Useful resources for working with Kusto Query Language in Microsoft Sentinel](kusto-resources.md)
+|Resource |Personas |
+|||
+|**Playbooks** | <ul><li>SOC engineers</li><li>Analysts of all tiers</li></ul> |
+|**Workbooks** | <ul><li> SOC engineers</li><li>Analysts of all tiers</li></ul> |
+|**Notebooks** | <ul><li>Threat hunters and Tier-2/Tier-3 analysts</li><li>Incident investigators</li><li>Data scientists</li><li>Security researchers</li></ul> |
-## Microsoft Sentinel templates for data to monitor
+## Compare by use
-The [Microsoft Entra Security Operations Guide](../active-directory/fundamentals/security-operations-introduction.md) includes specific guidance and knowledge about data that's important to monitor for security purposes, for several operational areas.
+The following table compares Microsoft Sentinel playbooks, workbooks, and notebooks by use case:
-In each article, check for sections named [Things to monitor](../active-directory/fundamentals/security-operations-privileged-accounts.md#things-to-monitor) for lists of events that we recommend alerting on and investigating, as well as analytics rule templates to deploy directly to Microsoft Sentinel.
+|Resource |Uses |
+|||
+|**Playbooks** | Automation of simple, repeatable tasks:<ul><li>Ingesting external data </li><li>Data enrichment with TI, GeoIP lookups, and more </li><li> Investigation </li><li>Remediation </li></ul> |
+|**Workbooks** | <ul><li>Visualization</li></ul> |
+|**Notebooks** | <ul><li>Querying Microsoft Sentinel data and external data </li><li>Data enrichment with TI, GeoIP lookups, and WhoIs lookups, and more </li><li> Investigation </li><li> Visualization </li><li> Hunting </li><li>Machine learning and big data analytics </li></ul> |
-## Learn more about creating automation
-Create automation in Microsoft Sentinel using Azure Logic Apps, with a growing gallery of built-in playbooks.
+## Compare by advantages and challenges
-For more information, see [Azure Logic Apps connectors](/connectors/).
+The following table compares the advantages and challenges of playbooks, workbooks, and notebooks in Microsoft Sentinel:
-## Compare playbooks, workbooks, and notebooks
+|Resource |Advantages | Challenges |
+||||
+|**Playbooks** | <ul><li> Best for single, repeatable tasks </li><li>No coding knowledge required </li></ul> | <ul><li>Not suitable for ad-hoc and complex chains of tasks </li><li>Not ideal for documenting and sharing evidence</li></ul> |
+|**Workbooks** | <ul><li>Best for a high-level view of Microsoft Sentinel data </li><li>No coding knowledge required</li></ul> | <ul><li>Can't integrate with external data </li></ul> |
+|**Notebooks** | <ul><li>Best for complex chains of repeatable tasks </li><li>Ad-hoc, more procedural control</li><li>Easier to pivot with interactive functionality </li><li>Rich Python libraries for data manipulation and visualization </li><li>Machine learning and custom analysis </li><li>Easy to document and share analysis evidence </li></ul> | <ul><li> High learning curve and requires coding knowledge </li></ul> |
-The following table describes the differences between playbooks, workbooks, and notebooks in Microsoft Sentinel:
+## Related content
-| Category |Playbooks |Workbooks |Notebooks |
-|||||
-|**Personas** | <ul><li>SOC engineers</li><li>Analysts of all tiers</li></ul> | <ul><li> SOC engineers</li><li>Analysts of all tiers</li></ul> | <ul><li>Threat hunters and Tier-2/Tier-3 analysts</li><li>Incident investigators</li><li>Data scientists</li><li>Security researchers</li></ul> |
-|**Uses** | Automation of simple, repeatable tasks:<ul><li>Ingesting external data </li><li>Data enrichment with TI, GeoIP lookups, and more </li><li> Investigation </li><li>Remediation </li></ul> | <ul><li>Visualization</li></ul> | <ul><li>Querying Microsoft Sentinel data and external data </li><li>Data enrichment with TI, GeoIP lookups, and WhoIs lookups, and more </li><li> Investigation </li><li> Visualization </li><li> Hunting </li><li>Machine learning and big data analytics </li></ul> |
-|**Advantages** |<ul><li> Best for single, repeatable tasks </li><li>No coding knowledge required </li></ul> |<ul><li>Best for a high-level view of Microsoft Sentinel data </li><li>No coding knowledge required</li></ul> | <ul><li>Best for complex chains of repeatable tasks </li><li>Ad-hoc, more procedural control</li><li>Easier to pivot with interactive functionality </li><li>Rich Python libraries for data manipulation and visualization </li><li>Machine learning and custom analysis </li><li>Easy to document and share analysis evidence </li></ul> |
-|**Challenges** | <ul><li>Not suitable for ad-hoc and complex chains of tasks </li><li>Not ideal for documenting and sharing evidence</li></ul> | <ul><li>Cannot integrate with external data </li></ul> | <ul><li> High learning curve and requires coding knowledge </li></ul> |
-| **More information** | [Automate threat response with playbooks in Microsoft Sentinel](automate-responses-with-playbooks.md) | [Visualize collected data](get-visibility.md) | [Use Jupyter notebooks to hunt for security threats](notebooks.md) |
+For more information, see:
-
-## Comment on our blogs and forums
-
-We love hearing from our users.
-
-In the TechCommunity space for Microsoft Sentinel:
--- [View and comment on recent blog posts](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/bg-p/MicrosoftSentinelBlog)-- [Post your own questions about Microsoft Sentinel](https://techcommunity.microsoft.com/t5/microsoft-sentinel/bd-p/MicrosoftSentinel)-
-You can also send suggestions for improvements via our [User Voice](https://feedback.azure.com/d365community/forum/37638d17-0625-ec11-b6e6-000d3a4f07b8) program.
-
-## Join the Microsoft Sentinel GitHub community
-
-The [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel) is a powerful resource for threat detection and automation.
-
-Our Microsoft security analysts constantly create and add new workbooks, playbooks, hunting queries, and more, posting them to the community for you to use in your environment.
-
-Download sample content from the private community GitHub repository to create custom workbooks, hunting queries, notebooks, and playbooks for Microsoft Sentinel.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Get certified!](/training/paths/security-ops-sentinel/)
-
-> [!div class="nextstepaction"]
-> [Read customer use case stories](https://customers.microsoft.com/en-us/search?sq=%22Azure%20Sentinel%20%22&ff=&p=0&so=story_publish_date%20desc)
+- [Automate threat response with playbooks in Microsoft Sentinel](automate-responses-with-playbooks.md)
+- [Visualize collected data with workbooks](get-visibility.md)
+- [Use Jupyter notebooks to hunt for security threats](notebooks.md)
site-recovery Avs Tutorial Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/avs-tutorial-replication.md
Enable replication for VMs as follows:
## Next step
-After you enable replication, run a drill to make sure that everything works as expected.
-
-> [!div class="nextstepaction"]
-> [Run a disaster recovery drill](avs-tutorial-dr-drill-azure.md)
+After you enable replication, [run a disaster recovery drill](avs-tutorial-dr-drill-azure.md) to make sure that everything works as expected.
site-recovery Azure Stack Site Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-stack-site-recovery.md
Title: Replicate Azure Stack VMs to Azure using Azure Site Recovery | Microsoft Docs
+ Title: Replicate Azure Stack VMs to Azure using Azure Site Recovery
description: Learn how to set up disaster recovery to Azure for Azure Stack VMs with the Azure Site Recovery service. Previously updated : 10/02/2021 Last updated : 02/20/2024
With these steps complete, you can then run a full failover to Azure as and when
**Location** | **Component** |**Details** | |
-**Configuration server** | Runs on a single Azure Stack VM. | In each subscription you set up a configuration server VM. This VM runs the following Site Recovery components:<br/><br/> - Configuration server: Coordinates communications between on-premises and Azure, and manages data replication. - Process server: Acts as a replication gateway. It receives replication data, optimizes with caching, compression, and encryption; and sends it to Azure storage.<br/><br/> If VMs you want to replicate exceed the limits stated below, you can set up a separate standalone process server. [Learn more](vmware-azure-set-up-process-server-scale.md).
+**Configuration server** | Runs on a single Azure Stack VM. | In each subscription you set up a configuration server VM. This VM runs the following Site Recovery components:<br/><br/> - **Configuration server**: Coordinates communications between on-premises and Azure, and manages data replication. <br> <br> - **Process server**: Acts as a replication gateway. It receives replication data, optimizes with caching, compression, and encryption; and sends it to Azure storage.<br/><br/> If VMs you want to replicate exceed the limits stated below, you can set up a separate standalone process server. [Learn more](vmware-azure-set-up-process-server-scale.md).
**Mobility service** | Installed on each VM you want to replicate. | In the steps in this article, we prepare an account so that the Mobility service is installed automatically on a VM when replication is enabled. If you don't want to install the service automatically, there are a number of other methods you can use. [Learn more](vmware-azure-install-mobility-service.md). **Azure** | In Azure you need a Recovery Services vault, a storage account, and a virtual network. | Replicated data is stored in the storage account. Azure VMs are added to the Azure network when failover occurs.
Replication works as follows:
1. In the vault, you specify the replication source and target, set up the configuration server, create a replication policy, and enable replication. 2. The Mobility service is installed on the machine (if you've used push installation), and machines begin replication in accordance with the replication policy. 3. An initial copy of the server data is replicated to Azure storage.
-4. After initial replication finishes, replication of delta changes to Azure begins. Tracked changes for a machine are held in a .hrl file.
+4. After initial replication finishes, replication of delta changes to Azure begins. Tracked changes for a machine are held in an .hrl file.
5. The configuration server orchestrates replication management with Azure (port HTTPS 443 outbound). 6. The process server receives data from source machines, optimizes and encrypts it, and sends it to Azure storage (port 443 outbound). 7. Replicated machines communicate with the configuration server (port HTTPS 443 inbound, for replication management. Machines send replication data to the process server (port HTTPS 9443 inbound - can be modified).
In this article we replicated Azure Stack VMs to Azure. With replication in plac
## Next steps
-After failing back, you can reprotect the VM and start replicating it to Azure again To do this, repeat the steps in this article.
+After failing back, you can reprotect the VM and start replicating it to Azure again. To do this, repeat the steps in this article.
site-recovery Azure To Azure Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-architecture.md
# Azure to Azure disaster recovery architecture
-This article describes the architecture, components, and processes used when you deploy disaster recovery for Azure virtual machines (VMs) using the [Azure Site Recovery](site-recovery-overview.md) service. With disaster recovery set up, Azure VMs continuously replicate to a different target region. If an outage occurs, you can fail over VMs to the secondary region, and access them from there. When everything's running normally again, you can fail back and continue working in the primary location.
+This article describes the architecture, components, and processes used when you deploy disaster recovery for Azure virtual machines (VMs) using the [Azure Site Recovery](site-recovery-overview.md) service. With disaster recovery set up, Azure VMs continuously replicate to a different target region. If an outage occurs, you can fail over VMs to the secondary region, and access them from there. When everything's running normally again, you can fail back and continue working in the primary location.
The components involved in disaster recovery for Azure VMs are summarized in the
**Component** | **Requirements** | **VMs in source region** | One of more Azure VMs in a [supported source region](azure-to-azure-support-matrix.md#region-support).<br/><br/> VMs can be running any [supported operating system](azure-to-azure-support-matrix.md#replicated-machine-operating-systems).
-**Source VM storage** | Azure VMs can be managed, or have non-managed disks spread across storage accounts.<br/><br/>[Learn about](azure-to-azure-support-matrix.md#replicated-machinesstorage) supported Azure storage.
+**Source VM storage** | Azure VMs can be managed, or have nonmanaged disks spread across storage accounts.<br/><br/>[Learn about](azure-to-azure-support-matrix.md#replicated-machinesstorage) supported Azure storage.
**Source VM networks** | VMs can be located in one or more subnets in a virtual network (VNet) in the source region. [Learn more](azure-to-azure-support-matrix.md#replicated-machinesnetworking) about networking requirements. **Cache storage account** | You need a cache storage account in the source network. During replication, VM changes are stored in the cache before being sent to target storage. Cache storage accounts must be Standard.<br/><br/> Using a cache ensures minimal impact on production applications that are running on a VM.<br/><br/> [Learn more](azure-to-azure-support-matrix.md#cache-storage) about cache storage requirements.
-**Target resources** | Target resources are used during replication, and when a failover occurs. Site Recovery can set up target resource by default, or you can create/customize them.<br/><br/> In the target region, check that you're able to create VMs, and that your subscription has enough resources to support VM sizes that will be needed in the target region.
+**Target resources** | Target resources are used during replication, and when a failover occurs. Site Recovery can set up target resources by default, or you can create/customize them.<br/><br/> In the target region, check that you're able to create VMs, and that your subscription has enough resources to support VM sizes that are needed in the target region.
![Diagram showing source and target replication.](./media/concepts-azure-to-azure-architecture/enable-replication-step-1-v2.png)
When you enable replication for a VM, Site Recovery gives you the option of crea
You can manage target resources as follows: -- You can modify target settings as you enable replication. Please note that the default SKU for the target region VM is the same as the SKU of the source VM (or the next best available SKU in comparison to the source VM SKU). The dropdown list only shows relevant SKUs of the same family as the source VM (Gen 1 or Gen 2).-- You can modify target settings after replication is already working. Similar to other resources such as the target resource group, target name, and others, the target region VM SKU can also be updated after replication is in progress. A resource which cannot be updated is the availability type (single instance, set or zone). To change this setting, you need to disable replication, modify the setting, and then reenable.
+- You can modify target settings as you enable replication. Note that the default SKU for the target region VM is the same as the SKU of the source VM (or the next best available SKU in comparison to the source VM SKU). The dropdown list only shows relevant SKUs of the same family as the source VM (Gen 1 or Gen 2).
+- You can modify target settings after replication is already working. Similar to other resources such as the target resource group, target name, and others, the target region VM SKU can also be updated after replication is in progress. A resource that can't be updated is the availability type (single instance, set, or zone). To change this setting, you need to disable replication, modify the setting, and then reenable.
## Replication policy
When you enable Azure VM replication, Site Recovery creates a new replication po
**Policy setting** | **Details** | **Default** | |
-**Recovery point retention** | Specifies how long Site Recovery keeps recovery points. | 1 day
-**App-consistent snapshot frequency** | How often Site Recovery takes an app-consistent snapshot. | 0 hours (Disabled)
+**Recovery point retention** | Specifies how long Site Recovery keeps recovery points. | One day
+**App-consistent snapshot frequency** | How often Site Recovery takes an app-consistent snapshot. | Zero hours (Disabled)
### Managing replication policies
The following table explains different types of consistency.
**Description** | **Details** | **Recommendation** | |
-A crash consistent snapshot captures data that was on the disk when the snapshot was taken. It doesn't include anything in memory.<br/><br/> It contains the equivalent of the on-disk data that would be present if the VM crashed or the power cord was pulled from the server at the instant that the snapshot was taken.<br/><br/> A crash-consistent doesn't guarantee data consistency for the operating system, or for apps on the VM. | Site Recovery creates crash-consistent recovery points every five minutes by default. This setting can't be modified.<br/><br/> | Today, most apps can recover well from crash-consistent points.<br/><br/> Crash-consistent recovery points are usually sufficient for the replication of operating systems, and apps such as DHCP servers and print servers.
+A crash-consistent snapshot captures data that was on the disk when the snapshot was taken. It doesn't include anything in memory.<br/><br/> It contains the equivalent of the on-disk data that would be present if the VM crashed or the power cord was pulled from the server at the instant that the snapshot was taken.<br/><br/> A crash-consistent snapshot doesn't guarantee data consistency for the operating system, or for apps on the VM. | Site Recovery creates crash-consistent recovery points every five minutes by default. This setting can't be modified.<br/><br/> | Today, most apps can recover well from crash-consistent points.<br/><br/> Crash-consistent recovery points are sufficient for the replication of operating systems, and apps such as DHCP servers and print servers.
### App-consistent **Description** | **Details** | **Recommendation** | |
-App-consistent recovery points are created from app-consistent snapshots.<br/><br/> An app-consistent snapshot contain all the information in a crash-consistent snapshot, plus all the data in memory and transactions in progress. | App-consistent snapshots use the Volume Shadow Copy Service (VSS):<br/><br/> 1) Azure Site Recovery uses Copy Only backup (VSS_BT_COPY) method which does not change Microsoft SQL's transaction log backup time and sequence number </br></br> 2) When a snapshot is initiated, VSS perform a copy-on-write (COW) operation on the volume.<br/><br/> 3) Before it performs the COW, VSS informs every app on the machine that it needs to flush its memory-resident data to disk.<br/><br/> 4) VSS then allows the backup/disaster recovery app (in this case Site Recovery) to read the snapshot data and proceed. | App-consistent snapshots are taken in accordance with the frequency you specify. This frequency should always be less than you set for retaining recovery points. For example, if you retain recovery points using the default setting of 24 hours, you should set the frequency at less than 24 hours.<br/><br/>They're more complex and take longer to complete than crash-consistent snapshots.<br/><br/> They affect the performance of apps running on a VM enabled for replication.
+App-consistent recovery points are created from app-consistent snapshots.<br/><br/> An app-consistent snapshot contains all the information in a crash-consistent snapshot, plus all the data in memory and transactions in progress. | App-consistent snapshots use the Volume Shadow Copy Service (VSS):<br/><br/> 1) Azure Site Recovery uses the Copy Only backup (VSS_BT_COPY) method, which doesn't change Microsoft SQL's transaction log backup time and sequence number </br></br> 2) When a snapshot is initiated, VSS performs a copy-on-write (COW) operation on the volume.<br/><br/> 3) Before it performs the COW, VSS informs every app on the machine that it needs to flush its memory-resident data to disk.<br/><br/> 4) VSS then allows the backup/disaster recovery app (in this case Site Recovery) to read the snapshot data and proceed. | App-consistent snapshots are taken in accordance with the frequency you specify. This frequency should always be less than the time you set for retaining recovery points. For example, if you retain recovery points using the default setting of 24 hours, you should set the frequency at less than 24 hours.<br/><br/>They're more complex and take longer to complete than crash-consistent snapshots.<br/><br/> They affect the performance of apps running on a VM enabled for replication.
## Replication process
If outbound access for VMs is controlled with URLs, allow these URLs.
| Replication | `*.hypervrecoverymanager.windowsazure.com` | `*.hypervrecoverymanager.windowsazure.us` | Allows the VM to communicate with the Site Recovery service. | | Service Bus | `*.servicebus.windows.net` | `*.servicebus.usgovcloudapi.net` | Allows the VM to write Site Recovery monitoring and diagnostics data. | | Key Vault | `*.vault.azure.net` | `*.vault.usgovcloudapi.net` | Allows access to enable replication for ADE-enabled virtual machines via portal |
-| Azure Automation | `*.automation.ext.azure.com` | `*.azure-automation.us` | Allows enabling auto-upgrade of mobility agent for a replicated item via portal |
+| Azure Automation | `*.automation.ext.azure.com` | `*.azure-automation.us` | Allows enabling autoupgrade of mobility agent for a replicated item via portal |
### Outbound connectivity for IP address ranges To control outbound connectivity for VMs using IP addresses, allow these addresses.
-Please note that details of network connectivity requirements can be found in [networking white paper](azure-to-azure-about-networking.md#outbound-connectivity-using-service-tags)
+Note that details of network connectivity requirements can be found in the [networking white paper](azure-to-azure-about-networking.md#outbound-connectivity-using-service-tags).
#### Source region rules
Allow HTTPS outbound: port 443 | Allow ranges that correspond to Azure Key Vault
Allow HTTPS outbound: port 443 | Allow ranges that correspond to Azure Automation Controller (This is required only for enabling auto-upgrade of mobility agent for a replicated item via portal) | GuestAndHybridManagement
-#### Control access with NSG rules
+#### Control access with Network Security Group rules
-If you control VM connectivity by filtering network traffic to and from Azure networks/subnets using [NSG rules](../virtual-network/network-security-groups-overview.md), note the following requirements:
+If you control VM connectivity by filtering network traffic to and from Azure networks/subnets using [Network Security Group rules](../virtual-network/network-security-groups-overview.md), note the following requirements:
-- NSG rules for the source Azure region should allow outbound access for replication traffic.
+- Network Security Group rules for the source Azure region should allow outbound access for replication traffic.
- We recommend you create rules in a test environment before you put them into production. - Use [service tags](../virtual-network/network-security-groups-overview.md#service-tags) instead of allowing individual IP addresses. - Service tags represent a group of IP address prefixes gathered together to minimize complexity when creating security rules. - Microsoft automatically updates service tags over time.
-Learn more about [outbound connectivity](azure-to-azure-about-networking.md#outbound-connectivity-using-service-tags) for Site Recovery, and [controlling connectivity with NSGs](concepts-network-security-group-with-site-recovery.md).
+Learn more about [outbound connectivity](azure-to-azure-about-networking.md#outbound-connectivity-using-service-tags) for Site Recovery, and [controlling connectivity with Network Security Groups](concepts-network-security-group-with-site-recovery.md).
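+
+For example, an outbound rule that uses a service tag can be created with the Azure CLI. This is a sketch with placeholder resource names; adjust the priority, port, and tag to your environment.
+```
+az network nsg rule create \
+  --resource-group <source-resource-group> \
+  --nsg-name <source-nsg-name> \
+  --name AllowSiteRecoveryOutbound \
+  --priority 200 \
+  --direction Outbound \
+  --access Allow \
+  --protocol Tcp \
+  --destination-port-ranges 443 \
+  --destination-address-prefixes AzureSiteRecovery
+```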
### Connectivity for multi-VM consistency If you enable multi-VM consistency, machines in the replication group communicate with each other over port 20004.-- Ensure that there is no firewall appliance blocking the internal communication between the VMs over port 20004.
+- Ensure that there's no firewall appliance blocking the internal communication between the VMs over port 20004.
- If you want Linux VMs to be part of a replication group, ensure the outbound traffic on port 20004 is manually opened as per the guidance of the specific Linux version.
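A quick way to verify that no firewall appliance blocks the internal communication is to test port 20004 between two Windows members of the replication group. A minimal sketch, assuming a peer VM named `vm-app-02`:

```PowerShell
# Test TCP reachability of the multi-VM consistency port from one group
# member to another; TcpTestSucceeded should report True.
Test-NetConnection -ComputerName 'vm-app-02' -Port 20004 |
    Select-Object ComputerName, RemotePort, TcpTestSucceeded
```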
When you initiate a failover, the VMs are created in the target resource group,
## Next steps
-[Quickly replicate](azure-to-azure-quickstart.md) an Azure VM to a secondary region.
+- [Quickly replicate](azure-to-azure-quickstart.md) an Azure VM to a secondary region.
site-recovery Azure To Azure Common Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-common-questions.md
description: This article answers common questions about Azure VM disaster recov
Previously updated : 10/06/2023 Last updated : 02/27/2024
Yes, you can replicate VMs in availability zones to another Azure region.
### Can I replicate non-zone VMs to a zone within the same region?
-This isn't supported in the portal. You can use the REST API/PowerShell to do this.
+This isn't supported.
### Can I replicate zoned VMs to a different zone in the same region?
site-recovery Azure To Azure Enable Replication Added Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-enable-replication-added-disk.md
After the enable replication job runs and the initial replication finishes, the
## Next steps
-[Learn more](site-recovery-test-failover-to-azure.md) about running a test failover.
+- [Learn more](site-recovery-test-failover-to-azure.md) about running a test failover.
site-recovery Azure To Azure Exclude Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-exclude-disks.md
After initial replication finishes, replication moves on to the differential-syn
## Next steps
-Learn about [running a test failover](site-recovery-test-failover-to-azure.md).
+- Learn about [running a test failover](site-recovery-test-failover-to-azure.md).
site-recovery Azure To Azure How To Enable Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-policy.md
If the VMs show up as noncompliant, it might be because policy evaluation happen
## Next steps
-[Learn more](site-recovery-test-failover-to-azure.md) about running a test failover.
+- [Learn more](site-recovery-test-failover-to-azure.md) about running a test failover.
site-recovery Azure To Azure How To Enable Replication Ade Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-replication-ade-vms.md
Permission required on [target Key vault](#required-user-permissions)
## Next steps
-[Learn more](site-recovery-test-failover-to-azure.md) about running a test failover.
+- [Learn more](site-recovery-test-failover-to-azure.md) about running a test failover.
site-recovery Azure To Azure How To Enable Replication S2d Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-replication-s2d-vms.md
Title: Replicate Azure VMs running Storage Spaces Direct with Azure Site Recovery
-description: Learn how to replicate Azure VMs running Storage Spaces Direct using Azure Site Recovery.
+ Title: Replicate Azure virtual machines running Storage Spaces Direct with Azure Site Recovery
+description: Learn how to replicate Azure virtual machines running Storage Spaces Direct using Azure Site Recovery.
Previously updated : 01/29/2019 Last updated : 02/19/2024
-# Replicate Azure VMs running Storage Spaces Direct to another region
+# Replicate Azure virtual machines running Storage Spaces Direct to another region
-This article describes how to enable disaster recovery of Azure VMs running storage spaces direct.
+This article describes how to enable disaster recovery of Azure virtual machines running storage spaces direct.
>[!NOTE] >Only crash consistent recovery points are supported for storage spaces direct clusters.
->
-[Storage spaces direct (S2D)](/windows-server/storage/storage-spaces/deploy-storage-spaces-direct) is software-defined storage, which provides a way to create [guest clusters](https://techcommunity.microsoft.com/t5/failover-clustering/bg-p/FailoverClustering) on Azure. A guest cluster in Microsoft Azure is a failover cluster comprised of IaaS VMs. It allows hosted VM workloads to fail over across guest clusters, achieving higher availability SLA for applications, than a single Azure VM can provide. It is useful in scenarios where a VM hosts a critical application like SQL or scale-out file server.
+
+[Storage spaces direct (S2D)](/windows-server/storage/storage-spaces/deploy-storage-spaces-direct) is software-defined storage, which provides a way to create [guest clusters](https://techcommunity.microsoft.com/t5/failover-clustering/bg-p/FailoverClustering) on Azure. A guest cluster in Microsoft Azure is a failover cluster comprised of IaaS virtual machines. It allows hosted virtual machine workloads to fail over across guest clusters, achieving higher availability SLA for applications than a single Azure virtual machine can provide. It is useful in scenarios where a virtual machine hosts a critical application like SQL or scale-out file server.
## Disaster recovery with storage spaces direct In a typical scenario, you might have a guest cluster of virtual machines on Azure for higher resiliency of an application like Scale-Out File Server. While this can give your application higher availability, you might want to protect these applications using Site Recovery against any region-level failure. Site Recovery replicates the data from one Azure region to another and brings up the cluster in the disaster recovery region in the event of a failover.
-Below diagram shows a two-node Azure VM failover cluster using storage spaces direct.
+The following diagram shows a two-node Azure virtual machine failover cluster using storage spaces direct.
-![storagespacesdirect](./media/azure-to-azure-how-to-enable-replication-s2d-vms/storagespacedirect.png)
+![Screenshot of storage spaces.](./media/azure-to-azure-how-to-enable-replication-s2d-vms/storagespacedirect.png)
- Two Azure virtual machines in a Windows Failover Cluster, and each virtual machine has two or more data disks.
Below diagram shows a two-node Azure VM failover cluster using storage spaces di
**Disaster Recovery Considerations** 1. When you are setting up [cloud witness](/windows-server/failover-clustering/deploy-cloud-witness#CloudWitnessSetUp) for the cluster, keep witness in the Disaster Recovery region.
-2. If you are going to fail over the virtual machines to the subnet on the DR region which is different from the source region then cluster IP address needs to be change after failover. To change IP of the cluster you need to use the Site Recovery [recovery plan script.](./site-recovery-runbook-automation.md)</br>
-[Sample script](https://github.com/krnese/azure-quickstart-templates/blob/master/asr-automation-recovery/scripts/ASR-Wordpress-ChangeMysqlConfig.ps1) to execute command inside VM using custom script extension
+2. If you're going to fail over the virtual machines to a subnet in the disaster recovery region that's different from the source region, then the cluster IP address needs to be changed after failover. To change the IP of the cluster, use the Site Recovery [recovery plan script](./site-recovery-runbook-automation.md).</br>
+[Sample script](https://github.com/krnese/azure-quickstart-templates/blob/master/asr-automation-recovery/scripts/ASR-Wordpress-ChangeMysqlConfig.ps1) to execute a command inside the virtual machine using the custom script extension.
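For illustration only, the IP-change step inside the failed-over node might look like the following sketch (run, for example, via the custom script extension referenced above). The resource name, address, and mask are assumptions; adjust them to your cluster and DR subnet:

```PowerShell
# Point the cluster IP address resource at an address that's valid in the
# DR subnet, then cycle the resource so the change takes effect.
Import-Module FailoverClusters
$ipResource = Get-ClusterResource -Name 'Cluster IP Address'
$ipResource | Set-ClusterParameter -Multiple @{
    Address    = '10.1.0.10'       # assumed IP in the DR subnet
    SubnetMask = '255.255.255.0'   # assumed DR subnet mask
}
Stop-ClusterResource -Name 'Cluster IP Address'
Start-ClusterResource -Name 'Cluster IP Address'
```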
### Enabling Site Recovery for S2D cluster:
-1. Inside the recovery services vault, click ΓÇ£+replicateΓÇ¥
+1. Inside the recovery services vault, select **+replicate**
1. Select all the nodes in the cluster and make them part of a [Multi-VM consistency group](./azure-to-azure-common-questions.md#multi-vm-consistency) 1. Select a replication policy with application consistency off (only crash-consistent support is available) 1. Enable the replication
Below diagram shows a two-node Azure VM failover cluster using storage spaces di
![Screenshot that shows the virtual machines are protected and a part of a multi-VM consistency group.](./media/azure-to-azure-how-to-enable-replication-s2d-vms/storagespacesdirectgroup.PNG) ## Creating a recovery plan+ A recovery plan supports the sequencing of various tiers in a multi-tier application during a failover. Sequencing helps maintain application consistency. When you create a recovery plan for a multi-tier web application, complete the steps described in [Create a recovery plan by using Site Recovery](site-recovery-create-recovery-plans.md). ### Adding virtual machines to failover groups 1. Create a recovery plan by adding the virtual machines.
-2. Click on 'Customize' to group the VMs. By default, all VMs are part of 'Group 1'.
+2. Select **Customize** to group the virtual machines. By default, all virtual machines are part of `Group 1`.
### Add scripts to the recovery plan+ For your applications to function correctly, you might need to do some operations on the Azure virtual machines after the failover or during a test failover. You can automate some post-failover operations. For example, here we attach a load balancer and change the cluster IP. ### Failover of the virtual machines
-Both the nodes of the VMs need to be fail over using the Site Recovery [recovery plan](./site-recovery-create-recovery-plans.md)
-![storagespacesdirect protection](./media/azure-to-azure-how-to-enable-replication-s2d-vms/recoveryplan.PNG)
+Both the nodes of the virtual machines need to be failed over using the Site Recovery [recovery plan](./site-recovery-create-recovery-plans.md).
+
+![Screenshot showing storagespacesdirect protection.](./media/azure-to-azure-how-to-enable-replication-s2d-vms/recoveryplan.PNG)
## Run a test failover 1. In the Azure portal, select your Recovery Services vault.
For more information, see [Test failover to Azure in Site Recovery](site-recover
4. To start the failover process, select the recovery point. For more information, see [Failover in Site Recovery](site-recovery-failover.md).+ ## Next steps
-[Learn more](./azure-to-azure-tutorial-failover-failback.md) about running failback.
+- [Learn more](./azure-to-azure-tutorial-failover-failback.md) about running failback.
+++
site-recovery Azure To Azure How To Enable Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-replication.md
After the enable replication job runs, and the initial replication finishes, the
## Next steps
-[Learn more](site-recovery-test-failover-to-azure.md) about running a test failover.
+- [Learn more](site-recovery-test-failover-to-azure.md) about running a test failover.
site-recovery Azure To Azure How To Enable Zone To Zone Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-zone-to-zone-disaster-recovery.md
Previously updated : 02/20/2024 Last updated : 02/19/2024
No. You must fail over to a different resource group.
The steps that you follow to run a disaster recovery drill, fail over, reprotect, and failback are the same as the steps in an Azure-to-Azure disaster recovery scenario.
-To perform a disaster recovery drill, follow the steps outlined in [Tutorial: Run a disaster recovery drill for Azure VMs](./azure-to-azure-tutorial-dr-drill.md).
+- To perform a disaster recovery drill, follow the steps outlined in [Tutorial: Run a disaster recovery drill for Azure VMs](./azure-to-azure-tutorial-dr-drill.md).
-To perform a failover and reprotect VMs in the secondary zone, follow the steps outlined in [Tutorial: Fail over Azure VMs to a secondary region](./azure-to-azure-tutorial-failover-failback.md).
+- To perform a failover and reprotect VMs in the secondary zone, follow the steps outlined in [Tutorial: Fail over Azure VMs to a secondary region](./azure-to-azure-tutorial-failover-failback.md).
-To fail back to the primary zone, follow the steps outlined [Tutorial: Fail back Azure VMs to the primary region](./azure-to-azure-tutorial-failback.md).
+- To fail back to the primary zone, follow the steps outlined in [Tutorial: Fail back Azure VMs to the primary region](./azure-to-azure-tutorial-failback.md).
site-recovery Azure To Azure How To Reprotect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-reprotect.md
Previously updated : 01/31/2024 Last updated : 02/27/2024
By default, the following occurs:
In most cases, Azure Site Recovery doesn't replicate the complete data to the source region. The amount of data replicated depends on the following conditions:
-1. If the source VM data is deleted, corrupted, or inaccessible for some reason, such as a resource group change or delete, a complete initial replication will happen during reprotection because there's no data available on the source region to use. In this case, the reprotection time taken will be at least as long as the initial replication time taken from the primary to the disaster recovery location.
+1. Azure Site Recovery doesn't support reprotection if the source virtual machine's data is deleted, corrupted, or inaccessible for some reason, such as a resource group change or deletion. In this case, you can instead disable the previous disaster recovery protection and enable a new protection from the current region.
2. If the source VM data is accessible, then differentials are computed by comparing both the disks and only the differences are transferred. In this case, the **reprotection time** is greater than or equal to the `checksum calculation time + checksum differentials transfer time + time taken to process the recovery points from Azure Site Recovery agent + auto scale time`.
However, when the VM is re-protected again from the primary region to disaster r
## Next steps
-After the VM is protected, you can initiate a failover. The failover shuts down the VM in the secondary region and creates and boots the VM in the primary region, with brief downtime during this process. We recommend you choose an appropriate time for this process and that you run a test failover before initiating a full failover to the primary site. [Learn more](site-recovery-failover.md) about Azure Site Recovery failover.
+After the VM is protected, you can initiate a failover. The failover shuts down the VM in the secondary region and creates and boots the VM in the primary region, with brief downtime during this process. We recommend you choose an appropriate time for this process and that you run a test failover before initiating a full failover to the primary site.
+
+[Learn more](site-recovery-failover.md) about Azure Site Recovery failover.
site-recovery Azure To Azure Move Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-move-overview.md
description: Using Azure Site Recovery to move Azure VMs from one Azure region t
Previously updated : 12/14/2023 Last updated : 02/19/2024
Based on the [architectures](#typical-architectures-for-a-multi-tier-deployment)
## Next steps
-> [!div class="nextstepaction"]
->
-> * [Move Azure VMs to another region](azure-to-azure-tutorial-migrate.md)
->
-> * [Move Azure VMs into Availability Zones](move-azure-vms-avset-azone.md)
+- [Move Azure VMs to another region](azure-to-azure-tutorial-migrate.md)
+- [Move Azure VMs into Availability Zones](move-azure-vms-avset-azone.md)
+
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md
SAS key rotation | Not Supported | If the SAS key for storage accounts is rotate
Host Caching | Supported Hot add | Supported | Enabling replication for a data disk that you add to a replicated Azure VM is supported for VMs that use managed disks. <br/><br/> Only one disk can be hot added to an Azure VM at a time. Parallel addition of multiple disks isn't supported. | Hot remove disk | Not supported | If you remove data disk on the VM, you need to disable replication and enable replication again for the VM.
-Exclude disk | Support. You must use [PowerShell](azure-to-azure-exclude-disks.md) to configure. | Temporary disks are excluded by default.
+Exclude disk | Supported. You can use [PowerShell](azure-to-azure-exclude-disks.md) (a sketch follows this table) or the **Advanced Setting** > **Storage Settings** > **Disk to Replicate** option in the portal. | Temporary disks are excluded by default.
Storage Spaces Direct | Supported for crash consistent recovery points. Application consistent recovery points aren't supported. | Scale-out File Server | Supported for crash consistent recovery points. Application consistent recovery points aren't supported. | DRBD | Disks that are part of a DRBD setup aren't supported. |
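As a hedged sketch of the PowerShell route mentioned above: when enabling replication, you build a disk replication config only for the disks you want to replicate, so any disk you omit is effectively excluded. The disk ID and the `$cacheStorageAccountId`/`$targetResourceGroupId` variables below are placeholder assumptions:

```PowerShell
# Build one replication config per disk to replicate; disks not listed here
# are excluded. The variables are assumed to hold the cache storage account
# and target resource group resource IDs.
$disksToReplicate = @(
    '/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-src/providers/Microsoft.Compute/disks/datadisk1'
)
$diskConfigs = foreach ($diskId in $disksToReplicate) {
    New-AzRecoveryServicesAsrAzureToAzureDiskReplicationConfig -ManagedDisk `
        -DiskId $diskId `
        -LogStorageAccountId $cacheStorageAccountId `
        -RecoveryResourceGroupId $targetResourceGroupId `
        -RecoveryReplicaDiskAccountType Premium_LRS `
        -RecoveryTargetDiskAccountType Premium_LRS
}
```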
site-recovery Failover Failback Overview Modernized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/failover-failback-overview-modernized.md
Title: About failover and failback in Azure Site Recovery - Modernized
-description: Learn about failover and failback in Azure Site Recovery - Modernized
+description: Learn about failover and failback in Azure Site Recovery - Modernized.
Previously updated : 12/04/2023 Last updated : 02/13/2024
To connect to the Azure VMs created after failover using RDP/SSH, there are seve
**Failover** | **Location** | **Actions** | | **Azure VM running Windows** | On the on-premises machine before failover | **Access over the internet**: Enable RDP. Make sure that TCP and UDP rules are added for **Public**, and that RDP is allowed for all profiles in **Windows Firewall** > **Allowed Apps**.<br/><br/> **Access over site-to-site VPN**: Enable RDP on the machine. Check that RDP is allowed in the **Windows Firewall** -> **Allowed apps and features**, for **Domain and Private** networks.<br/><br/> Make sure the operating system SAN policy is set to **OnlineAll**. [Learn more](https://support.microsoft.com/kb/3031135).<br/><br/> Make sure there are no Windows updates pending on the VM when you trigger a failover. Windows Update might start when you failover, and you won't be able to log onto the VM until updates are done.
-**Azure VM running Windows** | On the Azure VM after failover | [Add a public IP address](/archive/blogs/srinathv/how-to-add-a-public-ip-address-to-azure-vm-for-vm-failed-over-using-asr) for the VM.<br/><br/> The network security group rules on the failed over VM (and the Azure subnet to which it is connected) must allow incoming connections to the RDP port.<br/><br/> Check **Boot diagnostics** to verify a screenshot of the VM. If you can't connect, check that the VM is running, and review [troubleshooting tips](https://social.technet.microsoft.com/wiki/contents/articles/31666.troubleshooting-remote-desktop-connection-after-failover-using-asr.aspx).
+**Azure VM running Windows** | On the Azure VM after failover | [Add a public IP address](/archive/blogs/srinathv/how-to-add-a-public-ip-address-to-azure-vm-for-vm-failed-over-using-asr) for the VM.<br/><br/> The network security group rules on the failed over VM (and the Azure subnet to which it's connected) must allow incoming connections to the RDP port.<br/><br/> Check **Boot diagnostics** to verify a screenshot of the VM. If you can't connect, check that the VM is running, and review [troubleshooting tips](https://social.technet.microsoft.com/wiki/contents/articles/31666.troubleshooting-remote-desktop-connection-after-failover-using-asr.aspx).
**Azure VM running Linux** | On the on-premises machine before failover | Ensure that the Secure Shell service on the VM is set to start automatically on system boot.<br/><br/> Check that firewall rules allow an SSH connection to it.
-**Azure VM running Linux** | On the Azure VM after failover | The network security group rules on the failed over VM (and the Azure subnet to which it is connected) need to allow incoming connections to the SSH port.<br/><br/> [Add a public IP address](/archive/blogs/srinathv/how-to-add-a-public-ip-address-to-azure-vm-for-vm-failed-over-using-asr) for the VM.<br/><br/> Check **Boot diagnostics** for a screenshot of the VM.<br/><br/>
+**Azure VM running Linux** | On the Azure VM after failover | The network security group rules on the failed over VM (and the Azure subnet to which it's connected) need to allow incoming connections to the SSH port.<br/><br/> [Add a public IP address](/archive/blogs/srinathv/how-to-add-a-public-ip-address-to-azure-vm-for-vm-failed-over-using-asr) for the VM.<br/><br/> Check **Boot diagnostics** for a screenshot of the VM.<br/><br/>
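As the table above notes, internet access to a failed-over VM requires adding a public IP address. A minimal sketch with assumed resource names:

```PowerShell
# Create a public IP and attach it to the failed-over VM's network interface.
$pip = New-AzPublicIpAddress -Name 'pip-failedover-vm' -ResourceGroupName 'rg-dr' `
    -Location 'eastus2' -Sku Standard -AllocationMethod Static
$nic = Get-AzNetworkInterface -Name 'nic-failedover-vm' -ResourceGroupName 'rg-dr'
$nic.IpConfigurations[0].PublicIpAddress = $pip
Set-AzNetworkInterface -NetworkInterface $nic
```

The network security group rules on the NIC or subnet must still allow the RDP or SSH port inbound, as described in the table.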
## Types of failover
Site Recovery provides different failover options.
**Planned failover-Hyper-V** | Used for planned downtime.<br/><br/> Source VMs are shut down. The latest data is synchronized before initiating the failover. | Zero data loss for the planned workflow. | 1. Plan a downtime maintenance window and notify users.<br/><br/> 2. Take user-facing apps offline.<br/><br/> 3. Initiate a planned failover with the latest recovery point. The failover doesn't run if the machine isn't shut down, or if errors are encountered.<br/><br/> 4. After the failover, check that the replica Azure VM is active in Azure.<br/><br/> 5. Commit the failover to finish up. The commit action deletes all recovery points. **Failover-Hyper-V** | Usually run if there's an unplanned outage, or the primary site isn't available.<br/><br/> Optionally shut down the VM and synchronize final changes before initiating the failover. | Minimal data loss for apps. | 1. Initiate your BCDR plan. <br/><br/> 2. Initiate a failover. Specify whether Site Recovery should shut down the VM and synchronize/replicate the latest changes before triggering the failover.<br/><br/> 3. You can failover to many recovery point options, summarized [here](#recovery-point-options).<br/><br/> If you don't enable the option to shut down the VM, or if Site Recovery can't shut it down, the latest recovery point is used.<br/>The failover runs even if the machine can't be shut down.<br/><br/> 4. After failover, you check that the replica Azure VM is active in Azure.<br/> If necessary, you can select a different recovery point from the retention window of 24 hours.<br/><br/> 5. Commit the failover to finish up. The commit action deletes all available recovery points. **Failover-VMware** | Usually run if there's an unplanned outage, or the primary site isn't available.<br/><br/> Optionally specify that Site Recovery should try to trigger a shutdown of the VM, and to synchronize and replicate final changes before initiating the failover. | Minimal data loss for apps. | 1. Initiate your BCDR plan. <br/><br/> 2. Initiate a failover from Site Recovery. Specify whether Site Recovery should try to trigger VM shutdown and synchronize before running the failover.<br/> The failover runs even if the machines can't be shut down.<br/><br/> 3. After the failover, check that the replica Azure VM is active in Azure. <br/>If necessary, you can select a different recovery point from the retention window of 72 hours.<br/><br/> 5. Commit the failover to finish up. The commit action deletes all recovery points.<br/> For Windows VMs, Site Recovery disables the VMware tools during failover.
-**Planned failover-VMware** | You can perform a planned failover from Azure to on-premises. | Since it is a planned failover activity, the recovery point is generated after the planned failover job is triggered. | When the planned failover is triggered, pending changes are copied to on-premises, a latest recovery point of the VM is generated and Azure VM is shut down.<br/><br/> Follow the failover process as discussed [here](vmware-azure-tutorial-failover-failback-modernized.md#planned-failover-from-azure-to-on-premises). Post this, on-premises machine is turned on. After a successful planned failover, the machine will be active in your on-premises environment.
+**Planned failover-VMware** | You can perform a planned failover from Azure to on-premises. | Since it's a planned failover activity, the recovery point is generated after the planned failover job is triggered. | When the planned failover is triggered, pending changes are copied to on-premises, a latest recovery point of the VM is generated, and the Azure VM is shut down.<br/><br/> Follow the failover process as discussed [here](vmware-azure-tutorial-failover-failback-modernized.md#planned-failover-from-azure-to-on-premises). After this, the on-premises machine is turned on. After a successful planned failover, the machine is active in your on-premises environment.
## Failover processing
After failover to Azure, the replicated Azure VMs are in an unprotected state.
- As a first step to failing back to your on-premises site, you need to start the Azure VMs replicating to on-premises. The reprotection process depends on the type of machines you failed over. - After machines are replicating from Azure to on-premises, you can run a failover from Azure to your on-premises site. - After machines are running on-premises again, you can enable replication so that they replicate to Azure for disaster recovery.-- Only disks replicated from on-premises to Azure are replicated back from Azure during re-protect operation. Newly added disks to failed over Azure VM will not be replicated to on-premises machine.
+- Only disks replicated from on-premises to Azure are replicated back from Azure during the reprotect operation. Disks newly added to the failed-over Azure VM won't be replicated to the on-premises machine.
- An appliance can have up to 60 disks attached to it. If the VMs being failed back have more than a collective total of 60 disks, or if you're failing back large volumes of traffic, create a separate appliance for failback. **Planned failover works as follows**: - To fail back to on-premises, a VM needs at least one recovery point in order to fail back. In a recovery plan, all VMs in the plan need at least one recovery point.-- As this is a planned failover activity, you are allowed to select the type of recovery point you want to fail back to. We recommend that you use a crash-consistent point.
+- As this is a planned failover activity, you're allowed to select the type of recovery point you want to fail back to. We recommend that you use a crash-consistent point.
- There is also an app-consistent recovery point option. In this case, a single VM recovers to its latest available app-consistent recovery point. For a recovery plan with a replication group, each replication group recovers to its common available recovery point. - App-consistent recovery points can be behind in time, and there might be loss in data. - During failover from Azure to the on-premises site, Site Recovery shuts down the Azure VMs. When you commit the failover, Site Recovery removes the failed back Azure VMs in Azure.
+> [!NOTE]
+> The failover VM boot may take longer on Windows Server 2012 or older versions when using crash consistent recovery points.
## VMware/physical reprotection/failback
To reprotect and fail back VMware machines and physical servers from Azure to on
**Appliance selection** -- You can select any of the Azure Site Recovery replication appliances registered under a vault to re-protect to on-premises. You do not require a separate Process server in Azure for re-protect operation and a scale-out Master Target server for Linux VMs.-- Replication appliance doesnΓÇÖt require additional network connection/ports (as compared with forward protection) during failback. Same appliance can be used for forward and backward protections if it is in healthy state. It should not impact the performance of the replications.
+- You can select any of the Azure Site Recovery replication appliances registered under a vault to reprotect to on-premises. You don't require a separate Process server in Azure for the reprotect operation, or a scale-out Master Target server for Linux VMs.
+- The replication appliance doesn't require another network connection/ports (as compared with forward protection) during failback. The same appliance can be used for forward and backward protection if it's in a healthy state. It shouldn't impact the performance of the replications.
- When selecting the appliance, ensure that the target datastore where the source machine is located, is accessible by the appliance. The datastore of the source machine should always be accessible by the appliance. Even if the machine and appliance are located in different ESX servers, as long as the data store is shared between them, reprotection succeeds. > [!NOTE]
- > - Storage vMotion of replicated items is not supported. Storage vMotion of replication appliance is not supported after re-protect operation.
+ > - Storage vMotion of replicated items is not supported. Storage vMotion of replication appliance is not supported after reprotect operation.
> - When selecting the appliance, ensure that the target datastore where the source machine is located, is accessible by the appliance.
-**Re-protect job**
+**Reprotect job**
-- If this is a new re-protect operation, then by default, a new log storage account is automatically created by Azure Site Recovery in the target region. Retention disk is not required.-- In case of Alternate Location Recovery and Original Location Recovery, the original configurations of source machines are retrieved.
+- If this is a new reprotect operation, then by default, a new log storage account is automatically created by Azure Site Recovery in the target region. Retention disk is not required.
+- In Alternate Location Recovery and Original Location Recovery, the original configurations of source machines are retrieved.
> [!NOTE]
- > - Static IP address canΓÇÖt be retained in case of Alternate location re-protect (ALR) or Original location re-protect (OLR).
+ > - Static IP address can't be retained in case of Alternate location reprotect (ALR) or Original location reprotect (OLR).
> - fstab, LVMconf would be changed. **Failure** -- Any failed re-protect job can be retried. During retry, you can choose any healthy replication appliance.
+- Any failed reprotect job can be retried. During retry, you can choose any healthy replication appliance.
-When you reprotect Azure machines to on-premises, you are notified that you are failing back to the original location, or to an alternate location.
+When you reprotect Azure machines to on-premises, you're notified that you're failing back to the original location, or to an alternate location.
- **Original location recovery**: This fails back from Azure to the same source on-premises machine if it exists. In this scenario, only changes are replicated back to on-premises. - **Data store selection during OLR**: The data store attached to the source machine is automatically selected. - **Alternate location recovery**: If the on-premises machine doesn't exist, you can fail back from Azure to an alternate location. When you reprotect the Azure VM to on-premises, the on-premises machine is created. Full data replication occurs from Azure to on-premises. [Review](concepts-types-of-failback.md) the requirements and limitations for location failback.
- - **Data store selection during ALR**: Any data store managed by vCenter on which the appliance is situated and is accessible (read and write permissions) by the appliance can be chosen (original/new). You can choose cache storage account used for re-protection.
+ - **Data store selection during ALR**: Any data store that's managed by the vCenter where the appliance is situated and that the appliance can access (read and write permissions) can be chosen (original or new). You can choose the cache storage account used for reprotection.
- After failover is complete, mobility agent in the Azure VM is registered with Site Recovery Services automatically. If registration fails, a critical health issue will be raised on the failed over VM. After issue is resolved, registration is automatically triggered. You can manually complete the registration after resolving the errors.
Once you have initiated the planned failover and it completes successfully, your
> [Planned failover (modernized)](vmware-azure-tutorial-failover-failback-modernized.md#planned-failover-from-azure-to-on-premises) > [!div class="nextstepaction"]
-> [Re-protect (modernized)](vmware-azure-tutorial-failover-failback-modernized.md#re-protect-the-on-premises-machine-to-azure-after-successful-planned-failover)
+> [Reprotect (modernized)](vmware-azure-tutorial-failover-failback-modernized.md#re-protect-the-on-premises-machine-to-azure-after-successful-planned-failover)
> [!div class="nextstepaction"] > [Cancel failover (modernized)](vmware-azure-tutorial-failover-failback-modernized.md#cancel-planned-failover)
site-recovery Vmware Physical Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-azure-support-matrix.md
Physical servers with the HP CCISS storage controller | Not supported.
Device/Mount point naming convention | Device name or mount point name should be unique.<br/> Ensure that no two devices/mount points have case-sensitive names. For example, naming devices for the same VM as *device1* and *Device1* isn't supported. Directories | If you're running a version of the Mobility service earlier than version 9.20 (released in [Update Rollup 31](https://support.microsoft.com/help/4478871/)), then these restrictions apply:<br/><br/> - These directories (if set up as separate partitions/file-systems) must be on the same OS disk on the source server: /(root), /boot, /usr, /usr/local, /var, /etc.</br> - The /boot directory should be on a disk partition and not be an LVM volume.<br/><br/> From version 9.20 onwards, these restrictions don't apply. Boot directory | - Boot disks with GPT partition format are supported. GPT disks are also supported as data disks.<br/><br/> Multiple boot disks on a VM aren't supported.<br/><br/> - /boot on an LVM volume across more than one disk isn't supported.<br/> - A machine without a boot disk can't be replicated.
-Free space requirements| 2 GB on the /(root) partition <br/><br/> 250 MB on the installation folder
+Free space requirements| 2 GB on the /(root) partition <br/><br/> 600 MB on the installation folder
XFSv5 | XFSv5 features on XFS file systems, such as metadata checksum, are supported (Mobility service version 9.10 onwards).<br/> Use the xfs_info utility to check the XFS superblock for the partition. If `ftype` is set to 1, then XFSv5 features are in use. BTRFS | BTRFS is supported from [Update Rollup 34](https://support.microsoft.com/help/4490016) (version 9.22 of the Mobility service) onwards. BTRFS isn't supported if:<br/><br/> - The BTRFS file system subvolume is changed after enabling protection.</br> - The BTRFS file system is spread over multiple disks.</br> - The BTRFS file system supports RAID.
storage Map Rest Apis Transaction Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/map-rest-apis-transaction-categories.md
The price of each type appears in the [Azure Blob Storage pricing](https://azure
| [Abort Copy Blob](/rest/api/storageservices/abort-copy-blob) | Other | Other | Write | | [Delete Blob](/rest/api/storageservices/delete-blob) | Free | Free | Free | | [Undelete Blob](/rest/api/storageservices/undelete-blob) | Write | Write | Write |
-| [Set Blob Tier](/rest/api/storageservices/set-blob-tier) (tier down) | Write | Write | Write |
-| [Set Blob Tier](/rest/api/storageservices/set-blob-tier) (tier up) | Read | Read | Read |
-| [Blob Batch](/rest/api/storageservices/blob-batch) (Set Blob Tier) | Other | Other | Other |
+| [Set Blob Tier](/rest/api/storageservices/set-blob-tier) (tier down) | Write | Write | N/A |
+| [Set Blob Tier](/rest/api/storageservices/set-blob-tier) (tier up) | Read | Read | N/A |
+| [Blob Batch](/rest/api/storageservices/blob-batch) (Set Blob Tier) | Other | Other | N/A |
| [Set Immutability Policy](/rest/api/storageservices/set-blob-immutability-policy) | Other | Other | Other | | [Delete Immutability Policy](/rest/api/storageservices/delete-blob-immutability-policy) | Other | Other | Other | | [Set Legal Hold](/rest/api/storageservices/set-blob-legal-hold) | Other | Other | Other |
storage Storage Quickstart Blobs Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-python.md
From an empty directory, follow these steps to initialize the `azd` template, pr
## Run the sample code
-At this point, the resources are deployed to Azure and the code is ready to run. Follow these steps to update the name of the storage account in the code and run the sample console app:
+At this point, the resources are deployed to Azure and the code is almost ready to run. Follow these steps to install packages, update the name of the storage account in the code, and run the sample console app:
+- **Install packages**: In the local directory, install packages for the Azure Blob Storage and Azure Identity client libraries using the following command: `pip install azure-storage-blob azure-identity`
- **Update the storage account name**: In the local directory, edit the file named **blob_quickstart.py**. Find the `<storage-account-name>` placeholder and replace it with the actual name of the storage account created by the `azd up` command. Save the changes. - **Run the project**: Execute the following command to run the app: `python blob_quickstart.py`. - **Observe the output**: This app creates a test file in your local *data* folder and uploads it to a container in the storage account. The example then lists the blobs in the container and downloads the file with a new name so that you can compare the old and new files.
storage Files Smb Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-smb-protocol.md
description: Learn about file shares hosted in Azure Files using the Server Mess
Previously updated : 09/29/2023 Last updated : 02/26/2024
Azure Files exposes the following settings:
- **SMB versions**: Which versions of SMB are allowed. Supported protocol versions are SMB 3.1.1, SMB 3.0, and SMB 2.1. By default, all SMB versions are allowed, although SMB 2.1 is disallowed if "require secure transfer" is enabled, because SMB 2.1 does not support encryption in transit. - **Authentication methods**: Which SMB authentication methods are allowed. Supported authentication methods are NTLMv2 (storage account key only) and Kerberos. By default, all authentication methods are allowed. Removing NTLMv2 disallows using the storage account key to mount the Azure file share. Azure Files doesn't support using NTLM authentication for domain credentials. - **Kerberos ticket encryption**: Which encryption algorithms are allowed. Supported encryption algorithms are AES-256 (recommended) and RC4-HMAC.-- **SMB channel encryption**: Which SMB channel encryption algorithms are allowed. Supported encryption algorithms are AES-256-GCM, AES-128-GCM, and AES-128-CCM.
+- **SMB channel encryption**: Which SMB channel encryption algorithms are allowed. Supported encryption algorithms are AES-256-GCM, AES-128-GCM, and AES-128-CCM. If you select only AES-256-GCM, you'll need to tell connecting clients to use it by opening a PowerShell terminal as administrator on each client and running `Set-SmbClientConfiguration -EncryptionCiphers "AES_256_GCM" -Confirm:$false`. Using AES-256-GCM isn't supported on Windows clients older than Windows 11/Windows Server 2022.
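If you restrict the share to AES-256-GCM, a quick read-only check on a Windows 11/Windows Server 2022 client confirms which ciphers its SMB client is configured to offer:

```PowerShell
# Show the cipher suites the SMB client will negotiate, in preference order.
Get-SmbClientConfiguration | Select-Object EncryptionCiphers
```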
You can view and change the SMB security settings using the Azure portal, PowerShell, or CLI. Select the desired tab to see the steps on how to get and set the SMB security settings.
Get-AzStorageFileServiceProperty -StorageAccount $storageAccount | `
} ```
-Depending on your organization's security, performance, and compatibility requirements, you may wish to modify the SMB protocol settings. The following PowerShell command restricts your SMB file shares to only the most secure options.
+Depending on your organization's security, performance, and compatibility requirements, you might want to modify the SMB protocol settings. The following PowerShell command restricts your SMB file shares to only the most secure options.
-> [!Important]
-> Restricting SMB Azure file shares to only the most secure options may result in some clients not being able to connect if they do not meet the requirements. For example, AES-256-GCM was introduced as an option for SMB channel encryption starting in Windows Server 2022 and Windows 11. This means that older clients that do not support AES-256-GCM will not be able to connect.
+> [!IMPORTANT]
+> Restricting SMB Azure file shares to only the most secure options might result in some clients not being able to connect. For example, AES-256-GCM was introduced as an option for SMB channel encryption starting in Windows Server 2022 and Windows 11. This means that older clients that don't support AES-256-GCM won't be able to connect. If you select only AES-256-GCM, you'll need to tell Windows Server 2022 and Windows 11 clients to only use AES-256-GCM by opening a PowerShell terminal as administrator on each client and running `Set-SmbClientConfiguration -EncryptionCiphers "AES_256_GCM" -Confirm:$false`.
```PowerShell Update-AzStorageFileServiceProperty `
echo $PROTOCOLSETTINGS
Depending on your organization's security, performance, and compatibility requirements, you might wish to modify the SMB protocol settings. The following Azure CLI command restricts your SMB file shares to only the most secure options.
-> [!Important]
-> Restricting SMB Azure file shares to only the most secure options might result in some clients not being able to connect if they don't meet the requirements. For example, AES-256-GCM was introduced as an option for SMB channel encryption starting in Windows Server 2022 and Windows 11. This means that older clients that don't support AES-256-GCM won't be able to connect.
+> [!IMPORTANT]
+> Restricting SMB Azure file shares to only the most secure options might result in some clients not being able to connect. For example, AES-256-GCM was introduced as an option for SMB channel encryption starting in Windows Server 2022 and Windows 11. This means that older clients that don't support AES-256-GCM won't be able to connect. If you select only AES-256-GCM, you'll need to tell Windows Server 2022 and Windows 11 clients to only use AES-256-GCM by opening a PowerShell terminal as administrator on each client and running `Set-SmbClientConfiguration -EncryptionCiphers "AES_256_GCM" -Confirm:$false`.
```azurecli az storage account file-service-properties update \
stream-analytics Custom Deserializer Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/custom-deserializer-examples.md
Previously updated : 6/16/2021 Last updated : 02/26/2024 # Read input in any format using .NET custom deserializers (Preview)
+> [!IMPORTANT]
+> Custom .NET deserializers for Azure Stream Analytics will be retired on September 30, 2024. After that date, it won't be possible to use the feature. Transition to a [JSON, AVRO, or CSV built-in deserializer](./stream-analytics-parsing-json.md) by that date.
+ .NET custom deserializers allow your Azure Stream Analytics job to read data from formats outside of the three [built-in data formats](stream-analytics-parsing-json.md). This article explains the serialization format and the interfaces that define .NET custom deserializers for Azure Stream Analytics cloud and edge jobs. There are also example deserializers for Protocol Buffer and CSV format. ## .NET custom deserializer
stream-analytics Custom Deserializer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/custom-deserializer.md
Previously updated : 01/12/2023 Last updated : 02/26/2024 # Custom .NET deserializers for Azure Stream Analytics in Visual Studio (Preview)
+> [!IMPORTANT]
+> Custom .NET deserializers for Azure Stream Analytics will be retired on September 30, 2024. After that date, it won't be possible to use the feature. Transition to a [JSON, AVRO, or CSV built-in deserializer](./stream-analytics-parsing-json.md) by that date.
+ Azure Stream Analytics has [built-in support for three data formats](stream-analytics-parsing-json.md): JSON, CSV, and Avro. With custom .NET deserializers, you can read data from other formats such as [Protocol Buffer](https://developers.google.com/protocol-buffers/), [Bond](https://github.com/Microsoft/bond) and other user defined formats for both cloud and edge jobs. This tutorial demonstrates how to create a custom .NET deserializer for an Azure Stream Analytics cloud job using Visual Studio. To learn how to create .NET deserializers in Visual Studio Code, see [Create .NET deserializers for Azure Stream Analytics jobs in Visual Studio Code](visual-studio-code-custom-deserializer.md).
stream-analytics Stream Analytics Edge Csharp Udf Methods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-edge-csharp-udf-methods.md
Previously updated : 6/09/2021 Last updated : 02/26/2024 # Develop .NET Standard user-defined functions for Azure Stream Analytics jobs (Preview)
+> [!IMPORTANT]
+> .NET Standard user-defined functions for Azure Stream Analytics will be retired on September 30, 2024. After that date, it won't be possible to use the feature. Transition to [JavaScript user-defined functions](./stream-analytics-javascript-user-defined-functions.md) for Azure Stream Analytics.
+ Azure Stream Analytics offers a SQL-like query language for performing transformations and computations over streams of event data. There are many built-in functions, but some complex scenarios require additional flexibility. With .NET Standard user-defined functions (UDF), you can invoke your own functions written in any .NET standard language (C#, F#, etc.) to extend the Stream Analytics query language. UDFs allow you to perform complex math computations, import custom ML models using ML.NET, and use custom imputation logic for missing data. The UDF feature for Stream Analytics jobs is currently in preview and shouldn't be used in production workloads. ## Regions
stream-analytics Stream Analytics Edge Csharp Udf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-edge-csharp-udf.md
Previously updated : 03/29/2023 Last updated : 02/26/2024 # Tutorial: Write a C# user-defined function for Azure Stream Analytics job (Preview)
+> [!IMPORTANT]
+> .NET Standard user-defined functions for Azure Stream Analytics will be retired on September 30, 2024. After that date, it won't be possible to use the feature. Transition to [JavaScript user-defined functions](./stream-analytics-javascript-user-defined-functions.md) for Azure Stream Analytics.
+ C# user-defined functions (UDFs) created in Visual Studio allow you to extend the Azure Stream Analytics query language with your own functions. You can reuse existing code (including DLLs) and use mathematical or complex logic with C#. There are three ways to implement UDFs: - CodeBehind files in a Stream Analytics project
update-manager Deploy Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/deploy-updates.md
Title: Deploy updates and track results in Azure Update Manager description: This article details how to use Azure Update Manager in the Azure portal to deploy updates and view results for supported machines. Previously updated : 11/20/2023 Last updated : 02/26/2024
After your scheduled deployment starts, you can see its status on the **History*
:::image type="content" source="./media/deploy-updates/updates-history-inline.png" alt-text="Screenshot that shows update history." lightbox="./media/deploy-updates/updates-history-expanded.png":::
-**Windows update history** currently doesn't show the updates that are installed from Azure Update Management. To view a summary of the updates applied on your machines, go to **Update Manager** > **Manage** > **History**.
+Currently, the **Windows update history** for a VM doesn't show the updates that are installed from Azure Update Manager. To view a summary of the updates applied on your machines, go to **Azure Update Manager** > **Manage** > **History** in the [Azure portal](https://portal.azure.com).
> [!NOTE]
-> The **Windows update history** currently doesn't show the updates summary that are installed from Azure Update Management. To view a summary of the updates applied on your machines, go to **Update manager** > **Manage** > **History**.
+> - To view a summary of the updates applied on your machines, go to **Azure Update Manager** > **Manage** > **History** in the [Azure portal](https://portal.azure.com).
+> - Alternatively, go to **Control Panel** > **Programs** > **Programs and Features** > **Installed Updates** to view the installed CBS updates. This view only shows the history of CBS updates ([Servicing stack updates - Windows Deployment](https://learn.microsoft.com/windows/deployment/update/servicing-stack-updates)) that can be uninstalled.
A list of the deployments created are shown in the update deployment grid and include relevant information about the deployment. Every update deployment has a unique GUID, represented as **Operation ID**, which is listed along with **Status**, **Updates Installed** and **Time** details. You can filter the results listed in the grid.
update-manager Scheduled Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/scheduled-patching.md
Title: Scheduling recurring updates in Azure Update Manager description: This article details how to use Azure Update Manager to set update schedules that install recurring updates on your machines. Previously updated : 02/05/2024 Last updated : 02/26/2024
To view the current compliance state of your existing resources:
You can check the deployment status and history of your maintenance configuration runs from the Update Manager portal. For more information, see [Update deployment history by maintenance run ID](./manage-multiple-machines.md#update-deployment-history-by-maintenance-run-id). +
+## Timeline of Maintenance Window
+
+The maintenance window controls the number of updates that can be installed on your virtual machines and Arc-enabled servers. Review the following tables to understand the timeline for a maintenance window while installing an update.
+
+For example, if a maintenance window is 3 hours long and starts at 3:00 PM (so it ends at 6:00 PM), updates are installed as follows:
+
+#### [Windows](#tab/windows-maintenance)
+
+| **Update Type** | **Details** |
+| - | - |
+| Service Pack | If you're installing a Service Pack, at least 20 minutes must be left in the maintenance window when the installation is attempted, or the update is skipped. </br> In this example, the Service Pack installation must start by 5:40 PM. |
+| Other updates | If you're installing any update other than a Service Pack, at least 15 minutes must be left in the maintenance window, or the update is skipped. </br> In this example, installation of the other updates must start by 5:45 PM.|
+| Reboot | If the machine needs a reboot, at least 10 minutes must be left in the maintenance window, or the reboot is skipped. </br> In this example, the reboot must start by 5:50 PM. </br> **Note**: After triggering a reboot, Azure Update Manager waits a maximum of 15 minutes for Azure VMs and 25 minutes for Arc-enabled servers for the reboot operation to complete before marking it as failed. |
+
+#### [Linux](#tab/linux-maintenance)
+
+| **Update Type** | **Details** |
+| - | - |
+| Reboot | If the VMs need a reboot, at least 15 minutes must be left in the maintenance window, or the reboot is skipped. </br> **Note**: This is only applicable for Azure VMs and not for Arc-enabled servers. </br> In this example, the reboot must start by 5:45 PM. |
+| Updates installed in batches | If the batch size is X, the minimum time required to install the batch is calculated as follows: </br></br> - If X is less than or equal to 3, the minimum required time = 5 x X minutes. </br> - If X is greater than 3, the minimum required time = 15 + 2 x (X - 3) minutes. </br> **Note**: Only the Azure Update Manager service controls the batch size (X) of the updates. |
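+The batch formula above is easy to sanity-check in code. A small sketch (the function name is ours, not part of any Azure module):

```PowerShell
# Minimum minutes needed to install a batch of X packages, per the table.
function Get-MinimumBatchMinutes([int]$X) {
    if ($X -le 3) { 5 * $X } else { 15 + 2 * ($X - 3) }
}
Get-MinimumBatchMinutes -X 3    # 15 minutes
Get-MinimumBatchMinutes -X 10   # 15 + 2 * (10 - 3) = 29 minutes
```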
+++
+> [!NOTE]
+> - Azure Update Manager doesn't stop installing new updates if it's approaching the end of the maintenance window.
+> - Azure Update Manager doesn't terminate in-progress updates if the maintenance window is exceeded; only the remaining updates that haven't been attempted yet are skipped. We recommend that you re-evaluate the duration of your maintenance window to ensure all the updates are installed.
+> - If the maintenance window is exceeded on Windows, it's often because a service pack update is taking a long time to install.
+++ ## Next steps * Learn more about [Dynamic scope](dynamic-scope-overview.md), an advanced capability of schedule patching.
virtual-desktop App Attach Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/app-attach-overview.md
Last updated 12/08/2023
> App attach is currently in PREVIEW. > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-> [!NOTE]
-> App attach (preview) is gradually rolling out and you might not have access to it yet. If you don't have access, check back later. MSIX app attach is generally available.
- There are two features in Azure Virtual Desktop that enable you to dynamically attach applications from an application package to a user session in Azure Virtual Desktop - *MSIX app attach* and *app attach (preview)*. *MSIX app attach* is generally available, but *app attach* is now available in preview, which improves the administrative experience and user experience. With both *MSIX app attach* and *app attach*, applications aren't installed locally on session hosts or images, making it easier to create custom images for your session hosts, and reducing operational overhead and costs for your organization. Applications run within containers, which separate user data, the operating system, and other applications, increasing security and making them easier to troubleshoot. The following table compares MSIX app attach with app attach:
The following sections provide some guidance on the permissions, performance, an
Each session host mounts application images from the file share. You need to configure NTFS and share permissions to allow each session host computer object read access to the files and file share. How you configure the correct permission depends on which storage provider and identity provider you're using for your file share and session hosts. - To use Azure Files when your session hosts joined to Microsoft Entra ID, you need to assign the [Reader and Data Access](../role-based-access-control/built-in-roles.md#reader-and-data-access) Azure role-based access control (RBAC) role to the **Azure Virtual Desktop** and **Azure Virtual Desktop ARM Provider** service principals. This RBAC role assignment allows your session hosts to access the storage account using [access keys](../storage/common/storage-account-keys-manage.md). The storage account must be in the same Azure subscription as your session hosts. To learn how to assign an Azure RBAC role to the Azure Virtual Desktop service principals, see [Assign RBAC roles to the Azure Virtual Desktop service principals](service-principal-assign-roles.md). For more information about using Azure Files with session hosts that are joined to Microsoft Entra ID, Active Directory Domain Services, or Microsoft Entra Domain Services, see [Overview of Azure Files identity-based authentication options for SMB access](../storage/files/storage-files-active-directory-overview.md). > [!WARNING] > Assigning the **Azure Virtual Desktop ARM Provider** service principal to the storage account grants the Azure Virtual Desktop service to all data inside the storage account. We recommended you only store apps to use with app attach in this storage account and rotate the access keys regularly. - For Azure Files with Active Directory Domain Services, you need to assign the [Storage File Data SMB Share Reader](../role-based-access-control/built-in-roles.md#storage-file-data-smb-share-reader) Azure role-based access control (RBAC) role as the [default share-level permission](../storage/files/storage-files-identity-ad-ds-assign-permissions.md#share-level-permissions-for-all-authenticated-identities), and [configure NTFS permissions](../storage/files/storage-files-identity-ad-ds-configure-permissions.md) to give read access to each session host's computer object. For more information about using Azure Files with session hosts that are joined to Microsoft Entra ID, Active Directory Domain Services, or Microsoft Entra Domain Services, see [Overview of Azure Files identity-based authentication options for SMB access](../storage/files/storage-files-active-directory-overview.md).+
+- For Azure Files with Active Directory Domain Services, you need to assign the [Storage File Data SMB Share Reader](../role-based-access-control/built-in-roles.md#storage-file-data-smb-share-reader) Azure role-based access control (RBAC) role as the [default share-level permission](../storage/files/storage-files-identity-ad-ds-assign-permissions.md#share-level-permissions-for-all-authenticated-identities), and [configure NTFS permissions](../storage/files/storage-files-identity-ad-ds-configure-permissions.md) to give read access to each session host's computer object.
+
+ For more information about using Azure Files with session hosts that are joined to Active Directory Domain Services or Microsoft Entra Domain Services, see [Overview of Azure Files identity-based authentication options for SMB access](../storage/files/storage-files-active-directory-overview.md).
-- For Azure NetApp Files, you can [create an SMB volume](../azure-netapp-files/azure-netapp-files-create-volumes-smb.md) and configure NTFS permissions to give read access to each session host's computer object. Your session hosts need to be joined to Active Directory Domain Services or Microsoft Entra Domain Services. Microsoft Entra ID isn't supported.
+- For Azure NetApp Files, you can [create an SMB volume](../azure-netapp-files/azure-netapp-files-create-volumes-smb.md) and configure NTFS permissions to give read access to each session host's computer object. Your session hosts need to be joined to Active Directory Domain Services or Microsoft Entra Domain Services.
You can verify the permissions are correct by using [PsExec](/sysinternals/downloads/psexec). For more information, see [Check file share access](troubleshoot-app-attach.md#check-file-share-access).
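For the Microsoft Entra ID scenario in the first bullet, a scripted sketch of the role assignment might look like the following. This is an illustration under assumptions, not the documented procedure: it uses the `azure-identity` and `azure-mgmt-authorization` Python packages, placeholder IDs you need to replace, and the GUID commonly listed for the built-in **Reader and Data Access** role, which you should verify against the built-in roles reference.

```python
# A minimal sketch (not the official procedure): assign "Reader and Data
# Access" on a storage account to the Azure Virtual Desktop service
# principals so session hosts can mount app attach images.
# Requires: pip install azure-identity azure-mgmt-authorization
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

subscription_id = "<subscription-id>"  # placeholder
storage_scope = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Storage/storageAccounts/<storage-account>"
)  # placeholder
# GUID commonly listed for the built-in "Reader and Data Access" role;
# verify it against the built-in roles reference before use.
role_definition_id = (
    f"/subscriptions/{subscription_id}/providers/Microsoft.Authorization"
    "/roleDefinitions/c12c1c16-33a1-487b-954d-41c89c60f349"
)
# Object IDs of the "Azure Virtual Desktop" and "Azure Virtual Desktop
# ARM Provider" service principals in your tenant (look these up first).
avd_principal_ids = ["<avd-sp-object-id>", "<avd-arm-provider-sp-object-id>"]

client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)
for principal_id in avd_principal_ids:
    client.role_assignments.create(
        scope=storage_scope,
        role_assignment_name=str(uuid.uuid4()),  # each assignment needs a fresh GUID
        parameters=RoleAssignmentCreateParameters(
            role_definition_id=role_definition_id,
            principal_id=principal_id,
            principal_type="ServicePrincipal",
        ),
    )
```

In practice you'd first check `client.role_assignments.list_for_scope(storage_scope)` so a rerun doesn't fail on an existing assignment.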
virtual-desktop App Attach Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/app-attach-setup.md
Last updated 12/08/2023
> [!TIP] > A new version of app attach for Azure Virtual Desktop is available in preview. Select a button at the top of this article to choose between *MSIX app attach* (current) and *app attach* (preview) to see the relevant documentation.
-> [!NOTE]
-> App attach (preview) is gradually rolling out and you might not have access to it yet. If you don't have access, check back later. MSIX app attach is generally available.
- ::: zone pivot="app-attach" App attach enables you to dynamically attach applications from an application package to a user session in Azure Virtual Desktop. Applications aren't installed locally on session hosts or images, enabling you to create fewer custom images for your session hosts, and reducing operational overhead and costs for your organization. Delivering applications with app attach also gives you greater control over which applications your users can access in a remote session. ::: zone-end
virtual-desktop Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/authentication.md
If you're using an Identity Provider (IdP) other than Microsoft Entra ID to mana
Azure Virtual Desktop currently doesn't support [external identities](../active-directory/external-identities/index.yml).
-## Service authentication
+## Authentication methods
+
+For users connecting to a remote session, there are three separate authentication points:
+
+- **Service authentication to Azure Virtual Desktop**: retrieving a list of resources the user has access to when accessing the client. The experience depends on the Microsoft Entra account configuration. For example, if the user has multifactor authentication enabled, the user is prompted for their user account and a second form of authentication, in the same way as accessing other services.
+
+- **Session host**: when starting a remote session. A username and password are required for a session host, but this process is seamless to the user if single sign-on (SSO) is enabled.
+
+- **In-session authentication**: connecting to other resources within a remote session.
+
+The following sections explain each of these authentication points in more detail.
+
+### Service authentication
To access Azure Virtual Desktop resources, you must first authenticate to the service by signing in with a Microsoft Entra account. Authentication happens whenever you subscribe to a workspace to retrieve your resources and connect to apps or desktops. You can use [third-party identity providers](../active-directory/devices/azureadjoin-plan.md#federated-environment) as long as they federate with Microsoft Entra ID. <a name='multi-factor-authentication'></a>
-### Multifactor authentication
+#### Multifactor authentication
Follow the instructions in [Enforce Microsoft Entra multifactor authentication for Azure Virtual Desktop using Conditional Access](set-up-mfa.md) to learn how to enforce Microsoft Entra multifactor authentication for your deployment. That article will also tell you how to configure how often your users are prompted to enter their credentials. When deploying Microsoft Entra joined VMs, note the extra steps for [Microsoft Entra joined session host VMs](set-up-mfa.md#azure-ad-joined-session-host-vms).
-### Passwordless authentication
+#### Passwordless authentication
You can use any authentication type supported by Microsoft Entra ID, such as [Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-overview) and other [passwordless authentication options](../active-directory/authentication/concept-authentication-passwordless.md) (for example, FIDO keys), to authenticate to the service.
-### Smart card authentication
+#### Smart card authentication
To use a smart card to authenticate to Microsoft Entra ID, you must first [configure AD FS for user certificate authentication](/windows-server/identity/ad-fs/operations/configure-user-certificate-authentication) or [configure Microsoft Entra certificate-based authentication](../active-directory/authentication/concept-certificate-based-authentication.md).
-## Session host authentication
+### Session host authentication
If you haven't already enabled [single sign-on](#single-sign-on-sso) or saved your credentials locally, you'll also need to authenticate to the session host when launching a connection. The following list describes which types of authentication each Azure Virtual Desktop client currently supports. Some clients might require a specific version to be used, which you can find in the link for each authentication type.
If you haven't already enabled [single sign-on](#single-sign-on-sso) or saved yo
>[!IMPORTANT] >In order for authentication to work properly, your local machine must also be able to access the [required URLs for Remote Desktop clients](safe-url-list.md#remote-desktop-clients).
-### Single sign-on (SSO)
+#### Single sign-on (SSO)
SSO allows the connection to skip the session host credential prompt and automatically sign the user in to Windows. For session hosts that are Microsoft Entra joined or Microsoft Entra hybrid joined, it's recommended to enable [SSO using Microsoft Entra authentication](configure-single-sign-on.md). Microsoft Entra authentication provides other benefits including passwordless authentication and support for third-party identity providers.
Azure Virtual Desktop also supports [SSO using Active Directory Federation Servi
Without SSO, the client will prompt users for their session host credentials for every connection. The only way to avoid being prompted is to save the credentials in the client. We recommend you only save credentials on secure devices to prevent other users from accessing your resources.
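For admins who script their host pool configuration, a hedged sketch of enabling the Microsoft Entra SSO behavior described above follows. It assumes the `azure-mgmt-desktopvirtualization` Python package and placeholder resource names; `enablerdsaadauth:i:1` is the custom RDP property that the linked configuration article manages.

```python
# A hedged sketch (assumptions: azure-mgmt-desktopvirtualization package,
# placeholder resource names): set the host pool custom RDP property that
# turns on SSO with Microsoft Entra ID authentication.
from azure.identity import DefaultAzureCredential
from azure.mgmt.desktopvirtualization import DesktopVirtualizationMgmtClient
from azure.mgmt.desktopvirtualization.models import HostPoolPatch

client = DesktopVirtualizationMgmtClient(
    DefaultAzureCredential(), "<subscription-id>"  # placeholder
)

# Caution: this replaces the whole semicolon-delimited property string, so
# read the current properties first and append rather than overwrite.
client.host_pools.update(
    resource_group_name="<resource-group>",
    host_pool_name="<host-pool>",
    host_pool=HostPoolPatch(custom_rdp_property="enablerdsaadauth:i:1;"),
)
```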
-### Smart card and Windows Hello for Business
+#### Smart card and Windows Hello for Business
Azure Virtual Desktop supports both NT LAN Manager (NTLM) and Kerberos for session host authentication; however, smart card and Windows Hello for Business can only use Kerberos to sign in. To use Kerberos, the client needs to get Kerberos security tickets from a Key Distribution Center (KDC) service running on a domain controller. To get tickets, the client needs a direct networking line-of-sight to the domain controller. You can get a line-of-sight by connecting directly within your corporate network, by using a VPN connection, or by setting up a [KDC Proxy server](key-distribution-center-proxy.md).
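As a quick, unofficial sanity check of that line-of-sight requirement, a short sketch like the following tests whether a client can open a TCP connection to a domain controller's Kerberos port; the host name is a hypothetical placeholder.

```python
# Check Kerberos line-of-sight: can this client reach a KDC on TCP port 88?
import socket

def can_reach_kdc(host: str, port: int = 88, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the KDC succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Hypothetical domain controller name; replace with your own.
    print(can_reach_kdc("dc01.contoso.com"))
```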
-## In-session authentication
+### In-session authentication
Once you're connected to your RemoteApp or desktop, you may be prompted for authentication inside the session. This section explains how to use credentials other than username and password in this scenario.
-### In-session passwordless authentication
+#### In-session passwordless authentication
Azure Virtual Desktop supports in-session passwordless authentication using [Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-overview) or security devices like FIDO keys when using the [Windows Desktop client](users/connect-windows.md). Passwordless authentication is enabled automatically when the session host and local PC are using the following operating systems:
When enabled, all WebAuthn requests in the session are redirected to the local P
To access Microsoft Entra resources with Windows Hello for Business or security devices, you must enable the FIDO2 Security Key as an authentication method for your users. To enable this method, follow the steps in [Enable FIDO2 security key method](../active-directory/authentication/howto-authentication-passwordless-security-key.md#enable-fido2-security-key-method).
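The linked procedure uses the Microsoft Entra admin center; as an unofficial scripted alternative, a sketch like the following updates the same FIDO2 policy object through Microsoft Graph. It assumes an access token with the Policy.ReadWrite.AuthenticationMethod permission, acquired separately.

```python
# A hedged sketch (not the linked admin-center procedure): enable the FIDO2
# security key authentication method tenant-wide via Microsoft Graph.
import requests

token = "<access-token>"  # placeholder; acquire via MSAL or azure-identity

url = (
    "https://graph.microsoft.com/v1.0/policies/authenticationMethodsPolicy"
    "/authenticationMethodConfigurations/fido2"
)
body = {
    "@odata.type": "#microsoft.graph.fido2AuthenticationMethodConfiguration",
    "state": "enabled",
}
resp = requests.patch(
    url, json=body, headers={"Authorization": f"Bearer {token}"}, timeout=30
)
resp.raise_for_status()  # expect 204 No Content on success
```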
-### In-session smart card authentication
+#### In-session smart card authentication
To use a smart card in your session, make sure you've installed the smart card drivers on the session host and enabled [smart card redirection](configure-device-redirections.md#smart-card-redirection). Review the [client comparison chart](/windows-server/remote/remote-desktop-services/clients/remote-desktop-app-compare#other-redirection-devices-etc) to make sure your client supports smart card redirection.
virtual-desktop Configure Single Sign On https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/configure-single-sign-on.md
Last updated 12/15/2023
# Configure single sign-on for Azure Virtual Desktop using Microsoft Entra ID authentication
-This article walks you through the process of configuring single sign-on (SSO) for Azure Virtual Desktop using Microsoft Entra ID authentication. When you enable single sign-on, users authenticate to Windows using a Microsoft Entra ID token. This token enables the use of passwordless authentication and third-party identity providers that federate with Microsoft Entra ID when connecting to a session host.
+This article walks you through the process of configuring single sign-on (SSO) for Azure Virtual Desktop using Microsoft Entra ID authentication. When you enable single sign-on, users authenticate to Windows using a Microsoft Entra ID token. This token enables the use of passwordless authentication and third-party identity providers that federate with Microsoft Entra ID when connecting to a session host, making the sign-in experience seamless.
Single sign-on using Microsoft Entra ID authentication also provides a seamless experience for Microsoft Entra ID-based resources inside the session. For more information on using passwordless authentication within a session, see [In-session passwordless authentication](authentication.md#in-session-passwordless-authentication).
virtual-desktop Set Up Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/set-up-mfa.md
Users can sign into Azure Virtual Desktop from anywhere using different devices and clients. However, there are certain measures you should take to help keep your environment and your users safe. Using Microsoft Entra multifactor authentication (MFA) with Azure Virtual Desktop prompts users during the sign-in process for another form of identification in addition to their username and password. You can enforce MFA for Azure Virtual Desktop using Conditional Access, and can also configure whether it applies to the web client, mobile apps, desktop clients, or all clients.
+When a user connects to a remote session, they need to authenticate to the Azure Virtual Desktop service and the session host. If MFA is enabled, it's used when connecting to the Azure Virtual Desktop service, and the user is prompted for their user account and a second form of authentication, in the same way as accessing other services. When starting a remote session, a username and password are required for a session host, but this process is seamless to the user if single sign-on (SSO) is enabled. For more information, see [Authentication methods](authentication.md#authentication-methods).
+ How often a user is prompted to reauthenticate depends on [Microsoft Entra session lifetime configuration settings](../active-directory/authentication/concepts-azure-multi-factor-authentication-prompts-session-lifetime.md#azure-ad-session-lifetime-configuration-settings). For example, if their Windows client device is registered with Microsoft Entra ID, it receives a [Primary Refresh Token](../active-directory/devices/concept-primary-refresh-token.md) (PRT) to use for single sign-on (SSO) across applications. Once issued, a PRT is valid for 14 days and is continuously renewed as long as the user actively uses the device. While remembering credentials is convenient, it can also make deployments for enterprise scenarios that use personal devices less secure. To protect your users, you can configure the client to ask for Microsoft Entra multifactor authentication credentials more frequently. You can use Conditional Access to configure this behavior.
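To make that concrete, here's a minimal sketch, under stated assumptions, of creating such a Conditional Access policy through Microsoft Graph with Python and `requests`: the access token is assumed to be acquired separately with the Policy.ReadWrite.ConditionalAccess permission, the group ID is a placeholder, and the application ID shown is the commonly documented first-party *Windows Virtual Desktop* app, which you should verify in your tenant.

```python
# A minimal sketch: create a report-only Conditional Access policy that
# enforces a sign-in frequency for Azure Virtual Desktop connections.
import requests

token = "<access-token>"  # placeholder; acquire via MSAL or azure-identity

policy = {
    "displayName": "AVD - reauthenticate every 4 hours (sketch)",
    "state": "enabledForReportingButNotEnforced",  # report-only while testing
    "conditions": {
        "clientAppTypes": ["all"],
        "users": {"includeGroups": ["<avd-users-group-id>"]},  # placeholder
        # Commonly documented "Windows Virtual Desktop" app ID; verify in your tenant.
        "applications": {
            "includeApplications": ["9cdead84-a844-4324-93f2-b2e6bb768d07"]
        },
    },
    "sessionControls": {
        "signInFrequency": {"isEnabled": True, "type": "hours", "value": 4}
    },
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    json=policy,
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["id"])  # ID of the newly created policy
```

Starting in report-only mode is deliberate: it lets you observe how often users would be reprompted before enforcing the policy.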
virtual-desktop Client Features Web https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/client-features-web.md
Title: Use features of the Remote Desktop Web client - Azure Virtual Desktop
description: Learn how to use features of the Remote Desktop Web client when connecting to Azure Virtual Desktop. Previously updated : 11/07/2023 Last updated : 02/07/2024 # Use features of the Remote Desktop Web client when connecting to Azure Virtual Desktop
+Autoscale support for Azure Stack HCI with Azure Virtual Desktop is currently in PREVIEW. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+ Once you've connected to Azure Virtual Desktop using the Remote Desktop client, it's important to know how to use the features. This article shows you how to use the features available in the Remote Desktop Web client. If you want to learn how to connect to Azure Virtual Desktop, see [Connect to Azure Virtual Desktop with the Remote Desktop Web client](connect-web.md). You can find a list of all the Remote Desktop clients at [Remote Desktop clients overview](remote-desktop-clients-overview.md). For more information about the differences between the clients, see [Compare the Remote Desktop clients](../compare-remote-desktop-clients.md). > [!NOTE]
-> Your admin can choose to override some of these settings in Azure Virtual Desktop, such as being able to copy and paste between your local device and your remote session. If some of these settings are disabled, please contact your admin.
+>* Your admin can choose to override some of these settings in Azure Virtual Desktop, such as the ability to copy and paste between your local device and your remote session. If some of these settings are disabled, contact your admin.
+>* Users now only see the new version of the Azure Virtual Desktop web client user experience.
## Display preferences
Native resolution is set to off by default. To turn on native resolution:
1. Set **Enable native display resolution** to **On**.
-### New user interface
-
-A new user interface is available by default. It is recommended to use the New Client, as the original version will be deprecated soon.
-
-To revert to the original user interface, toggle the New Client to **Off** on the top navigation bar.
- ### Grid view and list view You can change the view of remote resources assigned to you between grid view (default) and list view. To change between grid view and list view:
-1. Sign in to the Remote Desktop Web client and make sure the New Client toggle is set to **On**. Then, select **Settings** on the taskbar.
+1. Sign in to the Remote Desktop Web client and select **Settings** on the taskbar.
1. In the top-right hand corner, select the **Grid View** icon or the **List View** icon. The change will take effect immediately.
You can change the view of remote resources assigned to you between grid view (d
You can change between light mode (default) and dark mode. To change between light mode and dark mode:
-1. Sign in to the Remote Desktop Web client and make sure the New Client toggle is set to **On**. Then, select **Settings** on the taskbar.
+1. Sign in to the Remote Desktop Web client and select **Settings** on the taskbar.
1. Toggle **Dark Mode** to **On** to use dark mode, or **Off** to use light mode. The change will take effect immediately.
If you have another Remote Desktop client installed, you can download an RDP fil
If you want to reset your user settings back to the default, you can do this in the web client for the current browser. To reset user settings:
-1. Sign in to the Remote Desktop Web client and make sure you have toggled **New Client** to **On**, then select **Settings** on the taskbar.
+1. Sign in to the Remote Desktop Web client and select **Settings** on the taskbar.
1. Select **Reset user settings**. You'll need to confirm that you want to reset the web client settings to default.
virtual-desktop Connect Macos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/connect-macos.md
Title: Connect to Azure Virtual Desktop with the Remote Desktop client for macOS
description: Learn how to connect to Azure Virtual Desktop using the Remote Desktop client for macOS. Previously updated : 10/02/2023 Last updated : 02/26/2024
Before you can access your resources, you'll need to meet the prerequisites:
- Internet access. -- A device running macOS 11 or later.
+- A device running macOS 12 or later.
- Download and install the Remote Desktop client from the [Mac App Store](https://apps.apple.com/app/microsoft-remote-desktop/id1295203466?mt=12).
virtual-desktop Whats New Client Macos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-macos.md
description: Learn about recent changes to the Remote Desktop client for macOS
Previously updated : 01/19/2024 Last updated : 02/26/2024 # What's new in the Remote Desktop client for macOS
virtual-machines Mitigate Se https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/mitigate-se.md
keywords: spectre,meltdown,specter
Previously updated : 07/12/2022 Last updated : 02/26/2024
virtual-machines Ncads H100 V5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ncads-h100-v5.md
Due to increased GPU memory I/O footprint, the NC A100 v4 requires the use of [G
| Size | vCPU | Memory (GiB) | Temp Disk NVMe (GiB) | GPU | GPU Memory (GiB) | Max data disks | Max uncached disk throughput (IOPS / MBps) | Max NICs/network bandwidth (MBps) | ||||||||||
-| Standard_NC40ads_H100_v5 | 40 | 320 | 3576| 1 | 94 | 8 | 30000/1000 | 2/40,000 |
-| Standard_NC80adis_H100_v5 | 80 | 640 | 7152 | 2 | 188 | 16 | 60000/2000 | 4/80,000 |
+| Standard_NC40ads_H100_v5 | 40 | 320 | 3576| 1 | 94 | 8 | 100000/3000 | 2/40,000 |
+| Standard_NC80adis_H100_v5 | 80 | 640 | 7152 | 2 | 188 | 16 | 240000/7000 | 4/80,000 |
<sup>1</sup> 1 GPU = one H100 card <br> <sup>2</sup> Local NVMe disks are ephemeral. Data is lost on these disks if you stop/deallocate your VM. Local NVMe disks aren't encrypted by Azure Storage encryption, even if you enable encryption at host. <br>
virtual-machines Security Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/security-policy.md
description: Learn about security and policies for virtual machines in Azure.
Previously updated : 11/27/2018 Last updated : 02/26/2024
virtual-machines Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/security-recommendations.md
- Title: Security recommendations for virtual machines in Azure
-description: Apply these recommendations for VMs in Azure to help fulfill the security obligations described in the shared responsibility model and to improve the overall security of your deployments.
----- Previously updated : 11/13/2019-----
-# Security recommendations for virtual machines in Azure
-
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-
-This article contains security recommendations for Azure Virtual Machines. Follow these recommendations to help fulfill the security obligations described in our model for shared responsibility. The recommendations will also help you improve overall security for your web app solutions. For more information about what Microsoft does to fulfill service-provider responsibilities, see [Shared responsibilities for cloud computing](https://gallery.technet.microsoft.com/Shared-Responsibilities-81d0ff91).
-
-Some of this article's recommendations can be automatically addressed by Microsoft Defender for Cloud. Microsoft Defender for Cloud is the first line of defense for your resources in Azure. It periodically analyzes the security state of your Azure resources to identify potential security vulnerabilities. It then recommends how to address the vulnerabilities. For more information, see [Security recommendations in Microsoft Defender for Cloud](../security-center/security-center-recommendations.md).
-
-For general information about Microsoft Defender for Cloud, see [What is Microsoft Defender for Cloud?](../security-center/security-center-introduction.md).
-
-## General
-
-| Recommendation | Comments | Defender for Cloud |
-|-|-|--|
-| When you build custom VM images, apply the latest updates. | Before you create images, install the latest updates for the operating system and for all applications that will be part of your image. | - |
-| Keep your VMs current. | You can use the [Update Management](../automation/update-management/overview.md) solution in Azure Automation to manage operating system updates for your Windows and Linux computers in Azure. | [Yes](../security-center/asset-inventory.md) |
-| Back up your VMs. | [Azure Backup](../backup/backup-overview.md) helps protect your application data and has minimal operating costs. Application errors can corrupt your data, and human errors can introduce bugs into your applications. Azure Backup protects your VMs that run Windows and Linux. | - |
-| Use multiple VMs for greater resilience and availability. | If your VM runs applications that must be highly available, use multiple VMs or [availability sets](./availability.md). | - |
-| Adopt a business continuity and disaster recovery (BCDR) strategy. | Azure Site Recovery allows you to choose from different options designed to support business continuity. It supports different replication and failover scenarios. For more information, see [About Site Recovery](../site-recovery/site-recovery-overview.md). | - |
-
-## Data security
-
-| Recommendation | Comments | Defender for Cloud |
-|-|-|--|
-| Encrypt operating system disks. | [Azure Disk Encryption](../virtual-machines/disk-encryption-overview.md) helps you encrypt your Windows and Linux IaaS VM disks. Without the necessary keys, the contents of encrypted disks are unreadable. Disk encryption protects stored data from unauthorized access that would otherwise be possible if the disk were copied.| [Yes](../security-center/asset-inventory.md) |
-| Encrypt data disks. | [Azure Disk Encryption](../virtual-machines/disk-encryption-overview.md) helps you encrypt your Windows and Linux IaaS VM disks. Without the necessary keys, the contents of encrypted disks are unreadable. Disk encryption protects stored data from unauthorized access that would otherwise be possible if the disk were copied.| - |
-| Limit installed software. | Limit installed software to what is required to successfully apply your solution. This guideline helps reduce your solution's attack surface. | - |
-| Use antivirus or antimalware. | In Azure, you can use antimalware software from security vendors such as Microsoft, Symantec, Trend Micro, and Kaspersky. This software helps protect your VMs from malicious files, adware, and other threats. You can deploy Microsoft Antimalware based on your application workloads. Microsoft Antimalware is available for Windows machines only. Use either basic secure-by-default or advanced custom configuration. For more information, see [Microsoft Antimalware for Azure Cloud Services and Virtual Machines](../security/fundamentals/antimalware.md). | - |
-| Securely store keys and secrets. | Simplify the management of your secrets and keys by providing your application owners with a secure, centrally managed option. This management reduces the risk of an accidental compromise or leak. Azure Key Vault can securely store your keys in hardware security modules (HSMs) that are certified to FIPS 140-2 Level 2. If you need to use FIPs 140.2 Level 3 to store your keys and secrets, you can use [Azure Dedicated HSM](../dedicated-hsm/overview.md). | - |
-
-## Identity and access management
-
-| Recommendation | Comments | Defender for Cloud |
-|-|-|--|
-| Centralize VM authentication. | You can centralize the authentication of your Windows and Linux VMs by using [Microsoft Entra authentication](../active-directory/develop/authentication-vs-authorization.md). | - |
-
-## Monitoring
-
-| Recommendation | Comments | Defender for Cloud |
-|-|-|--|
-| Monitor your VMs. | You can use [Azure Monitor for VMs](../azure-monitor/vm/vminsights-overview.md) to monitor the state of your Azure VMs and virtual machine scale sets. Performance issues with a VM can lead to service disruption, which violates the security principle of availability. | - |
-
-## Networking
-
-| Recommendation | Comments | Defender for Cloud |
-|-|-|--|
-| Restrict access to management ports. | Attackers scan public cloud IP ranges for open management ports and attempt "easy" attacks like common passwords and known unpatched vulnerabilities. You can use [just-in-time (JIT) VM access](../security-center/security-center-just-in-time.md) to lock down inbound traffic to your Azure VMs, reducing exposure to attacks while providing easy connections to VMs when they're needed. | - |
-| Limit network access. | Network security groups allow you to restrict network access and control the number of exposed endpoints. For more information, see [Create, change, or delete a network security group](../virtual-network/manage-network-security-group.md). | - |
-
-## Next steps
-
-Check with your application provider to learn about additional security requirements.
virtual-machines Vm Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/vm-applications.md
description: Learn more about VM application packages in an Azure Compute Galler
Previously updated : 09/18/2023 Last updated : 02/26/2024 -+