Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
advisor | Advisor Cost Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-cost-recommendations.md | + + Title: Reduce service costs using Azure Advisor +description: Use Azure Advisor to optimize the cost of your Azure deployments. + Last updated : 11/08/2023++++# Reduce service costs by using Azure Advisor ++Azure Advisor helps you optimize and reduce your overall Azure spend by identifying idle and underutilized resources. You can get cost recommendations from the **Cost** tab on the Advisor dashboard. ++1. Sign in to the [**Azure portal**](https://portal.azure.com). ++1. Search for and select [**Advisor**](https://aka.ms/azureadvisordashboard) from any page. ++1. On the **Advisor** dashboard, select the **Cost** tab. ++## Optimize virtual machine (VM) or virtual machine scale set (VMSS) spend by resizing or shutting down underutilized instances ++Although certain application scenarios can result in low utilization by design, you can often save money by managing the size and number of your virtual machines or virtual machine scale sets. ++Advisor uses machine-learning algorithms to identify low utilization and to identify the ideal recommendation to ensure optimal usage of virtual machines and virtual machine scale sets. The recommended actions are shut down or resize, specific to the resource being evaluated. ++### Shutdown recommendations ++Advisor identifies resources that weren't used at all over the last seven days and makes a recommendation to shut them down. ++* Recommendation criteria include **CPU** and **Outbound Network utilization** metrics. **Memory** isn't considered since we found that **CPU** and **Outbound Network utilization** are sufficient. ++* The last seven days of utilization data are analyzed. You can change your lookback period in the configurations. The available lookback periods are 7, 14, 21, 30, 60, and 90 days. After you change the lookback period, it might take up to 48 hours for the recommendations to be updated. ++* Metrics are sampled every 30 seconds, aggregated to 1 min and then further aggregated to 30 mins (we take the max of average values while aggregating to 30 mins). On virtual machine scale sets, the metrics from individual virtual machines are aggregated using the average of the metrics across instances. ++* A shutdown recommendation is created if: + * P95 of the maximum value of CPU utilization summed across all cores is less than 3% + * P100 of average CPU in last 3 days (sum over all cores) <= 2% + * Outbound Network utilization is less than 2% over a seven-day period ++### Resize SKU recommendations ++Advisor recommends resizing virtual machines when it's possible to fit the current load on a more appropriate SKU, which is less expensive (based on retail rates). On virtual machine scale sets, Advisor recommends resizing when it's possible to fit the current load on a more appropriate cheaper SKU, or a lower number of instances of the same SKU. ++* Recommendation criteria include **CPU**, **Memory**, and **Outbound Network utilization**. ++* The last 7 days of utilization data are analyzed. You can change your lookback period in the configurations. The available lookback periods are 7, 14, 21, 30, 60, and 90 days. After you change the lookback period, it might take up to 48 hours for the recommendations to be updated. ++* Metrics are sampled every 30 seconds, aggregated to 1 minute, and then further aggregated to 30 minutes (taking the max of average values while aggregating to 30 minutes). 
On virtual machine scale sets, the metrics from individual virtual machines are aggregated using the average of the metrics for instance count recommendations, and aggregated using the max of the metrics for SKU change recommendations. ++* An appropriate SKU (for virtual machines) or instance count (for virtual machine scale set resources) is determined based on the following criteria: + * Performance of the workloads on the new SKU won't be impacted. + * Target for user-facing workloads: + * P95 of CPU and Outbound Network utilization at 40% or lower on the recommended SKU + * P100 of Memory utilization at 60% or lower on the recommended SKU + * Target for non-user-facing workloads: + * P95 of the CPU and Outbound Network utilization at 80% or lower on the new SKU + * P100 of Memory utilization at 80% or lower on the new SKU + * The new SKU, if applicable, has the same Accelerated Networking and Premium Storage capabilities + * The new SKU, if applicable, is supported in the current region of the virtual machine with the recommendation + * The new SKU, if applicable, is less expensive + * Instance count recommendations also take into account whether the virtual machine scale set is managed by Service Fabric or AKS. For Service Fabric-managed resources, recommendations take into account reliability and durability tiers. +* Advisor determines whether a workload is user-facing by analyzing its CPU utilization characteristics. The approach is based on findings by Microsoft Research. For more details, see [Prediction-Based Power Oversubscription in Cloud Platforms - Microsoft Research](https://www.microsoft.com/research/publication/prediction-based-power-oversubscription-in-cloud-platforms/). ++* Based on the best fit and the lowest cost with no performance impact, Advisor recommends not only smaller SKUs in the same family (for example, D3v2 to D2v2), but also SKUs in a newer version (for example, D3v2 to D2v3) or a different family (for example, D3v2 to E3v2). ++* For virtual machine scale set resources, Advisor prioritizes instance count recommendations over SKU change recommendations because instance count changes are easily actionable, resulting in faster savings. ++### Burstable recommendations ++Advisor evaluates whether workloads are eligible to run on specialized SKUs called **burstable SKUs**, which support variable workload performance requirements and are less expensive than general-purpose SKUs. Learn more about burstable SKUs in [B-series burstable - Azure Virtual Machines](../virtual-machines/sizes-b-series-burstable.md). ++A burstable SKU recommendation is made if: ++* The average **CPU utilization** is less than the burstable SKU's baseline performance +* The P95 of CPU utilization is less than two times the burstable SKU's baseline performance +* The current SKU doesn't have accelerated networking enabled, because burstable SKUs don't support accelerated networking yet +* The burstable SKU's credits are sufficient to support the average CPU utilization over the lookback period (seven days by default); you can change your lookback period in the configurations ++The resulting recommendation suggests resizing the current virtual machine or virtual machine scale set to a burstable SKU with the same number of cores. This change takes advantage of the lower cost, because a workload with low average utilization and occasional high spikes is served well by a B-series SKU.
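To make the burstable criteria above concrete, the following sketch shows how such a check could be computed from aggregated CPU samples. This is an illustration only, not Advisor's actual algorithm; the sample values and the baseline figure are hypothetical, and the thresholds simply mirror the criteria listed above.

```python
# Illustration only: a simplified eligibility check that mirrors the burstable-SKU
# criteria described above. This is not Advisor's implementation; the sample data
# and the baseline value are hypothetical.
import numpy as np

cpu_percent = np.array([2.1, 1.8, 2.5, 3.0, 1.2, 2.7, 9.5, 2.0, 1.6, 2.3])  # 30-minute aggregates (%)
burstable_baseline = 10.0       # hypothetical baseline performance (%) of the candidate B-series SKU
accelerated_networking = False  # burstable SKUs don't support accelerated networking yet

average_cpu = cpu_percent.mean()
p95_cpu = np.percentile(cpu_percent, 95)

is_burstable_candidate = (
    average_cpu < burstable_baseline       # average CPU below the SKU's baseline
    and p95_cpu < 2 * burstable_baseline   # P95 below twice the baseline
    and not accelerated_networking         # current SKU must not use accelerated networking
)
print(f"average={average_cpu:.1f}%, p95={p95_cpu:.1f}%, burstable candidate: {is_burstable_candidate}")
```

A real evaluation would also verify that the burstable SKU's CPU credits cover the average utilization over the configured lookback period.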
++Advisor shows the estimated cost savings for either recommended action: resize or shut down. For resize, Advisor provides current and target SKU/instance count information. +To be more selective about acting on underutilized virtual machines or virtual machine scale sets, you can adjust the CPU utilization rule by subscription. ++In some cases, recommendations can't be adopted or might not be applicable. Common scenarios include (there might be others): ++* The virtual machine or virtual machine scale set has been provisioned to accommodate upcoming traffic ++* The virtual machine or virtual machine scale set uses resources not considered by the resize algorithm, such as metrics other than CPU, memory, and network ++* Specific testing is being done on the current SKU, even if it's not utilized efficiently ++* The virtual machine or virtual machine scale set SKUs need to be kept homogeneous ++* The virtual machine or virtual machine scale set is being used for disaster recovery purposes ++In such cases, use the **Dismiss** or **Postpone** options associated with the recommendation. ++### Limitations ++* The savings associated with the recommendations are based on retail rates and don't take into account any temporary or long-term discounts that might apply to your account. As a result, the listed savings might be higher than actually possible. ++* The recommendations don't take into account the presence of Reserved Instance (RI) or savings plan purchases. As a result, the listed savings might be higher than actually possible. In some cases, such as cross-series recommendations, costs might increase when you follow the optimization recommendations, depending on the types of SKUs that you've purchased reserved instances for. Consider your RI and savings plan purchases before you act on the right-size recommendations. ++We're constantly working on improving these recommendations. Feel free to share feedback on the [Advisor Forum](https://aka.ms/advisorfeedback). ++## Configure VM/VMSS recommendations ++You can adjust Advisor virtual machine (VM) and Virtual Machine Scale Sets recommendations. Specifically, you can set up a filter for each subscription to show only recommendations for machines with a certain level of CPU utilization. This setting filters recommendations but doesn't change how they're generated. ++> [!NOTE] +> If you don't have the required permissions, the option is disabled in the user interface. For information on permissions, see [Permissions in Azure Advisor](permissions.md). ++To adjust Advisor VM/Virtual Machine Scale Sets right-sizing rules, follow these steps: ++1. From any Azure Advisor page, select **Configuration** in the left navigation pane. The Advisor Configuration page opens with the **Resources** tab selected by default. ++1. Select the **VM/Virtual Machine Scale Sets right sizing** tab. ++1. Select the subscriptions for which you'd like to set up a filter for average CPU utilization, and then select **Edit**. ++1. Select the desired average CPU utilization value, and then select **Apply**. It can take up to 24 hours for the new settings to be reflected in recommendations. ++ :::image type="content" source="media/advisor-get-started/advisor-configure-rules.png" alt-text="Screenshot of Azure Advisor configuration option for VM/Virtual Machine Scale Sets sizing rules."
lightbox="media/advisor-get-started/advisor-configure-rules.png"::: ++## Next steps ++To learn more about Advisor recommendations, see: ++* [Advisor cost recommendations (full list)](advisor-reference-cost-recommendations.md) +* [Introduction to Advisor](advisor-overview.md) +* [Advisor score](azure-advisor-score.md) |
advisor | Advisor Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-overview.md | Title: Introduction to Azure Advisor description: Use Azure Advisor to optimize your Azure deployments. Previously updated : 04/07/2022 Last updated : 07/08/2024 # Introduction to Azure Advisor You can access Advisor through the [Azure portal](https://aka.ms/azureadvisordas The Advisor dashboard displays personalized recommendations for all your subscriptions. The recommendations are divided into five categories: -* **Reliability**: To ensure and improve the continuity of your business-critical applications. For more information, see [Advisor Reliability recommendations](advisor-high-availability-recommendations.md). +* **Reliability**: To ensure and improve the continuity of your business-critical applications. For more information, see [Advisor Reliability recommendations](advisor-reference-reliability-recommendations.md). * **Security**: To detect threats and vulnerabilities that might lead to security breaches. For more information, see [Advisor Security recommendations](advisor-security-recommendations.md).-* **Performance**: To improve the speed of your applications. For more information, see [Advisor Performance recommendations](advisor-performance-recommendations.md). -* **Cost**: To optimize and reduce your overall Azure spending. For more information, see [Advisor Cost recommendations](advisor-cost-recommendations.md). -* **Operational Excellence**: To help you achieve process and workflow efficiency, resource manageability and deployment best practices. For more information, see [Advisor Operational Excellence recommendations](advisor-operational-excellence-recommendations.md). +* **Performance**: To improve the speed of your applications. For more information, see [Advisor Performance recommendations](advisor-reference-performance-recommendations.md). +* **Cost**: To optimize and reduce your overall Azure spending. For more information, see [Advisor Cost recommendations](advisor-reference-cost-recommendations.md). +* **Operational Excellence**: To help you achieve process and workflow efficiency, resource manageability and deployment best practices. For more information, see [Advisor Operational Excellence recommendations](advisor-reference-operational-excellence-recommendations.md). You can apply filters to display recommendations for specific subscriptions and resource types. To learn more about Advisor recommendations, see: * [Get started with Advisor](advisor-get-started.md) * [Advisor score](azure-advisor-score.md)-* [Advisor Reliability recommendations](advisor-high-availability-recommendations.md) +* [Advisor Reliability recommendations](advisor-reference-reliability-recommendations.md) * [Advisor Security recommendations](advisor-security-recommendations.md)-* [Advisor Performance recommendations](advisor-performance-recommendations.md) -* [Advisor Cost recommendations](advisor-cost-recommendations.md) -* [Advisor operational excellence recommendations](advisor-operational-excellence-recommendations.md) +* [Advisor Performance recommendations](advisor-reference-performance-recommendations.md) +* [Advisor Cost recommendations](advisor-reference-cost-recommendations.md) +* [Advisor Operational Excellence recommendations](advisor-reference-operational-excellence-recommendations.md) |
ai-services | Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/models.md | You can also use the OpenAI text to speech voices via Azure AI Speech. To learn [!INCLUDE [Standard Models](../includes/model-matrix/standard-models.md)] -This table doesn't include fine-tuning regional availability, consult the dedicated [fine-tuning section](#fine-tuning-models) for this information. +This table doesn't include [global standard](../how-to/deployment-types.md) model deployment regional availability for GPT-4o, or fine-tuning regional availability information. Consult the dedicated [global standard deployment section](#global-standard-model-availability) and the [fine-tuning section](#fine-tuning-models) for this information. -### Standard deployment model quota +### Standard and global standard deployment model quota [!INCLUDE [Quota](../includes/model-matrix/quota.md)] For more information on Provisioned deployments, see our [Provisioned guidance]( **Supported regions:** +- australiaeast +- brazilsouth +- canadaeast +- eastus +- eastus2 +- francecentral +- germanywestcentral +- japaneast +- koreacentral +- northcentralus +- norwayeast +- polandcentral +- southafricanorth +- southcentralus +- southindia +- swedencentral +- switzerlandnorth +- uksouth +- westeurope +- westus +- westus3 ### GPT-4 and GPT-4 Turbo model availability For more information on Provisioned deployments, see our [Provisioned guidance]( [!INCLUDE [GPT-4](../includes/model-matrix/standard-gpt-4.md)] ++ #### Select customer access In addition to the regions above which are available to all Azure OpenAI customers, some select pre-existing customers have been granted access to versions of GPT-4 in additional regions: |
ai-services | Use Your Image Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/use-your-image-data.md | Title: 'Use your image data with Azure OpenAI Service in Azure OpenAI studio'- + Title: 'Use your image data with Azure OpenAI Service in Azure OpenAI Studio' + description: Use this article to learn about using your image data for image generation in Azure OpenAI. Last updated 05/09/2024 recommendations: false -# Azure OpenAI on your data with images using GPT-4 Turbo with Vision (preview) in Azure OpenAI studio +# Use your image data for Azure OpenAI by using GPT-4 Turbo with Vision (preview) in Azure OpenAI Studio -Use this article to learn how to provide your own image data for GPT-4 Turbo with Vision, Azure OpenAI’s vision model. GPT-4 Turbo with Vision on your data allows the model to generate more customized and targeted answers using Retrieval Augmented Generation based on your own images and image metadata. +Use this article to learn how to provide your own image data for GPT-4 Turbo with Vision, the vision model in Azure OpenAI Service. GPT-4 Turbo with Vision on your data allows the model to generate more customized and targeted answers by using Retrieval Augmented Generation (RAG), based on your own images and image metadata. > [!IMPORTANT]-> Once the GPT4-Turbo with vision preview model is deprecated, you will no longer be able to use Azure OpenAI On your image data. To implement a Retrieval Augmented Generation (RAG) solution with image data, see the following sample on [github](https://github.com/Azure-Samples/azure-search-openai-demo/). +> After the GPT4-Turbo with Vision preview model is deprecated, you'll no longer be able to use Azure OpenAI on your image data. To implement a RAG solution with image data, see the [sample on GitHub](https://github.com/Azure-Samples/azure-search-openai-demo/). -## Prerequisites +## Prerequisites -- An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>.-- Access granted to Azure OpenAI in the desired Azure subscription.+* An Azure subscription. [Create one for free](https://azure.microsoft.com/free/cognitive-services). +* Access granted to Azure OpenAI in the desired Azure subscription. - Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue. -- An Azure OpenAI resource with the GPT-4 Turbo with Vision model deployed. For more information about model deployment, see the [resource deployment guide](../how-to/create-resource.md).-- Be sure that you're assigned at least the [Cognitive Services Contributor role](../how-to/role-based-access-control.md#cognitive-services-contributor) for the Azure OpenAI resource. + Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by [completing the form](https://aka.ms/oai/access). Open an issue on this repo to contact us if you have a problem. +* An Azure OpenAI resource with the GPT-4 Turbo with Vision model deployed. For more information about model deployment, see the [resource deployment guide](../how-to/create-resource.md). +* At least the [Cognitive Services Contributor role](../how-to/role-based-access-control.md#cognitive-services-contributor) assigned to you for the Azure OpenAI resource. 
## Add your data source -Navigate to [Azure OpenAI Studio](https://oai.azure.com/) and sign-in with credentials that have access to your Azure OpenAI resource. During or after the sign-in workflow, select the appropriate directory, Azure subscription, and Azure OpenAI resource. +Go to [Azure OpenAI Studio](https://oai.azure.com/) and sign in with credentials that have access to your Azure OpenAI resource. During or after the sign-in workflow, select the appropriate directory, Azure subscription, and Azure OpenAI resource. On the **Assistant setup** tile, select **Add your data (preview)** > **+ Add a data source**. -In the pane that appears after you select **Add a data source**, you'll see multiple options to select a data source. +In the pane that appears after you select **Add a data source**, you have three options for selecting a data source: +* [Azure AI Search](#add-your-data-by-using-azure-ai-search) +* [Azure Blob Storage](#add-your-data-by-using-azure-blob-storage) +* [Your own image files and image metadata](#add-your-data-by-uploading-files) -You have three different options to add your data for GPT-4 Turbo with Vision’s data source: -* Using [your own image files and image metadata](#add-your-data-by-uploading-files) -* Using [Azure AI Search](#add-your-data-using-azure-ai-search) -* Using [Azure Blob Storage](#add-your-data-using-azure-blob-storage) +All three options use an Azure AI Search index to do an image-to-image search and retrieve the top search results for your input prompt image. For the **Azure Blob Storage** and **Upload files** options, Azure OpenAI generates an image search index for you. For **Azure AI Search**, you need to have an image search index. The following sections contain details on how to create the search index. -All three options use Azure AI Search index to do image-to-image search and retrieve the top search results for your input prompt image. For Azure Blob Storage and Upload files options, Azure OpenAI will generate an image search index for you. For Azure AI Search, you need to have an image search index. The following sections contain details on how to create the search index. +### Turn on CORS -When using these options for the first time, you might see this red notice asking you to turn on Cross-origin resource sharing (CORS). This is a notice asking you to enable CORS, so that Azure OpenAI can access your blob storage account. To fix the warning, select **Turn on CORS**. +When you're adding a data source for the first time, you might see a red notice that asks you to turn on cross-origin resource sharing (CORS). To stop the warning, select **Turn on CORS** so that Azure OpenAI can access the data source. ## Add your data by uploading files -You can manually upload your image files and enter metadata of them manually, using Azure OpenAI. This is especially useful if you are experimenting with a small set of images and would like to build your data source. +You can manually upload your image files and enter metadata for them manually by using Azure OpenAI. This capability is especially useful if you're experimenting with a small set of images and want to build your data source. -1. Navigate to the **Select a data source** button in Azure OpenAI as [described above](#add-your-data-source). Select **Upload files**. +1. Go to and select the **Add a data source** button in Azure OpenAI as [described earlier](#add-your-data-source). Then select **Upload files**. -1. Select your subscription. 
Select an Azure Blob storage to which your uploaded image files will be stored to. Select an Azure AI Search resource in which your new image search index will be created. Enter the image search index name of your choice. +1. Select your subscription. Select a Blob Storage account where you want to store your uploaded image files. Select an Azure AI Search resource that will contain your newly created image search index. Enter the image search index name of your choice. - Once you have filled out all the fields, check the two boxes at the bottom acknowledging the incurring usage, and select **Next**. + After you fill in all the values, select the two checkboxes at the bottom to acknowledge incurring usage, and then select **Next**. - :::image type="content" source="../media/use-your-image-data/completed-data-source-file-upload.png" alt-text="A screenshot showing the completed fields for Azure Blob storage." lightbox="../media/use-your-image-data/completed-data-source-file-upload.png"::: + :::image type="content" source="../media/use-your-image-data/completed-data-source-file-upload.png" alt-text="Screenshot that shows the completed boxes for selecting an Azure Blob Storage subscription." lightbox="../media/use-your-image-data/completed-data-source-file-upload.png"::: The following file types are supported for your image files: * .jpg You can manually upload your image files and enter metadata of them manually, us * .bmp * .tiff -1. Select **Browse for a file** to select image files you would like to use from your local directory. +1. Select **Browse for a file** to select image files that you want to use from your local directory. -1. Once you select your image files, you'll see the image files selected in the right table. Select **Upload files**. Once you have uploaded the files, you'll see the status for each is **Uploaded**. Select **Next**. +1. After you select your image files, they appear in the table. Select **Upload files**. After you upload the files, confirm that the status for each is **Uploaded**. Then select **Next**. - :::image type="content" source="../media/use-your-image-data/uploaded-files.png" alt-text="A screenshot showing uploaded files." lightbox="../media/use-your-image-data/uploaded-files.png"::: + :::image type="content" source="../media/use-your-image-data/uploaded-files.png" alt-text="Screenshot that shows uploaded files." lightbox="../media/use-your-image-data/uploaded-files.png"::: -1. For each image file, enter the metadata in the provided description fields. Once you have descriptions for each image, select **Next**. +1. For each image file, enter the metadata in the provided description boxes. Then select **Next**. - :::image type="content" source="../media/use-your-image-data/add-metadata.png" alt-text="A screenshot showing the metadata entry field." lightbox="../media/use-your-image-data/add-metadata.png"::: + :::image type="content" source="../media/use-your-image-data/add-metadata.png" alt-text="Screenshot that shows boxes for metadata entry." lightbox="../media/use-your-image-data/add-metadata.png"::: -1. Review that all the information is correct. Select **Save and close**. +1. Review all the information to make sure that it's correct. Then select **Save and close**. -## Add your data using Azure AI Search +## Add your data by using Azure AI Search -If you have an existing [Azure AI search](/azure/search/search-what-is-azure-search) index, you can use it as a data source. 
If you don't already have a search index created for your images, you can create one using the [AI Search vector search repository on GitHub](https://github.com/Azure/cognitive-search-vector-pr), which provides you with scripts to create an index with your image files. This option is also great if you would like to create your data source using your own files like the option above, and then come back to the playground experience to select that data source you already have created but have not added yet. +If you have an existing [Azure AI Search](/azure/search/search-what-is-azure-search) index, you can use it as a data source. If you don't already have a search index created for your images, you can create one by using the [AI Search vector search repository on GitHub](https://github.com/Azure/cognitive-search-vector-pr). This repo provides you with scripts to create an index with your image files. -1. Navigate to the **Select a data source** button in Azure OpenAI as [described above](#add-your-data-source). Select **Azure AI Search**. +This option is also useful if you want to create your data source by using your own files like the previous option, and then come back to the playground experience to select the data source that you created but haven't added yet. - > [!TIP] - > You can select an image search index that you have created with the Azure Blob Storage or Upload files options. - -1. Select your subscription, and the Azure AI Search service you used to create the image search index. +1. Go to and select the **Add a data source** button in Azure OpenAI as [described earlier](#add-your-data-source). Then select **Azure AI Search**. -1. Select your Azure AI Search index you have created with your images. + > [!TIP] + > You can select an image search index that you created by using the **Azure Blob Storage** or **Upload files** option. -1. After you have filled in all fields, select the two checkboxes at the bottom asking you to acknowledge the charges incurred from using GPT-4 Turbo with Vision vector embeddings and Azure AI Search. Select **Next**. If [CORS](#turn-on-cors) isn't already turned on for the AI Search resource, you will see a warning. To fix the warning, select **Turn on CORS**. +1. Select your subscription and the Azure AI Search service that you used to create the image search index. +1. Select the Azure AI Search index that you created with your images. - :::image type="content" source="../media/use-your-image-data/completed-data-source-cognitive-search.png" alt-text="A screenshot showing the completed fields for using an Azure AI Search index." lightbox="../media/use-your-image-data/completed-data-source-cognitive-search.png"::: +1. After you fill in all the values, select the two checkboxes at the bottom to acknowledge the charges incurred from using GPT-4 Turbo with Vision vector embeddings and Azure AI Search. Then select **Next**. -1. Review the details, then select **Save and close**. + :::image type="content" source="../media/use-your-image-data/completed-data-source-cognitive-search.png" alt-text="Screenshot that shows the completed boxes for using an Azure AI Search index." lightbox="../media/use-your-image-data/completed-data-source-cognitive-search.png"::: -## Add your data using Azure Blob Storage +1. Review the details, and then select **Save and close**. -If you have an existing [Azure Blob Storage](/azure/storage/blobs/storage-blobs-introduction) container, you can use it to create an image search index. 
If you want to create a new blob storage, see the [Azure Blob storage quickstart](/azure/storage/blobs/storage-quickstart-blobs-portal) documentation. +## Add your data by using Azure Blob Storage -Your blob storage should contain image files and a JSON file with the image file paths and metadata. This option is especially useful if you have a large number of image files and don't want to manually upload each one. +If you have an existing [Azure Blob Storage](/azure/storage/blobs/storage-blobs-introduction) container, you can use it to create an image search index. If you want to create a new container, see the [Blob Storage quickstart](/azure/storage/blobs/storage-quickstart-blobs-portal) documentation. -If you don't already have a blob storage populated with these files, and would like to upload files one by one, you can upload your files using Azure OpenAI studio instead. +The option of using a Blob Storage container is especially useful if you have a large number of image files and you don't want to manually upload each one. -Before you start adding your Azure Blob Storage container as your data source, make sure your blob storage contains all the images that you would like to ingest, and a JSON file that contains the image file paths and metadata. +If you don't already have a Blob Storage container populated with these files, and you want to upload your files one by one, you can upload the files by using Azure OpenAI Studio instead. +Before you start adding your Blob Storage container as your data source, make sure that it contains all the images that you want to ingest. Also make sure that it contains a JSON file that includes the image file paths and metadata. > [!IMPORTANT] > Your metadata JSON file must:-> * Have a file name that starts with the word “metadata”, all in lowercase without a space. +> +> * Have a file name that starts with the word *metadata*, all in lowercase without a space. > * Have a maximum of 10,000 image files. If you have more than this number of files in your container, you can have multiple JSON files each with up to this maximum. ```json Before you start adding your Azure Blob Storage container as your data source, m ] ``` -After you have a blob storage populated with image files and at least one metadata JSON file, you are ready to add the blob storage as a data source. +After you have a Blob Storage container that's populated with image files and at least one metadata JSON file, you're ready to add the container as a data source: -1. Navigate to the **Select a data source** button in Azure OpenAI as [described above](#add-your-data-source). Select **Azure Blob Storage**. +1. Go to and select the **Add a data source** button in Azure OpenAI as [described earlier](#add-your-data-source). Then select **Azure Blob Storage**. -1. Select your subscription, Azure Blob storage, and storage container. You'll also need to select an Azure AI Search resource, as a new image search index will be created in this resource group. If you don't have an Azure AI Search resource, you can create a new one using the link below the dropdown. If [CORS](#turn-on-cors) isn't already turned on for the Azure Blob storage resource, you will see a warning. To fix the warning, select **Turn on CORS**. +1. Select your subscription, Azure Blob Storage, and a storage container. You also need to select an Azure AI Search resource, because a new image search index will be created in this resource group. 
If you don't have an Azure AI Search resource, you can create a new one by using the link below the dropdown list. -1. Once you've selected an Azure AI search resource, enter a name for the search index in the **Index name** field. +1. In the **Index name** box, enter a name for the search index. > [!NOTE]- > The name of the index will be suffixed with `–v`, to indicate that this is an index with image vectors extracted from the images provided. The description filed in the metadata.json will be added as text metadata in the index. --1. After you've filled in all fields, select the two checkboxes at the bottom asking you to acknowledge the charges incurred from using GPT-4 Turbo with Vision vector embeddings and Azure AI Search. Select **Next**. -- :::image type="content" source="../media/use-your-image-data/data-source-fields-blob-storage.png" alt-text="A screenshot showing the data source selection fields for blob storage." lightbox="../media/use-your-image-data/data-source-fields-blob-storage.png"::: --1. Review the details, then select **Save and close**. + > The name of the index is suffixed with `–v`, to indicate that this index has image vectors extracted from the provided images. The description filed in *metadata.json* will be added as text metadata in the index. +1. After you fill in all values, select the two checkboxes at the bottom to acknowledge the charges incurred from using GPT-4 Turbo with Vision vector embeddings and Azure AI Search. Then select **Next**. + :::image type="content" source="../media/use-your-image-data/data-source-fields-blob-storage.png" alt-text="Screenshot that shows the data source selection boxes for Blob Storage." lightbox="../media/use-your-image-data/data-source-fields-blob-storage.png"::: -## Using your ingested data with your GPT-4 Turbo with Vision model +1. Review the details, and then select **Save and close**. -After you connect your data source using any of the three methods listed above, It will take some time for the data ingestion process to finish. You will see an icon and a **Ingestion in progress** message as the process progresses. Once the ingestion has been completed, you'll see that a data source has been created. +## Use your ingested data with your GPT-4 Turbo with Vision model +After you connect your data source by using any of the three methods listed earlier, the data ingestion process takes some time to finish. An icon and a **Ingestion in progress** message appear as the process progresses. -Once the data source has finished being ingested, you will see your data source details as well as the image search index name. Now this ingested data is ready to be used as the grounding data for your deployed GPT-4 Turbo with Vision model. Your model will use the top retrieval data from your image search index and generate a response specifically adhered to your ingested data. +After the ingestion finishes, confirm that a data source is created. The details of your data source appear, along with the name of your image search index. -## Turn on CORS +Now this ingested data is ready to be used as the grounding data for your deployed GPT-4 Turbo with Vision model. Your model will use the top retrieval data from your image search index and generate a response specifically adhered to your ingested data. -If CORS isn't already turned on for your data source, you will see the following message appear. +## Additional tips -If you see this message, select **Turn on CORS** when you connect your data source. 
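If you prefer to configure CORS on the storage account programmatically instead of selecting the **Turn on CORS** button, a sketch like the following can work with the `azure-storage-blob` package. The connection string is a placeholder, and the allowed origin shown is an assumption; confirm the exact origin that Azure OpenAI Studio requires for your environment.

```python
# Hedged sketch: set a CORS rule on a Blob Storage account programmatically, as an
# alternative to the "Turn on CORS" button. The connection string is a placeholder,
# and the allowed origin is an assumption; confirm the origin the studio requires.
from azure.storage.blob import BlobServiceClient, CorsRule

service = BlobServiceClient.from_connection_string("<your-storage-connection-string>")

cors_rule = CorsRule(
    allowed_origins=["https://oai.azure.com"],  # assumption: the Azure OpenAI Studio origin
    allowed_methods=["GET", "OPTIONS"],
    allowed_headers=["*"],
    exposed_headers=["*"],
    max_age_in_seconds=200,
)
service.set_service_properties(cors=[cors_rule])
```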
+### Add and remove data sources +Azure OpenAI currently allows only one data source to be used for each chat session. If you want to add a new data source, you must remove the existing data source first. Remove it by selecting **Remove data source** under your data source information. -## Additional Tips +When you remove a data source, a warning message appears. Removing a data source clears the chat session and resets all playground settings. -### Adding and Removing Data Sources -Azure OpenAI currently allows only one data source to be used per a chat session. If you would like to add a new data source, you must remove the existing data source first. This can be done by selecting **Remove data source** under your data source information. -When you remove a data source, you'll see a warning message. Removing a data source clears the chat session and resets all playground settings. ---> [!IMPORTANT] -> If you switch to a model deployment which is not using the GPT-4 Turbo with Vision model, you will see a warning message for removing a data source. Please note that removing a data source will clear the chat session and reset all playground settings. --## Next steps --- You can also chat on Azure OpenAI text models. See [Use your text data](./use-your-data.md) for more information.-- Or, use GPT-4 Turbo with Vision in a chat scenario by following the [quickstart](../gpt-v-quickstart.md).-- [GPT-4 Turbo with Vision frequently asked questions](../faq.yml#gpt-4-turbo-with-vision)-- [GPT-4 Turbo with Vision API reference](https://aka.ms/gpt-v-api-ref)+> [!IMPORTANT] +> If you switch to a model deployment that doesn't use the GPT-4 Turbo with Vision model, a warning message appears for removing a data source. Removing a data source clears the chat session and resets all playground settings. +## Related content +* [Azure OpenAI On Your Data](./use-your-data.md) +* [Quickstart: Use images in your AI chats](../gpt-v-quickstart.md) +* [GPT-4 Turbo with Vision frequently asked questions](../faq.yml#gpt-4-turbo-with-vision) +* [GPT-4 Turbo with Vision API reference](https://aka.ms/gpt-v-api-ref) |
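For reference, once a GPT-4 Turbo with Vision deployment exists, a basic chat call that passes an image looks roughly like the following sketch, which uses the `openai` Python package (v1.x). The endpoint, key, API version, deployment name, and image URL are placeholders, and this sketch doesn't include the On Your Data grounding that the studio configures for you.

```python
# Hedged sketch: a plain GPT-4 Turbo with Vision chat call using the openai package (v1.x).
# Endpoint, key, API version, deployment name, and image URL are placeholders; the On Your
# Data grounding configured in the studio is not included here.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    api_key="<your-api-key>",                                   # placeholder
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="<your-gpt4-vision-deployment>",  # deployment name, not model name
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what's in this image."},
                {"type": "image_url", "image_url": {"url": "https://example.com/sample.jpg"}},
            ],
        }
    ],
    max_tokens=300,
)
print(response.choices[0].message.content)
```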
ai-services | Azure Developer Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/azure-developer-cli.md | Title: 'Use the Azure Developer CLI to deploy resources for Azure OpenAI On Your Data'-+ description: Use this article to learn how to automate resource deployment for Azure OpenAI On Your Data. recommendations: false # Use the Azure Developer CLI to deploy resources for Azure OpenAI On Your Data -Use this article to learn how to automate resource deployment for Azure OpenAI On Your Data. The Azure Developer CLI (`azd`) is an open-source, command-line tool that streamlines provisioning and deploying resources to Azure using a template system. The template contains infrastructure files to provision the necessary Azure OpenAI resources and configurations and includes the completed sample app code. +Use this article to learn how to automate resource deployment for Azure OpenAI Service On Your Data. The Azure Developer CLI (`azd`) is an open-source command-line tool that streamlines provisioning and deploying resources to Azure by using a template system. The template contains infrastructure files to provision the necessary Azure OpenAI resources and configurations. It also includes the completed sample app code. ## Prerequisites -- An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>.+- An Azure subscription. [Create one for free](https://azure.microsoft.com/free/cognitive-services). - Access granted to Azure OpenAI in the desired Azure subscription. - Azure OpenAI requires registration and is currently only available to approved enterprise customers and partners. [See Limited access to Azure OpenAI Service](/legal/cognitive-services/openai/limited-access?context=/azure/ai-services/openai/context/context) for more information. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue. + Azure OpenAI requires registration and is currently available only to approved enterprise customers and partners. For more information, see [Limited access to Azure OpenAI Service](/legal/cognitive-services/openai/limited-access?context=/azure/ai-services/openai/context/context). You can apply for access to Azure OpenAI by [completing the form](https://aka.ms/oai/access). Open an issue on this repo to contact us if you have a problem. -- The Azure Developer CLI [installed](/azure/developer/azure-developer-cli/install-azd) on your machine+- The Azure Developer CLI [installed](/azure/developer/azure-developer-cli/install-azd) on your machine. ## Clone and initialize the Azure Developer CLI template ---1. For the steps ahead, clone and initialize the template. +1. For the steps ahead, clone and initialize the template: ```bash azd init --template openai-chat-your-own-data ```- -2. The `azd init` command prompts you for the following information: - * Environment name: This value is used as a prefix for all Azure resources created by Azure Developer CLI. The name must be unique across all Azure subscriptions and must be between 3 and 24 characters long. The name can contain numbers and lowercase letters only. +2. The `azd init` command prompts you to create an environment name. This value is used as a prefix for all Azure resources that Azure Developer CLI creates. The name: ++ - Must be unique across all Azure subscriptions. 
+ - Must be 3 to 24 characters. + - Can contain numbers and lowercase letters only. ## Use the template to deploy resources -1. Sign-in to Azure: +1. Sign in to Azure: ```bash azd auth login ``` -1. Provision and deploy the OpenAI resource to Azure: +1. Provision and deploy the Azure OpenAI resource to Azure: ```bash azd up ```- - `azd` prompts you for the following information: - - * Subscription: The Azure subscription that your resources are deployed to. - * Location: The Azure region where your resources are deployed. - - > [!NOTE] - > The sample `azd` template uses the `gpt-35-turbo-16k` model. A recommended region for this template is East US, since different Azure regions support different OpenAI models. You can visit the [Azure OpenAI Service Models](/azure/ai-services/openai/concepts/models) support page for more details about model support by region. - ++1. The Azure Developer CLI prompts you for the following information: ++ - `Subscription`: The Azure subscription that your resources are deployed to. + - `Location`: The Azure region where your resources are deployed. + > [!NOTE]- > The provisioning process may take several minutes to complete. Wait for the task to finish before you proceed to the next steps. - -1. Click the link `azd` outputs to navigate to the new resource group in the Azure portal. You should see the following top level resources: - - * An Azure OpenAI service with a deployed model - * An Azure Storage account you can use to upload your own data files - * An Azure AI Search service configured with the proper indexes and data sources + > The sample `azd` template uses the `gpt-35-turbo-16k` model. A recommended region for this template is East US, because different Azure regions support different OpenAI models. For more details about model support by region, go to the [Azure OpenAI Service Models](/azure/ai-services/openai/concepts/models) support page. ++ The provisioning process might take several minutes. Wait for the task to finish before you proceed to the next steps. ++1. Select the link in the `azd` outputs to go to the new resource group in the Azure portal. The following top-level resources should appear: ++ - An Azure OpenAI service with a deployed model + - An Azure Storage account that you can use to upload your own data files + - An Azure AI Search service configured with the proper indexes and data sources ## Upload data to the storage account -`azd` provisioned all of the required resources for you to chat with your own data, but you still need to upload the data files you want to make available to your AI service. +The `azd` template provisioned all of the required resources for you to chat with your own data, but you still need to upload the data files that you want to make available to your AI service: -1. Navigate to the new storage account in the Azure portal. -1. On the left navigation, select **Storage browser**. -1. Select **Blob containers** and then navigate into the **File uploads** container. -1. Click the **Upload** button at the top of the screen. +1. Go to the new storage account in the Azure portal. +1. On the left menu, select **Storage browser**. +1. Select **Blob containers**, and then go to the **File uploads** container. +1. Select the **Upload** button at the top of the pane. 1. In the flyout menu that opens, upload your data.- + > [!NOTE]-> The search indexer is set to run every 5 minutes to index the data in the storage account. 
You can either wait a few minutes for the uploaded data to be indexed, or you can manually run the indexer from the search service page. +> The search indexer is set to run every five minutes to index the data in the storage account. You can wait a few minutes for the uploaded data to be indexed, or you can manually run the indexer from the search service page. ## Connect or create an application -After running the `azd` template and uploading your data, you're ready to start using Azure OpenAI on Your Data. See the [quickstart article](../use-your-data-quickstart.md) for code samples you can use to build your applications. +After you run the `azd` template and upload your data, you're ready to start using Azure OpenAI On Your Data. For code samples that you can use to build your applications, see the [quickstart article](../use-your-data-quickstart.md). |
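As a starting point before you open the quickstart, a grounded chat call against the resources that `azd` provisions can look roughly like the following sketch. The endpoint, keys, deployment, and index names are placeholders, and the `data_sources` payload shape reflects recent API versions of Azure OpenAI On Your Data; verify it against the quickstart for the API version you use.

```python
# Hedged sketch: chat with your own data by pointing the deployment at the Azure AI Search
# index created by the azd template. All endpoint, key, deployment, and index values are
# placeholders; confirm the data_sources payload shape for your API version.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-openai-resource>.openai.azure.com",  # placeholder
    api_key="<your-openai-key>",                                       # placeholder
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="<your-chat-deployment>",  # for example, the gpt-35-turbo-16k deployment created by azd
    messages=[{"role": "user", "content": "What do my uploaded documents say about onboarding?"}],
    extra_body={
        "data_sources": [
            {
                "type": "azure_search",
                "parameters": {
                    "endpoint": "https://<your-search-service>.search.windows.net",  # placeholder
                    "index_name": "<your-index-name>",                               # placeholder
                    "authentication": {"type": "api_key", "key": "<your-search-key>"},
                },
            }
        ]
    },
)
print(response.choices[0].message.content)
```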
ai-services | On Your Data Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/on-your-data-best-practices.md | Title: Best practices for using Azure OpenAI On Your Data- -description: Learn about the best practices when using Azure OpenAI On Your data. ++description: Learn about the best practices for using Azure OpenAI On Your Data, along with how to fix common problems. Last updated 04/08/2024 recommendations: false # Troubleshooting and best practices for Azure OpenAI On Your Data -Use this article to help you guide through the common issues you might run into while developing a solution using Azure OpenAI On Your Data, a service that allows you to use the power of OpenAI models with your own data. By following the best practices and tips in this article, you can optimize your output with Azure OpenAI On Your Data and achieve the best AI quality possible. +This article can help guide you through common problems in developing a solution by using Azure OpenAI Service On Your Data, a feature that allows you to use the power of OpenAI models with your own data. By following the best practices and tips in this article, you can optimize your output with Azure OpenAI On Your Data and achieve the best AI quality possible. -This document covers: +## Azure OpenAI On Your Data: Workflow -* High level workflow of the Azure OpenAI On Your Data -* How to structure your debugging investigation -* Common issues and their respective solutions +The workflow for Azure OpenAI On Your Data has two major parts: -## Azure OpenAI On Your Data: workflow +* **Data ingestion**: This is the stage where you connect your data with Azure OpenAI On Your Data. In this stage, user documents are processed and broken down into smaller chunks and then indexed. The chunks are 1,024 tokens by default, but more chunking options are available. -Azure OpenAI On Your Data's workflow can be divided into two major parts: + Also in this stage, you can choose an embedding model to use for creation of embeddings or preferred search type. Embeddings are representations of values or objects (like text, images, and audio) that are designed to be consumed by machine learning models and semantic search algorithms. + + The output of this process is an index that will later be used for retrieving documents during inferencing. -* Data ingestion -* Inferencing +* **Inferencing**: This is the stage where users chat with their data by using a studio, a deployed web app, or direct API calls. In this stage, users can set various model parameters (such as `temperature` and `top_P` ) and system parameters (such as `strictness` and `topNDocuments`). -## Data ingestion +Think of ingestion as a separate process before inferencing. After the index is created, Azure OpenAI On Your Data goes through the following steps to generate a good response to user questions: -This is the stage where you connect your data with the Azure OpenAI On Your Data service. In this stage, user documents are processed and broken down into smaller chunks (1,024 tokens by default, however there are more chunking options available) and then indexed. This is the stage where you can choose an embedding model (embeddings are representations of values or objects like text, images, and audio that are designed to be consumed by machine learning models and semantic search algorithms) to use for embeddings creation or preferred search type. 
The output of this process is an index that will later be used to retrieve documents from during inference. +1. **Intent generation**: Azure OpenAI On Your Data generates multiple search intents by using user questions and conversation history. It generates multiple search intents to address any ambiguity in users' questions, add more context by using the conversation history to retrieve holistic information in the retrieval stage, and provide any additional information to make the final response thorough and useful. +2. **Retrieval**: By using the search type provided during the ingestion, Azure OpenAI On Your Data retrieves a list of relevant document chunks that correspond to each of the search intents. +3. **Filtration**: Azure OpenAI On Your Data uses the strictness setting to filter out the retrieved documents that are considered irrelevant according to the strictness threshold. The `strictness` parameter controls how aggressive the filtration is. +4. **Reranking**: Azure OpenAI On Your Data reranks the remaining document chunks retrieved for each of the search intents. The purpose of reranking is to produce a combined list of the most relevant documents retrieved for all search intents. +5. **Parameter inclusion**: The `topNDocuments` parameter from the reranked list is included in the prompt sent to the model, along with the question, the conversation history, and the role information or system message. +6. **Response generation**: The model uses the provided context to generate the final response along with citations. -## Inferencing +## How to structure debugging investigation -This is the stage where users chat with their data using a studio, deployed webapp, or direct API calls. In this stage users are able to set various model parameters (such as `temperature`, or `top_P` ) and system parameters such as `strictness`, and `topNDocuments`. +When you see an unfavorable response to a query, it might be the result of outputs from various components not working as expected. You can debug the outputs of each component by using the following steps. -## Inner workings +### Step 1: Check for retrieval problems -Ingestion should be thought of as a separate process before inference. After the index has been created, Azure OpenAI On Your Data has many steps it goes through to generate a good response to user questions. +Use the REST API to check if the correct document chunks are present in the retrieved documents. In the API response, check the citations in the `tool` message. -1. **Intent generation**: Multiple search intents are generated using user question and conversation history. We generate multiple search intents to address any ambiguity in the user's question, add more context using the conversation history to retrieve holistic information in the retrieval stage, and to provide any additional information to make the final response thorough and useful. -2. **Retrieval**: Using the search type provided during the ingestion, a list of relevant document chunks are retrieved corresponding to each of the search intents. -3. **Filtration**: The strictness setting is used to filter out the retrieved documents that are considered irrelevant per the strictness threshold. `strictness` controls how aggressive the filtration is. -4. **Re-ranking**: The remaining document chunks retrieved for each of the search intents are reranked. Reranking is done to come up with a combined list of most relevant documents retrieved for all search intents. -5. 
**TopNDocuments**: The `topNDocuments` from this reranked list is included in the prompt sent to the model, along with the question, the conversation history, and the role information/system message. -1. **Response Generation**: The model uses the provided context to generate the final response along with citations. +### Step 2: Check for generation problems -## How to structure debugging investigation +If the correct document chunks appear in the retrieved documents, you're likely encountering a problem with content generation. Consider using a more powerful model through one of these methods: -When you see an unfavorable response to a query, it could be the result of different outputs from various components not working as expected. It's advisable to debug the outputs of each component using the following steps. - -### Step 1: Check for Retrieval issues +* Upgrade the model. For example, if you're using `gpt-35-turbo`, consider using `gpt-4`. +* Switch the model version. For example, if you're using `gpt-35-turbo-1106`, consider using `gpt-35-turbo-16k` (0613). -Check if the correct document chunks are present in the retrieved documents. This is straight forward to check using the REST API. In the API response, check the citations in the `tool` message. +You can also tune the finer aspects of the response by changing the role information or system message. -### Step 2: Check for Generation issues +### Step 3: Check the rest of the funnel -If you're seeing the correct document chunks in the retrieved documents, then you're likely encountering a **generation issue**. Consider using a more powerful model. If you aren't, go to [step 3](#step-3-check-the-rest-of-the-funnel). +If the correct document chunks don't appear in the retrieved documents, you need to dig farther down the funnel: -1. **Upgrade the model**: For example, if you're using gpt-35-turbo, consider using gpt-4. -1. **Switch the model version**: If you're using gpt-35-turbo-1106, consider using gpt-35-turbo-16k (0613). - 1. You can also tune the finer aspects of the response by changing the role information / system message. +* It's possible that a correct document chunk was retrieved but was filtered out based on strictness. In this case, try reducing the `strictness` parameter. -### Step 3: Check the rest of the funnel +* It's possible that a correct document chunk wasn't part of the `topNDocuments` parameter. In this case, increase the parameter. ++* It's possible that your index fields are incorrectly mapped, so retrieval might not work well. This mapping is particularly relevant if you're using a pre-existing data source. (That is, you didn't create the index by using the studio or offline scripts available on [GitHub](https://github.com/microsoft/sample-app-aoai-chatGPT/tree/main/scripts).) For more information on mapping index fields, see the [how-to article](../concepts/use-your-data.md?tabs=ai-search#index-field-mapping). ++* It's possible that the intent generation step isn't working well. In the API response, check the `intents` fields in the `tool` message. ++ Some models don't work well for intent generation. For example, if you're using the `GPT-35-turbo-1106` model version, consider using a later model, such as `gpt-35-turbo` (0125) or `GPT-4-1106-preview`. ++* Do you have semistructured data in your documents, such as numerous tables? There might be an ingestion problem. Your data might need special handling during ingestion. 
++ * If the file format is PDF, we offer optimized ingestion for tables by using the offline scripts available on [GitHub](https://github.com/microsoft/sample-app-aoai-chatGPT/tree/main/scripts). To use the scripts, you need to have a [Document Intelligence](../../document-intelligence/overview.md) resource and use the [layout model](../../document-intelligence/concept-layout.md). + + * You can adjust your [chunk size](../concepts/use-your-data.md#chunk-size-preview) to make sure your largest table fits within it. ++* Are you converting a semistructured data type, such as JSON or XML, to a PDF document? This conversion might cause an ingestion problem because structured information needs a chunking strategy that's different from purely text content. -If you aren't seeing the correct document chunks in step 1, then you need to dig further down the funnel. +* If none of the preceding items apply, you might be encountering a retrieval problem. Consider using a more powerful `query_type` value. Based on our benchmarking, `semantic` and `vectorSemanticHybrid` are preferred. -1. It's possible that the correct document chunk was retrieved but was filtered out based on `strictness`. In this case, try reducing the `strictness` parameter. +## Common problems -1. It's possible that the correct document chunk wasn't part of the `topNDocuments`. In this case, increase the `topNDocuments` parameter. +The following sections list possible solutions to problems that you might encounter when you're developing a solution by using Azure OpenAI Service On Your Data. -1. It's possible that your index fields are not correctly mapped, meaning retrieval might not work well. This is particularly relevant if you're using a pre-existing data source (you did not create the index using the Studio or offline scripts available on [GitHub](https://github.com/microsoft/sample-app-aoai-chatGPT/tree/main/scripts). For more information on mapping index fields, see the [how-to article](../concepts/use-your-data.md?tabs=ai-search#index-field-mapping). +### The information is correct, but the model responds with "The requested information isn't present in the retrieved documents. Please try a different query or topic." -1. It's possible that the intent generation step isn't working well. In the API response, check the `intents` fields in the `tool` message. +See [step 3](#step-3-check-the-rest-of-the-funnel) in the preceding debugging process. - - Some models are known to not work well for intent generation. For example, if you're using the GPT-35-turbo-1106 model version, consider using a later model, such as gpt-35-turbo (0125) or GPT-4-1106-preview. +### A response is from your data, but it isn't relevant or correct in the context of the question -1. Do you have semi-structured data in your documents, such as numerous tables? There might be an **ingestion issue**. Your data might need special handling during ingestion. - - If the file format is PDF, we offer optimized ingestion for tables using the offline scripts available on [GitHub](https://github.com/microsoft/sample-app-aoai-chatGPT/tree/main/scripts). to use the scripts, you need to have a [Document Intelligence](../../document-intelligence/overview.md) resource and use the `Layout` [model](../../document-intelligence/concept-layout.md). You can also: - - Adjust your chunk size to make sure your largest table fits within the specified [chunk size](../concepts/use-your-data.md#chunk-size-preview). 
+See the preceding debugging process, starting at [step 1](#step-1-check-for-retrieval-problems). -1. Are you converting a semi-structured data type such as json/xml to a PDF document? This might cause an **ingestion issue** because structured information needs a chunking strategy that is different from purely text content. +### The model isn't following the role information or system message -1. If none of the above apply, you might be encountering a **retrieval issue**. Consider using a more powerful `query_type`. Based on our benchmarking, `semantic` and `vectorSemanticHybrid` are preferred. +* Make sure that instructions in the role information are consistent with the [Responsible AI guidelines](/legal/cognitive-services/openai/overview?context=%2Fazure%2Fai-services%2Fopenai%2Fcontext%2Fcontext). The model likely won't follow role information if it contradicts those guidelines. -## Frequently encountered issues +* Ensure that your role information follows the [established limits](../concepts/use-your-data.md#token-usage-estimation-for-azure-openai-on-your-data) for it. Each model has an implicit token limit for the role information. Beyond that limit, the information is truncated. -**Issue 1**: _The model responds with "The requested information isn't present in the retrieved documents. Please try a different query or topic" even though that's not the case._ +* Use the prompt engineering technique of repeating an important instruction at the end of the prompt. Putting a double asterisk (`**`) on both sides of the important information can also help. -See [Step 3](#step-3-check-the-rest-of-the-funnel) in the above debugging process. +* Upgrade to a newer GPT-4 model, because it's likely to follow your instructions better than GPT-3.5. -**Issue 2**: _The response is from my data, but it isn't relevant/correct in the context of the question._ +### Responses have inconsistencies -See the debugging process starting at [Step 1](#step-1-check-for-retrieval-issues). +* Ensure that you're using a low `temperature` value. We recommend setting it to `0`. -**Issue 3**: _The role information / system message isn't being followed by the model._ - -- Instructions in the role information might contradict with our [Responsible AI guidelines](/legal/cognitive-services/openai/overview?context=%2Fazure%2Fai-services%2Fopenai%2Fcontext%2Fcontext), in which case it won't likely be followed.+* By using the REST API, check if the generated search intents are the same both times. If the intents are different, try a more powerful model such as GPT-4 to see if the chosen model affects the problem. If the intents are the same or similar, try reducing `strictness` or increasing `topNDocuments`. -- For each model, there is an implicit token limit for the role information, beyond which it is truncated. Ensure your role information follows the established [limits](../concepts/use-your-data.md#token-usage-estimation-for-azure-openai-on-your-data).+> [!NOTE] +> Although the question is the same, the conversation history is added to the context and affects how the model responds to the same question over a long session. -- A prompt engineering technique you can use is to repeat an important instruction at the end of the prompt. 
Surrounding the important instruction with `**` on both side of it can also help.-- Upgrade to a newer GPT-4 model as it's likely to follow your instructions better than GPT-35.- -**Issue 4**: _There are inconsistencies in responses._ +### Intents are empty or wrong -- Ensure you're using a low `temperature`. We recommend setting it to `0`. +* See [Step 3](#step-3-check-the-rest-of-the-funnel) in the preceding debugging process. -- Although the question is the same, the conversation history gets added to the context and affects how the model responds to same question over a long session.+* If intents are irrelevant, the problem might be that the intent generation step lacks context. Intent generation considers only the user question and conversation history. It doesn't consider the role information or the document chunks. You might consider adding a prefix to each user question with a short context string to help the intent generation step. -- Using the REST API, check if the search intents generated are the same both times or not. If they are very different, try a more powerful model such as GPT-4 to see if the problem is affected by the chosen model. +### You set inScope=true or selected the checkbox for restricting responses to data, but the model still responds to out-of-domain questions -- If the intents are same or similar, try reducing `strictness` or increasing `topNDocuments`.+* Consider increasing `strictness`. -**Issue 5**: _Intents are empty or wrong._ +* Add the following instruction in your role information or system message: -- Refer to [Step 3](#step-3-check-the-rest-of-the-funnel) in the above debugging process. + `You are also allowed to respond to questions based on the retrieved documents.` +* Set the `inScope` parameter to `true`. The parameter isn't a hard switch, but setting it to `true` encourages the model to stay restricted. -- If intents are irrelevant, the issue might be that the intent generation step lacks context. It only considers the user question and conversation history. It does not look at the role information or the document chunks. You might want to consider adding a prefix to each user question with a short context string to help the intent generation step.+### A response is correct but is occasionally missing document references or citations -**Issue 6**: _I have set inScope=true or checked "Restrict responses to my data" but it still responds to Out-Of-Domain questions._ +* Consider upgrading to a GPT-4 model if you're already not using it. GPT-4 is generally more consistent with citation generation. -- Consider increasing `strictness`.-- Add the following instruction in your role information / system message: - - _"You are also allowed to respond to questions based on the retrieved documents."_ -- The `inscope` parameter isn't a hard switch, but setting it to `true` encourages the model to stay restricted.+* Try to emphasize citation generation in the response by adding `You must generate citation based on the retrieved documents in the response` in the role information. -**Issue 7**: _The response is correct but occasionally missing document references/citations._ -- Consider upgrading to a GPT-4 model if you're already not using it. GPT-4 is generally more consistent with citation generation.-- You can try to emphasize citation generation in the response by adding `**You must generate citation based on the retrieved documents in the response**` in the role information. 
-- Or you can add a prefix in the user query `**You must generate citation to the retrieved documents in the response to the user question \n User Question: {actual user question}**`+* Try adding a prefix in the user query `You must generate citation to the retrieved documents in the response to the user question \n User Question: {actual user question}`. |
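To make the REST API checks in the preceding entry concrete, here is a minimal sketch of a request that surfaces the `tool` message containing citations and intents. It assumes one of the preview On Your Data API shapes; the endpoint path, API version, and payload field names may differ for the API version you use, and every value shown is a placeholder.

```bash
# Hypothetical sketch: query Azure OpenAI On Your Data and inspect the retrieval output.
# The endpoint path, API version, and field names are assumptions based on a preview
# API shape and may differ for your API version; replace every placeholder value.
curl -X POST "https://<aoai-resource>.openai.azure.com/openai/deployments/<deployment-name>/extensions/chat/completions?api-version=2023-12-01-preview" \
  -H "Content-Type: application/json" \
  -H "api-key: <aoai-api-key>" \
  -d '{
    "temperature": 0,
    "dataSources": [
      {
        "type": "AzureCognitiveSearch",
        "parameters": {
          "endpoint": "https://<search-resource>.search.windows.net",
          "key": "<search-admin-key>",
          "indexName": "<index-name>",
          "queryType": "simple",
          "inScope": true,
          "strictness": 3,
          "topNDocuments": 5,
          "roleInformation": "You are an assistant that answers only from the retrieved documents."
        }
      }
    ],
    "messages": [
      { "role": "user", "content": "<your test question>" }
    ]
  }'
# In the response, find the message with "role": "tool". Its content holds the
# retrieved citations and the generated intents, which map to steps 1 and 3 of
# the debugging process above. The entry recommends "semantic" or
# "vectorSemanticHybrid" for queryType; those values need extra index configuration.
```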
ai-services | Use Web App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/use-web-app.md | Title: 'Using the Azure OpenAI web app'- + Title: 'Use the Azure OpenAI web app' + description: Use this article to learn about using the available web app to chat with Azure OpenAI models. recommendations: false # Use the Azure OpenAI web app -Along with Azure OpenAI Studio, APIs and SDKs, you can also use the available standalone web app to interact with Azure OpenAI models using a graphical user interface, which you can deploy using either Azure OpenAI studio or a [manual deployment](https://github.com/microsoft/sample-app-aoai-chatGPT). +Along with Azure OpenAI Studio, APIs, and SDKs, you can use the available standalone web app to interact with Azure OpenAI Service models by using a graphical user interface. You can deploy the app by using either Azure OpenAI Studio or a [manual deployment](https://github.com/microsoft/sample-app-aoai-chatGPT). -![A screenshot of the web app interface.](../media/use-your-data/web-app.png) +![Screenshot that shows the web app interface.](../media/use-your-data/web-app.png) ## Important considerations -- Publishing creates an Azure App Service in your subscription. It might incur costs depending on the [pricing plan](https://azure.microsoft.com/pricing/details/app-service/windows/) you select. When you're done with your app, you can delete it from the Azure portal.-- gpt-4 vision-preview models are not supported.-- By default, the app will be deployed with the Microsoft identity provider already configured, restricting access to the app to members of your Azure tenant. To add or modify authentication:- 1. Go to the [Azure portal](https://portal.azure.com/#home) and search for the app name you specified during publishing. Select the web app, and go to the **Authentication** tab on the left navigation menu. Then select **Add an identity provider**. - - :::image type="content" source="../media/quickstarts/web-app-authentication.png" alt-text="Screenshot of the authentication page in the Azure portal." lightbox="../media/quickstarts/web-app-authentication.png"::: +- Publishing creates an Azure App Service instance in your subscription. It might incur costs depending on the [pricing plan](https://azure.microsoft.com/pricing/details/app-service/windows/) that you select. When you're done with your app, you can delete it from the Azure portal. +- GPT-4 Turbo with Vision models are not supported. +- By default, the app is deployed with the Microsoft identity provider already configured. The identity provider restricts access to the app to members of your Azure tenant. To add or modify authentication: + 1. Go to the [Azure portal](https://portal.azure.com/#home) and search for the app name that you specified during publishing. Select the web app, and then select **Authentication** on the left menu. Then select **Add identity provider**. - 1. Select Microsoft as the identity provider. The default settings on this page will restrict the app to your tenant only, so you don't need to change anything else here. Then select **Add** - - Now users will be asked to sign in with their Microsoft Entra ID account to be able to access your app. You can follow a similar process to add another identity provider if you prefer. The app doesn't use the user's sign-in information in any other way other than verifying they are a member of your tenant. 
+ :::image type="content" source="../media/quickstarts/web-app-authentication.png" alt-text="Screenshot of the authentication pane in the Azure portal." lightbox="../media/quickstarts/web-app-authentication.png"::: ++ 1. Select Microsoft as the identity provider. The default settings on this page restrict the app to your tenant only, so you don't need to change anything else here. Select **Add**. ++ Now users will be asked to sign in with their Microsoft Entra account to access your app. You can follow a similar process to add another identity provider if you prefer. The app doesn't use the user's sign-in information in any way other than verifying that the user is a member of your tenant. ## Web app customization -You can customize the app's frontend and backend logic. The app provides several [environment variables](https://github.com/microsoft/sample-app-aoai-chatGPT#common-customization-scenarios-eg-updating-the-default-chat-logo-and-headers) for common customization scenarios such as changing the icon in the app. See the source code for the web app, and more information on [GitHub](https://github.com/microsoft/sample-app-aoai-chatGPT). +You can customize the app's front-end and back-end logic. The app provides several [environment variables](https://github.com/microsoft/sample-app-aoai-chatGPT#common-customization-scenarios-eg-updating-the-default-chat-logo-and-headers) for common customization scenarios such as changing the icon in the app. -When customizing the app, we recommend: +When you're customizing the app, we recommend: -- Resetting the chat session (clear chat) if the user changes any settings. Notify the user that their chat history will be lost.+- Resetting the chat session (clear chat) if users change any settings. Notify the users that their chat history will be lost. -- Clearly communicating how each setting you implement will affect the user experience.+- Clearly communicating how each setting that you implement will affect the user experience. -- When you rotate API keys for your Azure OpenAI or Azure AI Search resource, be sure to update the app settings for each of your deployed apps to use the new keys.+- Updating the app settings for each of your deployed apps to use new API keys after you rotate keys for your Azure OpenAI or Azure AI Search resource. -Sample source code for the web app is available on [GitHub](https://github.com/microsoft/sample-app-aoai-chatGPT). Source code is provided "as is" and as a sample only. Customers are responsible for all customization and implementation of their web apps. +Sample source code for the web app is available on [GitHub](https://github.com/microsoft/sample-app-aoai-chatGPT). Source code is provided "as is" and as a sample only. Customers are responsible for all customization and implementation of their web apps. ## Updating the web app > [!NOTE]-> After February 1, 2024, the web app requires the app startup command to be set to `python3 -m gunicorn app:app`. When updating an app that was published prior to February 1, 2024, you need to manually add the startup command from the **App Service Configuration** page. --We recommend pulling changes from the `main` branch for the web app's source code frequently to ensure you have the latest bug fixes, API version, and improvements. Additionally, the web app must be synchronized every time the API version being used is [retired](../api-version-deprecation.md) +> As of February 1, 2024, the web app requires the app startup command to be set to `python3 -m gunicorn app:app`. 
When you're updating an app that was published before February 1, 2024, you need to manually add the startup command from the **App Service Configuration** page. -Consider either clicking the **watch** or **star** buttons on the web app's [GitHub](https://github.com/microsoft/sample-app-aoai-chatGPT) repo to be notified about changes and updates to the source code. +We recommend pulling changes from the `main` branch for the web app's source code frequently to ensure that you have the latest bug fixes, API version, and improvements. Additionally, the web app must be synchronized every time the API version that you're using is [retired](../api-version-deprecation.md). Consider selecting either the **Watch** or the **Star** button on the web app's [GitHub repo](https://github.com/microsoft/sample-app-aoai-chatGPT) to be notified about changes and updates to the source code. -**If you haven't customized the app:** -* You can follow the synchronization steps below +If you haven't customized the web app, you can use these steps to synchronize it: -**If you've customized or changed the app's source code:** -* You will need to update your app's source code manually and redeploy it. - * If your app is hosted on GitHub, push your code changes to your repo, and use the synchronization steps below. - * If you're redeploying the app manually (for example Azure CLI), follow the steps for your deployment strategy. +1. Go to your web app in the [Azure portal](https://portal.azure.com/). +1. On the left menu, under **Deployment**, select **Deployment Center**. +1. Select **Sync** at the top of the pane, and confirm that the app will be redeployed. + :::image type="content" source="../media/use-your-data/sync-app.png" alt-text="A screenshot of the web app synchronization button on the Azure portal." lightbox="../media/use-your-data/sync-app.png"::: -### Synchronize the web app --1. If you've customized your app, update the app's source code. -1. Navigate to your web app in the [Azure portal](https://portal.azure.com/). -1. Select **Deployment center** in the navigation menu, under **Deployment**. -1. Select **Sync** at the top of the screen, and confirm that the app will be redeployed. -- :::image type="content" source="../media/use-your-data/sync-app.png" alt-text="A screenshot of web app synchronization button on the Azure portal." lightbox="../media/use-your-data/sync-app.png"::: +If you customized or changed the app's source code, you need to update your app's source code manually and redeploy it: +- If your app is hosted on GitHub, push your code changes to your repo, and then use the preceding synchronization steps. +- If you're redeploying the app manually (for example, by using the Azure CLI), follow the steps for your deployment strategy. ## Chat history -You can enable chat history for your users of the web app. When you enable the feature, your users will have access to their individual previous queries and responses. +You can turn on chat history for your users of the web app. When you turn on the feature, users have access to their individual previous queries and responses. -To enable chat history, deploy or redeploy your model as a web app using [Azure OpenAI Studio](https://oai.azure.com/portal). +To turn on chat history, deploy or redeploy your model as a web app by using [Azure OpenAI Studio](https://oai.azure.com/portal) and select **Enable chat history in the web app**. 
> [!IMPORTANT]-> Enabling chat history will create a [Cosmos DB](/azure/cosmos-db/introduction) instance in your resource group, and incur [additional charges](https://azure.microsoft.com/pricing/details/cosmos-db/autoscale-provisioned/) for the storage used. +> Turning on chat history creates an [Azure Cosmos DB](/azure/cosmos-db/introduction) instance in your resource group, and it incurs [additional charges](https://azure.microsoft.com/pricing/details/cosmos-db/autoscale-provisioned/) for the storage that you use. -Once you've enabled chat history, your users will be able to show and hide it in the top right corner of the app. When the history is shown, they can rename, or delete conversations. As they're logged into the app, conversations will be automatically ordered from newest to oldest, and named based on the first query in the conversation. +After you turn on chat history, your users can show and hide it in the upper-right corner of the app. When users show chat history, they can rename or delete conversations. Because the users are signed in to the app, conversations are automatically ordered from newest to oldest. Conversations are named based on the first query in the conversation. ## Deleting your Cosmos DB instance -Deleting your web app does not delete your Cosmos DB instance automatically. To delete your Cosmos DB instance, along with all stored chats, you need to navigate to the associated resource in the [Azure portal](https://portal.azure.com) and delete it. If you delete the Cosmos DB resource but keep the chat history option enabled on the studio, your users will be notified of a connection error, but can continue to use the web app without access to the chat history. +Deleting your web app does not delete your Cosmos DB instance automatically. To delete your Cosmos DB instance along with all stored chats, you need to go to the associated resource in the [Azure portal](https://portal.azure.com) and delete it. If you delete the Cosmos DB resource but keep the chat history option turned on in the studio, your users are notified of a connection error but can continue to use the web app without access to the chat history. ++## Related content -## Next steps -* [Prompt engineering](../concepts/prompt-engineering.md) -* [Azure OpenAI on your data](../concepts/use-your-data.md) +- [Prompt engineering](../concepts/prompt-engineering.md) +- [Azure OpenAI On Your Data](../concepts/use-your-data.md) |
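If you manage the web app from the command line rather than the portal, the following sketch shows one way to apply the startup command and sync step described in the entry above. It is illustrative only: resource names are placeholders, and the portal steps remain the documented path.

```bash
# Illustrative only: set the startup command required for apps published before
# February 1, 2024, then pull the latest source from the connected repository.
# Resource names are placeholders.
RESOURCE_GROUP="<your-resource-group>"
APP_NAME="<your-web-app-name>"

# Set the App Service startup command.
az webapp config set \
  --resource-group "$RESOURCE_GROUP" \
  --name "$APP_NAME" \
  --startup-file "python3 -m gunicorn app:app"

# Redeploy from the connected repo (the CLI equivalent of selecting Sync in
# Deployment Center); applies only if the app is connected to a repository.
az webapp deployment source sync \
  --resource-group "$RESOURCE_GROUP" \
  --name "$APP_NAME"
```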
ai-services | Quotas Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/quotas-limits.md | The following sections provide you with a quick guide to the default quotas and | GPT-4o max images per request (# of images in the messages array/conversation history) | 10 | | GPT-4 `vision-preview` & GPT-4 `turbo-2024-04-09` default max tokens | 16 <br><br> Increase the `max_tokens` parameter value to avoid truncated responses. GPT-4o max tokens defaults to 4096. | - ## Regional quota limits [!INCLUDE [Quota](./includes/model-matrix/quota.md)] |
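Because a 16-token default truncates most answers, requests to the models noted above usually set `max_tokens` explicitly. A minimal sketch follows; the resource, deployment, and API version are placeholders.

```bash
# Illustrative only: raise max_tokens so the response is not cut off at the
# 16-token default. All names and the API version are placeholders.
curl -X POST "https://<aoai-resource>.openai.azure.com/openai/deployments/<gpt-4-deployment>/chat/completions?api-version=2024-02-01" \
  -H "Content-Type: application/json" \
  -H "api-key: <aoai-api-key>" \
  -d '{
    "messages": [ { "role": "user", "content": "Summarize the quota table in two sentences." } ],
    "max_tokens": 1000
  }'
```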
ai-services | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whats-new.md | recommendations: false This article provides a summary of the latest releases and major documentation updates for Azure OpenAI. +## July 2024 ++### Expansion of regions available for global standard deployments of gpt-4o ++ GPT-4o is now available for [global standard deployments](./how-to/deployment-types.md) in: ++- australiaeast
+- brazilsouth
+- canadaeast
+- eastus
+- eastus2
+- francecentral
+- germanywestcentral
+- japaneast
+- koreacentral
+- northcentralus
+- norwayeast
+- polandcentral
+- southafricanorth
+- southcentralus
+- southindia
+- swedencentral
+- switzerlandnorth
+- uksouth
+- westeurope
+- westus
+- westus3
++For information on global standard quota, consult the [quota and limits page](./quotas-limits.md). + ## June 2024 ### Retirement date updates |
ai-studio | Index Build Consume Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/develop/index-build-consume-sdk.md | from azure.ai.ml.entities import LocalSource input_source=LocalSource(input_data="<path-to-your-local-files>") -# Github repository +# GitHub repository from azure.ai.ml.entities import GitSource input_source=GitSource( |
aks | Azure Csi Blob Storage Provision | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-blob-storage-provision.md | The following table includes parameters you can use to define a persistent volum | | **Following parameters are only for feature: blobfuse<br> [Managed Identity and Service Principal Name authentication](https://github.com/Azure/azure-storage-fuse#environment-variables)** | | | | |volumeAttributes.AzureStorageAuthType | Specify the authentication type. | `Key`, `SAS`, `MSI`, `SPN` | No | `Key`| |volumeAttributes.AzureStorageIdentityClientID | Specify the Identity Client ID. | | No ||-|volumeAttributes.AzureStorageIdentityObjectID | Specify the Identity Object ID. | | No || |volumeAttributes.AzureStorageIdentityResourceID | Specify the Identity Resource ID. | | No || |volumeAttributes.MSIEndpoint | Specify the MSI endpoint. | | No || |volumeAttributes.AzureStorageSPNClientID | Specify the Azure Service Principal Name (SPN) Client ID. | | No || |
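As a rough illustration of how the blobfuse authentication parameters in this table are used, here is a sketch of a static persistent volume that authenticates with a managed identity. Attribute names not shown in the table (such as `storageAccount` and `containerName`) and all values are assumptions or placeholders; adapt them to your environment.

```bash
# Hypothetical sketch: static PersistentVolume using blobfuse with managed identity
# (MSI) authentication. Attribute names outside the preceding table, and all values,
# are assumptions/placeholders.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-blob-msi
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: blob.csi.azure.com
    volumeHandle: "<storage-account>_<container-name>"
    volumeAttributes:
      resourceGroup: "<resource-group>"
      storageAccount: "<storage-account>"
      containerName: "<container-name>"
      protocol: fuse
      AzureStorageAuthType: MSI
      AzureStorageIdentityClientID: "<managed-identity-client-id>"
EOF
```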
aks | Long Term Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/long-term-support.md | The Kubernetes community releases a new minor version approximately every four m AKS supports versions of Kubernetes that are within this Community support window, to push bug fixes and security updates from community releases. -While innovation delivered with this release cadence provides huge benefits to you, it challenges you to keep up to date with Kubernetes releases, which can be made more difficult based on the number of AKS clusters you have to maintain. +While innovation delivered with this release cadence provides huge benefits to you, it challenges you to keep up to date with Kubernetes releases, which can be made more difficult based on the number of AKS clusters you have to maintain. ## AKS support types -After approximately one year, the Kubernetes version exits Community support and your AKS clusters are now at risk as bug fixes and security updates become unavailable. +After approximately one year, the Kubernetes version exits Community support and your AKS clusters are now at risk as bug fixes and security updates become unavailable. AKS provides one year Community support and one year of long-term support (LTS) to back port security fixes from the community upstream in our public repository. Our upstream LTS working group contributes efforts back to the community to provide our customers with a longer support window. -LTS intends to give you an extended period of time to plan and test for upgrades over a two-year period from the General Availability of the designated Kubernetes version. +LTS intends to give you an extended period of time to plan and test for upgrades over a two-year period from the General Availability of the designated Kubernetes version. | | Community support |Long-term support | |||| az aks update --resource-group myResourceGroup --name myAKSCluster --tier [free| The AKS team currently tracks add-on versions where Kubernetes Community support exists. Once a version leaves Community support, we rely on open source projects for managed add-ons to continue that support. Due to various external factors, some add-ons and features may not support Kubernetes versions outside these upstream Community support windows. -See the following table for a list of add-ons and features that aren't supported and the reason why. +See the following table for a list of add-ons and features that aren't supported and the reason why. | Add-on / Feature | Reason it's unsupported | || See the following table for a list of add-ons and features that aren't supported | AAD Pod Identity | Deprecated in place of Workload Identity | > [!NOTE]->You can't move your cluster to long-term support if any of these add-ons or features are enabled. +>You can't move your cluster to long-term support if any of these add-ons or features are enabled. >Whilst these AKS managed add-ons aren't supported by Microsoft, you're able to install the Open Source versions of these on your cluster if you wish to use it past Community support. ## How we decide the next LTS version To carry out an in-place upgrade to the latest LTS version, you need to specify az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.32.2 ``` > [!NOTE]-> The next Long Term Support Version after 1.27 is to be determined. However Customers will get a minimum 6 months of overlap between 1.27 LTS and the next LTS version to plan upgrades. 
+> The next Long Term Support Version after 1.27 is to be determined. However, customers will get a minimum of 6 months of overlap between 1.27 LTS and the next LTS version to plan upgrades.
> Kubernetes 1.32.2 is used as an example version in this article. Check the [AKS release tracker](release-tracker.md) for available Kubernetes releases.
+## Frequently asked questions
++### Community support for AKS 1.27 expires in July 2024. Can I create a new AKS cluster with version 1.27 after that date?
+Yes, as long as LTS is enabled on the cluster, you can create a new AKS cluster with version 1.27 after the community support window has ended.
++### Can I enable and disable LTS on AKS 1.27 after the end of community support?
+You can enable the LTS support plan on AKS 1.27 after the end of community support. However, you can't disable LTS on AKS 1.27 after the end of community support.
++### I have a cluster running on version 1.27. Does that mean it's automatically in LTS?
+No, you need to explicitly enable LTS on the cluster to receive LTS support. Enabling LTS also requires the Premium tier.
++### What is the pricing model for LTS?
+LTS is available on the Premium tier. For more information, see the [Premium tier pricing](https://azure.microsoft.com/pricing/details/kubernetes-service/) page. |
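As a sketch of the FAQ answers above, enabling LTS on an existing cluster is typically a single `az aks update` call that also moves the cluster to the Premium tier. The `--k8s-support-plan` flag name is an assumption based on the LTS documentation; verify it against the current CLI reference, and treat the resource names as placeholders.

```bash
# Illustrative only: opt an existing cluster into long-term support.
# The --k8s-support-plan flag name is an assumption; confirm it against the
# current az aks update reference. Resource names are placeholders.
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --tier premium \
  --k8s-support-plan AKSLongTermSupport
```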
aks | Upgrade Windows Os | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade-windows-os.md | + + Title: Upgrade the OS version for your Azure Kubernetes Service (AKS) Windows workloads +description: Learn how to upgrade the OS version for Windows workloads on Azure Kubernetes Service (AKS). +++ Last updated : 09/12/2023++++++# Upgrade the OS version for your Azure Kubernetes Service (AKS) Windows workloads ++When upgrading the OS version of a running Windows workload on Azure Kubernetes Service (AKS), you need to deploy a new node pool to ensure the Windows versions match on each node pool. This article describes the steps to upgrade the OS version for Windows workloads on AKS. While this example focuses on the upgrade from Windows Server 2019 to Windows Server 2022, the same process can be followed to upgrade from any Windows Server version to another. ++## Windows Server OS version support ++When a new version of the Windows Server operating system is released, AKS is committed to supporting it and recommending you upgrade to the latest version to take advantage of the fixes, improvements, and new functionality. AKS provides a five-year support lifecycle for every Windows Server version, starting with Windows Server 2022. During this period, AKS will release a new version that supports a newer version of Windows Server OS for you to upgrade to. ++> [!NOTE] +> - Windows Server 2019 is being retired after Kubernetes version 1.32 reaches end of life (EOL). For more information, see [AKS release notes][aks-release-notes]. +> - Windows Server 2022 is being retired after Kubernetes version 1.34 reaches its end of life (EOL). For more information, see [AKS release notes][aks-release-notes]. ++## Limitations ++Windows Server 2019 and Windows Server 2022 can't coexist on the same node pool on AKS. You need to create a new node pool to host the new OS version. It's important that you match the permissions and access of the previous node pool to the new one. ++## Before you begin ++- Update the `FROM` statement in your Dockerfile to the new OS version. +- Check your application and verify the container app works on the new OS version. +- Deploy the verified container app on AKS to a development or testing environment. +- Take note of the new image name or tag for use in this article. ++> [!NOTE] +> To learn how to build a Dockerfile for Windows workloads, see [Dockerfile on Windows](/virtualization/windowscontainers/manage-docker/manage-windows-dockerfile) and [Optimize Windows Dockerfiles](/virtualization/windowscontainers/manage-docker/optimize-windows-dockerfile). ++## Add a Windows Server 2022 node pool to an existing cluster ++- [Add a Windows Server 2022 node pool](./learn/quick-windows-container-deploy-cli.md) to an existing cluster. ++## Update the YAML file ++Node Selector is the most common and recommended option for placement of Windows pods on Windows nodes. ++1. Add Node Selector to your YAML file by adding the following annotation: ++ ```yaml + nodeSelector: + "kubernetes.io/os": windows + ``` ++ The annotation finds any available Windows node and places the pod on that node (following all other scheduling rules). When upgrading from Windows Server 2019 to Windows Server 2022, you need to enforce the placement on a Windows node and a node running the latest OS version. To accomplish this, one option is to use a different annotation: ++ ```yaml + nodeSelector: + "kubernetes.azure.com/os-sku": Windows2022 + ``` ++2. 
Once you update the `nodeSelector` in the YAML file, you also need to update the container image you want to use. You can get this information from the previous step in which you created a new version of the containerized application by changing the `FROM` statement on your Dockerfile. ++> [!NOTE] +> You should use the same YAML file you used to initially deploy the application. This ensures that no other configuration changes besides the `nodeSelector` and container image. ++## Apply the updated YAML file to the existing workload ++1. View the nodes on your cluster using the `kubectl get nodes` command. ++ ```bash + kubectl get nodes -o wide + ``` ++ The following example output shows all nodes on the cluster, including the new node pool you created and the existing node pools: ++ ```output + NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME + aks-agentpool-18877473-vmss000000 Ready agent 5h40m v1.23.8 10.240.0.4 <none> Ubuntu 18.04.6 LTS 5.4.0-1085-azure containerd://1.5.11+azure-2 + akspoolws000000 Ready agent 3h15m v1.23.8 10.240.0.208 <none> Windows Server 2022 Datacenter 10.0.20348.825 containerd://1.6.6+azure + akspoolws000001 Ready agent 3h17m v1.23.8 10.240.0.239 <none> Windows Server 2022 Datacenter 10.0.20348.825 containerd://1.6.6+azure + akspoolws000002 Ready agent 3h17m v1.23.8 10.240.1.14 <none> Windows Server 2022 Datacenter 10.0.20348.825 containerd://1.6.6+azure + akswspool000000 Ready agent 5h37m v1.23.8 10.240.0.115 <none> Windows Server 2019 Datacenter 10.0.17763.3165 containerd://1.6.6+azure + akswspool000001 Ready agent 5h37m v1.23.8 10.240.0.146 <none> Windows Server 2019 Datacenter 10.0.17763.3165 containerd://1.6.6+azure + akswspool000002 Ready agent 5h37m v1.23.8 10.240.0.177 <none> Windows Server 2019 Datacenter 10.0.17763.3165 containerd://1.6.6+azure + ``` ++2. Apply the updated YAML file to the existing workload using the `kubectl apply` command and specify the name of the YAML file. ++ ```bash + kubectl apply -f <filename> + ``` ++ The following example output shows a *configured* status for the deployment: ++ ```output + deployment.apps/sample configured + service/sample unchanged + ``` ++ At this point, AKS starts the process of terminating the existing pods and deploying new pods to the Windows Server 2022 nodes. ++3. Check the status of the deployment using the `kubectl get pods` command. ++ ```bash + kubectl get pods -o wide + ``` ++ The following example output shows the pods in the `default` namespace: ++ ```output + NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES + sample-7794bfcc4c-k62cq 1/1 Running 0 2m49s 10.240.0.238 akspoolws000000 <none> <none> + sample-7794bfcc4c-rswq9 1/1 Running 0 2m49s 10.240.1.10 akspoolws000001 <none> <none> + sample-7794bfcc4c-sh78c 1/1 Running 0 2m49s 10.240.0.228 akspoolws000000 <none> <none> + ``` ++## Security and authentication considerations ++If you're using Group Managed Service Accounts (gMSA), you need to update the Managed Identity configuration for the new node pool. gMSA uses a secret (user account and password) so the node that runs the Windows pod can authenticate the container against Microsoft Entra ID. To access that secret on Azure Key Vault, the node uses a Managed Identity that allows the node to access the resource. Since Managed Identities are configured per node pool, and the pod now resides on a new node pool, you need to update that configuration. 
For more information, see [Enable Group Managed Service Accounts (GMSA) for your Windows Server nodes on your Azure Kubernetes Service (AKS) cluster](./use-group-managed-service-accounts.md). ++The same principle applies to Managed Identities for any other pod or node pool when accessing other Azure resources. You need to update any access that Managed Identity provides to reflect the new node pool. To view update and sign-in activities, see [How to view Managed Identity activity](../active-directory/managed-identities-azure-resources/how-to-view-managed-identity-activity.md). ++## Next steps ++In this article, you learned how to upgrade the OS version for Windows workloads on AKS. To learn more about Windows workloads on AKS, see [Deploy a Windows container application on Azure Kubernetes Service (AKS)](./learn/quick-windows-container-deploy-cli.md). ++<!-- LINKS - External --> +[aks-release-notes]: https://github.com/Azure/AKS/releases + |
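As a hedged sketch of the "Add a Windows Server 2022 node pool" step referenced above, the command typically looks like the following; the names are placeholders, and the linked quickstart remains the authoritative walkthrough.

```bash
# Illustrative only: add a Windows Server 2022 node pool next to an existing
# Windows Server 2019 pool. Names are placeholders; Windows node pool names
# are limited to six characters.
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name npws22 \
  --os-type Windows \
  --os-sku Windows2022 \
  --node-count 3
```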
aks | Windows Annual Channel | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/windows-annual-channel.md | + + Title: Use Windows Annual Channel for Containers on Azure Kubernetes Service (AKS) +description: Learn about Windows Annual Channel for Containers for Windows containers on Azure Kubernetes Service (AKS). +++++ Last updated : 07/01/2024+++# Use Windows Annual Channel for Containers on Azure Kubernetes Service (AKS) (Preview) ++AKS supports [Windows Server Annual Channel for Containers](https://techcommunity.microsoft.com/t5/windows-server-news-and-best/windows-server-annual-channel-for-containers/ba-p/3866248) in public preview. Each channel version is released annually and is supported for *two years*. This channel is beneficial if you require increased innovation cycles and portability. ++Windows Annual Channel versions are based on the Kubernetes version of your node pool. To upgrade from one Annual Channel version to the next, you can [upgrade to a Kubernetes version][upgrade-aks-cluster] that supports the next Annual Channel version. +++## Supported Annual Channel releases ++AKS releases support for new releases of Windows Server Annual Channel for Containers in alignment with Kubernetes versions. For the latest updates, see the [AKS release notes][release-notes]. The following table provides an estimated release schedule for upcoming Annual Channel releases: ++| K8s version | Annual Channel (host) version | Container image supported | End of support date | +|--|-|-|-| +| 1.28 | 23H2 (preview only) | Windows Server 2022 | End of 1.30 support | +| 1.31 | 24H2 | Windows Server 2022 & Windows Server 2025 | End of 1.34 support | +| 1.35 | 25H2 | Windows Server 2025 | End of 1.38 support | ++## Windows Annual Channel vs. Long Term Servicing Channel Releases (LTSC) ++AKS supports Long Term Servicing Channel Releases (LTSC), including Windows Server 2022 and Windows Server 2019. These come from a different release channel than Windows Server Annual Channel for Containers. To view our current recommendations, see the [Windows best practices documentation][windows-best-practices]. ++> [!NOTE] +> Windows Server 2019 will retire after Kubernetes version 1.32 reaches end of life, and Windows Server 2022 will retire after Kubernetes version 1.34 reaches end of life. For more information, see the [AKS release notes][release-notes]. ++The following table compares Windows Annual Channel and Long Term Servicing Channel releases: ++| Channel | Support | Upgrades | +|||-| +| Long Term Servicing Channel (LTSC) | LTSC channels are released every three years and are supported for five years. This channel is recommended for customers using Long Term Support. | To upgrade from one release to the next, you need to migrate your node pools to a new OS SKU option and rebuild your container images with the new OS version. | +| Annual Channel for Containers | Annual Channel releases occur annually and are supported for two years. | To upgrade to the latest release, you can upgrade the Kubernetes version of your node pool. | ++## Before you begin ++* You need the Azure CLI version 2.56.0 or later installed and configured to set `os-sku` to `WindowsAnnual` with the `az aks nodepool add` command. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli]. ++### Limitations ++* Windows Annual Channel doesn't support Azure Network Policy Manager (NPM). 
++### Install the `aks-preview` Azure CLI extension ++* Register or update the aks-preview extension using the [`az extension add`][az-extension-add] or [`az extension update`][az-extension-update] command. ++ ```azurecli-interactive
 + # Register the aks-preview extension
 + az extension add --name aks-preview
 + # Update the aks-preview extension
 + az extension update --name aks-preview
 + ```
++### Register the `AKSWindowsAnnualPreview` feature flag ++1. Register the `AKSWindowsAnnualPreview` feature flag using the [`az feature register`][az-feature-register] command. ++ ```azurecli-interactive
 + az feature register --namespace "Microsoft.ContainerService" --name "AKSWindowsAnnualPreview"
 + ```
++ It takes a few minutes for the status to show *Registered*. ++2. Verify the registration status using the [`az feature show`][az-feature-show] command. ++ ```azurecli-interactive
 + az feature show --namespace "Microsoft.ContainerService" --name "AKSWindowsAnnualPreview"
 + ```
++3. When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider using the [`az provider register`][az-provider-register] command. ++ ```azurecli-interactive
 + az provider register --namespace Microsoft.ContainerService
 + ```
++## Use Windows Annual Channel for Containers on AKS ++To use Windows Annual Channel on AKS, specify the following parameters: ++* `os-type` set to `Windows` +* `os-sku` set to `WindowsAnnual` ++Windows Annual Channel versions are based on the Kubernetes version of your node pool. To check which release you'll get based on the Kubernetes version of your node pool, see the [supported Annual Channel releases](#supported-annual-channel-releases). ++### Create a new Windows Annual Channel node pool ++#### [Azure CLI](#tab/azure-cli) ++* Create a Windows Annual Channel node pool using the [`az aks nodepool add`][az-aks-nodepool-add] command. The following example creates a Windows Annual Channel node pool with the 23H2 release: ++ ```azurecli-interactive
 + az aks nodepool add \
 + --resource-group $RESOURCE_GROUP_NAME \
 + --cluster-name $CLUSTER_NAME \
 + --os-type Windows \
 + --os-sku WindowsAnnual \
 + --kubernetes-version 1.29 \
 + --name $NODE_POOL_NAME \
 + --node-count 1
 + ```
++ > [!NOTE]
 + > If you don't specify the Kubernetes version during node pool creation, AKS uses the same Kubernetes version as your cluster. ++#### [Azure PowerShell](#tab/azure-powershell) ++* Create a Windows Annual Channel node pool using the [`New-AzAksNodePool`][new-azaksnodepool] cmdlet. ++ ```azurepowershell
 + New-AzAksNodePool -ResourceGroupName $RESOURCE_GROUP_NAME `
 + -ClusterName $CLUSTER_NAME `
 + -VmSetType VirtualMachineScaleSets `
 + -OsType Windows `
 + -OsSKU WindowsAnnual `
 + -Name $NODE_POOL_NAME
 + ```
++++### Verify Windows Annual Channel node pool creation ++* Verify Windows Annual Channel node pool creation by checking the OS SKU of your node pool using the `kubectl describe node` command. ++ ```bash
 + kubectl describe node $NODE_POOL_NAME
 + ```
++ If you successfully created a Windows Annual Channel node pool, you should see the following output: ++ ```output
 + Name: npwin
 + Roles: agent
 + Labels: agentpool=npwin
 + ...
 + kubernetes.azure.com/os=windows
 + ...
 + kubernetes.azure.com/node-image-version=AKSWindows-23H2-gen2
 + ... 
+ kubernetes.azure.com/os-sku=WindowsAnnual + ``` ++### Upgrade an existing node pool to Windows Annual Channel ++You can upgrade an existing node pool from an LTSC release to Windows Annual Channel by following the guidance in [Upgrade the OS version for your Azure Kubernetes Service (AKS) Windows workloads][upgrade-windows-os]. ++To upgrade from one Annual Channel version to the next, you can [upgrade to a Kubernetes version][upgrade-aks-cluster] that supports the next Annual Channel version. ++## Next steps ++To learn more about Windows containers on AKS, see the following resources: ++> [!div class="nextstepaction"] +> [Windows best practices][windows-best-practices] +> [Windows FAQ][windows-faq] ++<!-- LINKS -->
+[windows-best-practices]: ./windows-best-practices.md
+[windows-faq]: ./windows-faq.md
+[upgrade-aks-cluster]: ./upgrade-aks-cluster.md
+[upgrade-windows-os]: ./upgrade-windows-os.md
+[install-azure-cli]: /cli/azure/install-azure-cli
+[az-extension-add]: /cli/azure/azure-cli-extensions-overview#add-extensions
+[az-extension-update]: /cli/azure/azure-cli-extensions-overview#update-extensions
+[az-feature-register]: /cli/azure/feature#az_feature_register
+[az-feature-show]: /cli/azure/feature#az_feature_show
+[az-provider-register]: /cli/azure/provider#az_provider_register
+[az-aks-nodepool-add]: /cli/azure/aks/nodepool#az-aks-nodepool-add
+[new-azaksnodepool]: /powershell/module/az.aks/new-azaksnodepool
+[release-notes]: https://github.com/Azure/AKS/releases |
aks | Windows Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/windows-best-practices.md | You might want to containerize existing applications and run them using Windows > **Best practice guidance** >-> Windows Server 2022 provides improved security and performance, and is the recommended OS for Windows node pools on AKS. +> Windows Server 2022 provides improved security and performance and is the recommended OS for Windows node pools on AKS. AKS uses Windows Server 2022 as the host OS version and only supports process isolation. -AKS uses Windows Server 2019 and Windows Server 2022 as the host OS versions and only supports process isolation. AKS doesn't support container images built by other versions of Windows Server. For more information, see [Windows container version compatibility](/virtualization/windowscontainers/deploy-containers/version-compatibility). +AKS supports two options for the Windows Server operating system: Long Term Servicing Channel Releases (LTSC) and Windows Server Annual Channel for Containers. -Windows Server 2022 is the default OS for Kubernetes version 1.25 and later. Windows Server 2019 will retire after Kubernetes version 1.32 reaches end of life. Windows Server 2022 will retire after Kubernetes version 1.34 reaches its end of life. For more information, see [AKS release notes][aks-release-notes]. To stay up to date on the latest Windows Server OS versions and learn more about our roadmap of what's planned for support on AKS, see our [AKS public roadmap](https://github.com/azure/aks/projects/1). +1. AKS supports Long Term Servicing Channel Releases (LTSC), including Windows Server 2022 and Windows Server 2019. This channel is released every three years and is supported for five years. Customers using Long Term Support (LTS) should use Windows Server 2022. + AKS uses Windows Server 2019 and Windows Server 2022 as the host OS versions and only supports process isolation. AKS doesn't support container images built by other versions of Windows Server. For more information, see [Windows container version compatibility](/virtualization/windowscontainers/deploy-containers/version-compatibility). ++ Windows Server 2022 is the default OS for Kubernetes version 1.25 and later. Windows Server 2019 will retire after Kubernetes version 1.32 reaches end of life. Windows Server 2022 will retire after Kubernetes version 1.34 reaches its end of life. For more information, see [AKS release notes][aks-release-notes]. To stay up to date on the latest Windows Server OS versions and learn more about our roadmap of what's planned for support on AKS, see our [AKS public roadmap](https://github.com/azure/aks/projects/1). ++2. AKS supports [Windows Server Annual Channel for Containers](https://techcommunity.microsoft.com/t5/windows-server-news-and-best/windows-server-annual-channel-for-containers/ba-p/3866248) (preview). This channel is released annually and is supported for 2 years. This channel is beneficial for customers requiring increased innovation cycles and portability. The portability functionality enables the Windows Server 2022-based container image OS to run on newer versions of Windows Server host OS, such as the new annual channel release. ++ Windows Annual Channel versions are based on the Kubernetes version of your node pool. To upgrade from one Annual Channel version to the next, [upgrade to a Kubernetes version](./upgrade-aks-cluster.md) that supports the next Annual Channel version. 
For more information, see [Windows Server Annual Channel for Containers on AKS][use-windows-annual]. ## Networking To learn more about Windows containers on AKS, see the following resources: [upgrade-aks-node-images]: ./node-image-upgrade.md [upgrade-windows-workloads-aks]: ./upgrade-windows-2019-2022.md [windows-on-aks-partner-solutions]: ./windows-aks-partner-solutions.md+[use-windows-annual]: ./windows-annual-channel.md <!-- LINKS - external --> [aks-release-notes]: https://github.com/Azure/AKS/releases |
aks | Windows Vs Linux Containers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/windows-vs-linux-containers.md | This article covers important considerations to keep in mind when using Windows | [AKS Image Cleaner][aks-image-cleaner] | Not supported. | | [BYOCNI][byo-cni] | Not supported. | | [Open Service Mesh][open-service-mesh] | Not supported. |-| [GPU][windows-gpu] | Supported in preview. | +| [GPU][gpu] | Supported in preview. | | [Multi-instance GPU][multi-instance-gpu] | Not supported. |-| [Generation 2 VMs (preview)][gen-2-vms] | Supported but not by default. | -| [Custom node config][custom-node-config] | ΓÇó Custom node config has two configurations:<br/> ΓÇó [kubelet][custom-kubelet-parameters]: Supported in preview.<br/> ΓÇó OS config: Not supported. | +| [Generation 2 VMs (preview)][gen-2-vms] | Supported. | +| [Custom node config][custom-node-config] | ΓÇó Custom node config has two configurations:<br/> ΓÇó [kubelet][custom-kubelet-parameters]: Supported.<br/> ΓÇó OS config: Not supported. | ## Next steps For more information on Windows containers, see the [Windows Server containers F [node-image-upgrade]: node-image-upgrade.md [byo-cni]: use-byo-cni.md [open-service-mesh]: open-service-mesh-about.md-[windows-gpu]: use-windows-gpu.md +[gpu]: use-windows-gpu.md [multi-instance-gpu]: gpu-multi-instance.md [gen-2-vms]: generation-2-vm.md [custom-node-config]: custom-node-configuration.md [custom-kubelet-parameters]: custom-node-configuration.md#kubelet-custom-configuration- |
api-management | Api Management Gateways Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-gateways-overview.md | The following tables compare features available in the following API Management | [Availability zones](zone-redundancy.md) | Premium | ❌ | ❌ | ✔️<sup>1</sup> | | [Multi-region deployment](api-management-howto-deploy-multi-region.md) | Premium | ❌ | ❌ | ✔️<sup>1</sup> | | [CA root certificates](api-management-howto-ca-certificates.md) for certificate validation | ✔️ | ✔️ | ❌ | ✔️<sup>3</sup> | -| [CA root certificates](api-management-howto-ca-certificates.md) for certificate validation | ✔️ | ✔️ | ❌ | ✔️<sup>3</sup> | | [Managed domain certificates](configure-custom-domain.md?tabs=managed#domain-certificate-options) | Developer, Basic, Standard, Premium | ❌ | ✔️ | ❌ | | [TLS settings](api-management-howto-manage-protocols-ciphers.md) | ✔️ | ✔️ | ✔️ | ✔️ | | **HTTP/2** (Client-to-gateway) | ✔️<sup>4</sup> | ✔️<sup>4</sup> |❌ | ✔️ | |
app-service | App Service App Service Environment Control Inbound Traffic | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-control-inbound-traffic.md | -> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. +> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. > > As of 29 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss. > |
app-service | App Service App Service Environment Create Ilb Ase Resourcemanager | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-create-ilb-ase-resourcemanager.md | -> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. +> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. > > As of 29 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss. > |
app-service | App Service App Service Environment Intro | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-intro.md | -> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. +> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. > > As of 29 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss. > |
app-service | App Service App Service Environment Layered Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-layered-security.md | -> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. +> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. > > As of 29 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss. > |
app-service | App Service App Service Environment Network Architecture Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-network-architecture-overview.md | -> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. +> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. > > As of 29 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss. > |
app-service | App Service App Service Environment Network Configuration Expressroute | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-network-configuration-expressroute.md | -> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. +> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. > > As of 29 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss. > |
app-service | App Service App Service Environment Securely Connecting To Backend Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-securely-connecting-to-backend-resources.md | -> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. +> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. > > As of 29 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss. > |
app-service | App Service Environment Auto Scale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-environment-auto-scale.md | -> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. +> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. > > As of 29 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss. > |
app-service | App Service Web Configure An App Service Environment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-web-configure-an-app-service-environment.md | -> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. +> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. > > As of 29 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss. > |
app-service | App Service Web Scale A Web App In An App Service Environment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-web-scale-a-web-app-in-an-app-service-environment.md | -> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. +> This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. > > As of 29 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss. > |
app-service | Ase Multi Tenant Comparison | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/ase-multi-tenant-comparison.md | Title: 'App Service Environment v3 and App Service public multitenant comparison' description: This article provides an overview of the difference between App Service Environment v3 and the public multitenant offering of App Service. Previously updated : 6/14/2024 Last updated : 7/8/2024 An App Service Environment is an Azure App Service feature that provides a fully |Dedicated host group|[Available](overview.md#dedicated-environment) |No | |Remote file storage|Fully dedicated to the App Service Environment |Remote file storage for the application is dedicated, but the storage is hosted on a shared file server | |Private inbound configuration|Yes, using ILB App Service Environment variation |Yes, via private endpoint |-|Planned maintenance|[Manual upgrade preference is available](how-to-upgrade-preference.md). Maintenance is nondisruptive to your apps. |The platform handles maintenance and is nondisruptive to your apps | +|Planned maintenance|[Manual upgrade preference is available](how-to-upgrade-preference.md) |The platform handles maintenance. [Service health notifications are available](../../app-service/routine-maintenance.md). | |Aggregate remote file share storage limit|1 TB for all apps in an App Service Environment v3|250 GB for all apps in a single App Service plan. 500 GB for all apps across all App Service plans in a single resource group.| ### Scaling |
app-service | Certificates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/certificates.md | -> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. +> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. > > As of 29 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss. > |
app-service | Create External Ase | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/create-external-ase.md | -> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. +> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. > > As of 29 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss. > |
app-service | Create From Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/create-from-template.md | -> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. +> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. > > As of 29 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss. > |
app-service | Create Ilb Ase | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/create-ilb-ase.md | -> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. +> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. > > As of 29 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss. > |
app-service | Firewall Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/firewall-integration.md | -> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. +> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. > > As of 29 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss. > |
app-service | Forced Tunnel Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/forced-tunnel-support.md | -> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. +> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. > > As of 29 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss. > |
app-service | Intro | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/intro.md | -> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. +> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. > > As of 29 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss. > |
app-service | Management Addresses | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/management-addresses.md | -> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. +> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. > > As of 29 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss. > |
app-service | Network Info | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/network-info.md | -> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. +> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. > > As of 29 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss. > |
app-service | Using An Ase | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/using-an-ase.md | -> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. +> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. > > As of 29 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss. > |
app-service | Version Comparison | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/version-comparison.md | -> App Service Environment v1 and v2 [will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). After that date, those versions will no longer be supported and any remaining App Service Environment v1 and v2s and the applications running on them will be deleted. +> App Service Environment v1 and v2 [will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/). After that date, those versions will no longer be supported and any remaining App Service Environment v1 and v2s and the applications running on them will be deleted. There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1 or v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. > |
app-service | Zone Redundancy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/zone-redundancy.md | -> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. +> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. > > As of 29 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss. > |
app-service | Tutorial Connect Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-overview.md | Title: 'Securely connect to Azure resources' -description: Your app service may need to connect to other Azure services such as a database, storage, or another app. This overview recommends the more secure method for connecting. +description: Shows you how to connect to other Azure services such as a database, storage, or another app. This overview recommends the more secure method for connecting. -- Previously updated : 01/16/2023+ Last updated : 07/06/2024 +#customer intent: As a developer, I want to learn how to securely connect to Azure resources from Azure App Service so that I can protect sensitive data and ensure secure communication. -# Securely connect to Azure services and databases from Azure App Service +# Secure connectivity to Azure services and databases from Azure App Service ++Your app service might need to connect to other Azure services such as a database, storage, or another app. This overview recommends different methods for connecting and when to use them. -Your app service may need to connect to other Azure services such as a database, storage, or another app. This overview recommends different methods for connecting and when to use them. +Today, the decision for a connectivity approach is closely related to secrets management. The common pattern of using connection secrets in connection strings, such as username and password, secret key, etc. is no longer considered the most secure approach for connectivity. The risk is even higher today because threat actors regularly crawl public GitHub repositories for accidentally committed connection secrets. For cloud applications, the best secrets management is to have no secrets at all. When you migrate to Azure App Service, your app might start with secrets-based connectivity, and App Service lets you keep secrets securely. However, Azure can help secure your app's back-end connectivity through Microsoft Entra authentication, which eliminates secrets altogether in your app. |Connection method|When to use| |--|--|-|[Connect using the app identity](#connect-using-the-app-identity)|* You want to connect to a resource without an authenticated user present or using the app identity.<br>* You don't need to manage credentials, keys, or secrets, or the credentials arenΓÇÖt even accessible to you.<br>* You can use managed identities to manage credentials for you.<br>* A Microsoft Entra identity is required to access the Azure resource. For example, services such as Microsoft Graph or Azure management SDKs.| -|[Connect as the authenticated user](#connect-as-the-authenticated-user)| * You want to access a resource and perform some action as the signed-in user.| -|[Connect using secrets](#connect-using-secrets)|* You need secrets to be passed to your app as environment variables.<br>* You want to connect to non-Azure services such as GitHub, Twitter, Facebook, or Google.<br>* The downstream resource doesn't support Microsoft Entra authentication. 
<br>* The downstream resource requires a connection string or key or secret of some sort.| +|[Connect with an app identity](#connect-with-an-app-identity)|* You want to remove credentials, keys, or secrets completely from your application.<br/>* The downstream Azure service supports Microsoft Entra authentication, such as Microsoft Graph.<br/>* The downstream resource doesn't need to know the current signed-in user or doesn't need granular authorization of the current signed-in user.| +|[Connect on behalf of the signed-in user](#connect-on-behalf-of-the-signed-in-user)| * The app must access a downstream resource on behalf of the signed-in user.<br/>* The downstream Azure service supports Microsoft Entra authentication, such as Microsoft Graph.<br/>* The downstream resource must perform granular authorization of the current signed-in user.| +|[Connect using secrets](#connect-using-secrets)|* The downstream resource requires connection secrets.<br/>* Your app connects to non-Azure services, such as an on-premises database server.<br/>* The downstream Azure service doesn't support Microsoft Entra authentication yet.| -## Connect using secrets +## Connect with an app identity -There are two recommended ways to use secrets in your app: using secrets stored in Azure Key Vault or secrets in App Service application settings. +If your app already uses a single set of credentials to access a downstream Azure service, you can quickly convert the connection to use an app identity instead. A [managed identity](overview-managed-identity.md) from Microsoft Entra ID lets App Service access resources without secrets, and you can manage its access through role-based access control (RBAC). A managed identity can connect to any Azure resource that supports Microsoft Entra authentication, and the authentication takes place with short-lived tokens. -### Use secrets in app settings +The following image demonstrates an App Service app connecting to other Azure services: -Some apps access secrets using environment variables. Traditionally, App Service [app settings](configure-common.md) have been used to store connection strings, API keys, and other environment variables. These secrets are injected into your application code as environment variables at app startup. App settings are always encrypted when stored (encrypted-at-rest). If you also want access policies and audit history for your secrets, consider putting them in Azure Key Vault and using [Key Vault references](app-service-key-vault-references.md) in your app settings. +* A: User visits Azure app service website. +* B: Securely **connect from** App Service **to** another Azure service using a managed identity. +* C: Securely **connect from** App Service **to** Microsoft Graph using a managed identity. 
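As an illustration of the app-identity approach described above, the following Azure CLI sketch enables a system-assigned managed identity on a web app and grants it a role on a downstream resource. The resource group, app, and storage account names are placeholders for this example, not values taken from the article.

```azurecli
# Enable the system-assigned managed identity on the web app and capture its principal ID.
principalId=$(az webapp identity assign --resource-group <group-name> --name <app-name> --query principalId --output tsv)

# Grant the identity least-privilege access to a downstream resource (here, read-only access to blob data).
storageId=$(az storage account show --resource-group <group-name> --name <storage-account-name> --query id --output tsv)
az role assignment create --assignee $principalId --role "Storage Blob Data Reader" --scope $storageId
```

At runtime the app then requests short-lived tokens for that resource (for example, through an Azure SDK managed identity credential) instead of reading a stored secret.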
+ Examples of using application secrets to connect to a database: -- [ASP.NET Core with SQL DB](tutorial-dotnetcore-sqldb-app.md)-- [ASP.NET with SQL DB](app-service-web-tutorial-dotnet-sqldatabase.md)-- [PHP with MySQL](tutorial-php-mysql-app.md)-- [Node.js with MongoDB](tutorial-nodejs-mongodb-app.md)-- [Python with Postgres](tutorial-python-postgresql-app.md)-- [Java with Spring Data](tutorial-java-spring-cosmosdb.md)-- [Quarkus with Postgres](tutorial-java-quarkus-postgresql-app.md)+- [Tutorial: Connect to Azure databases from App Service without secrets using a managed identity](tutorial-connect-msi-azure-database.md) +- [Tutorial: Connect to SQL Database from .NET App Service without secrets using a managed identity](tutorial-connect-msi-sql-database.md) +- [Tutorial: Connect to a PostgreSQL Database from Java Tomcat App Service without secrets using a managed identity](tutorial-java-tomcat-connect-managed-identity-postgresql-database.md) -### Use secrets from Key Vault +## Connect on behalf of the signed-in user -[Azure Key Vault](app-service-key-vault-references.md) can be used to securely store secrets and keys, monitor access and use of secrets, and simplify administration of application secrets. If your app's downstream service doesn't support Microsoft Entra authentication or requires a connection string or key, use Key Vault to store your secrets and connect your app to Key Vault with a managed identity and retrieve the secrets. +Your app might need to connect to a downstream service on behalf of the signed-in user. App Service lets you easily authenticate users using the most common identity providers (see [Authentication and authorization in Azure App Service and Azure Functions](overview-authentication-authorization.md)). If you use the Microsoft provider (Microsoft Entra authentication), you can then flow the signed-in user to any downstream service. For example: -Benefits of managed identities integrated with Key Vault include: -- Access to the Key Vault is restricted to the app. -- App contributors, such as administrators, may have complete control of the App Service resources, and at the same time have no access to the Key Vault secrets. -- No code change is required if your application code already accesses connection secrets with app settings. -- Key Vault provides monitoring and auditing of who accessed secrets. -- Rotation of connection information in Key Vault requires no changes in App Service.+- Run a database query that returns confidential data that the signed-in user is authorized to read. +- Retrieve personal data or take actions as the signed-in user in Microsoft Graph. -The following image demonstrates App Service connecting to Key Vault using a managed identity and then accessing an Azure service using secrets stored i Key Vault: +The following image demonstrates an application securely accessing an SQL database on behalf of the signed-in user. 
+Some common scenarios are: +- [Connect to Microsoft Graph on behalf of the user](scenario-secure-app-access-microsoft-graph-as-user.md) +- [Connect to an SQL database on behalf of the user](tutorial-connect-app-access-sql-database-as-user-dotnet.md) +- [Connect to another App Service app on behalf of the user](tutorial-auth-aad.md) +- [Flow the signed-in user through multiple layers of downstream services](tutorial-connect-app-app-graph-javascript.md) -## Connect using the app identity +## Connect using secrets -In some cases, your app needs to access data under the identity of the app itself or without a signed-in user present. A [managed identity](overview-managed-identity.md) from Microsoft Entra ID allows App Service to access resources through role-based access control (RBAC), without requiring app credentials. A managed identity can connect to any resource that supports Microsoft Entra authentication. After assigning a managed identity to your web app, Azure takes care of the creation and distribution of a certificate. You don't have to worry about managing secrets or app credentials. +There are two recommended ways to use secrets in your app: using secrets stored in Azure Key Vault or secrets in App Service app settings. -The following image demonstrates the following an App Service connecting to other Azure -### Use secrets from Key Vault -* A: User visits Azure app service website. -* B: Securely **connect from** App Service **to** another Azure service using a managed identity. -* C: Securely **connect from** App Service **to** Microsoft Graph using a managed identity. +[Azure Key Vault](app-service-key-vault-references.md) can be used to securely store secrets and keys, monitor access and use of secrets, and simplify administration of application secrets. If the downstream service doesn't support Microsoft Entra authentication or requires a connection string or key, use Key Vault to store your secrets and connect your app to Key Vault with a managed identity and retrieve the secrets. Your app can access the key vault secrets as [Key Vault references](app-service-key-vault-references.md) in the app settings. +Benefits of managed identities integrated with Key Vault include: +- Access to the key vault secret is restricted to the app. +- App contributors, such as administrators, might have complete control of the App Service resources, and at the same time have no access to the key vault secrets. +- No code change is required if your application code already accesses connection secrets with app settings. +- Key Vault provides monitoring and auditing of who accessed secrets. +- Rotation of key vault secrets requires no changes in App Service. -## Connect as the authenticated user +The following image demonstrates App Service connecting to Key Vault using a managed identity and then accessing an Azure service using secrets stored in Key Vault: -In some cases, your app needs to connect to a resource and perform some action that only the signed-in user can do. Grant delegated permissions to your app to connect to resources using the identity of the signed-in user. -The following image demonstrates an application securely accessing an SQL database on behalf of the signed-in user. +### Use secrets in app settings +For apps that connect to services using secrets (such as usernames, passwords, and API keys), App Service can store them securely in [app settings](configure-common.md). These secrets are injected into your application code as environment variables at app startup. 
App settings are always encrypted when stored (encrypted-at-rest). For more advanced secrets management, such as secrets rotation, access policies, and audit history, try [using Key Vault](#use-secrets-from-key-vault). -Some common scenarios are: -- [Connect to Microsoft Graph](scenario-secure-app-access-microsoft-graph-as-user.md) as the user-- [Connect to an SQL database](tutorial-connect-app-access-sql-database-as-user-dotnet.md) as the user-- [Connect to another App Service app](tutorial-auth-aad.md) as the user-- [Connect to another App Service app and then a downstream service](tutorial-connect-app-app-graph-javascript.md) as the user+Examples of using application secrets to connect to a database: +- [Tutorial: Deploy an ASP.NET Core and Azure SQL Database app to Azure App Service](tutorial-dotnetcore-sqldb-app.md) +- [Tutorial: Deploy an ASP.NET app to Azure with Azure SQL Database](app-service-web-tutorial-dotnet-sqldatabase.md) +- [Tutorial: Deploy a PHP, MySQL, and Redis app to Azure App Service](tutorial-php-mysql-app.md) +- [Deploy a Node.js + MongoDB web app to Azure](tutorial-nodejs-mongodb-app.md) +- [Deploy a Python (Django or Flask) web app with PostgreSQL in Azure](tutorial-python-postgresql-app.md) +- [Tutorial: Build a Tomcat web app with Azure App Service on Linux and MySQL](tutorial-java-tomcat-mysql-app.md) +- [Tutorial: Build a Java Spring Boot web app with Azure App Service on Linux and Azure Cosmos DB](tutorial-java-spring-cosmosdb.md) +- [Tutorial: Build a Quarkus web app with Azure App Service on Linux and PostgreSQL](tutorial-java-quarkus-postgresql-app.md) ## Next steps |
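To make the Key Vault reference pattern from the entry above concrete, here is a minimal Azure CLI sketch. The vault, secret, app, and setting names are placeholders, and it assumes the app's managed identity has already been granted permission to read secrets from the vault.

```azurecli
# Store the connection secret in Key Vault.
az keyvault secret set --vault-name <vault-name> --name <secret-name> --value "<connection-string>"

# Point an app setting at the secret with a Key Vault reference. App Service resolves the reference
# at runtime using the app's identity, so the secret value never appears in the app configuration.
az webapp config appsettings set --resource-group <group-name> --name <app-name> --settings "MyDbConnection=@Microsoft.KeyVault(SecretUri=https://<vault-name>.vault.azure.net/secrets/<secret-name>/)"
```

Application code then reads `MyDbConnection` as an ordinary environment variable, so no code change is needed when the secret moves into Key Vault.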
app-service | Tutorial Dotnetcore Sqldb App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-dotnetcore-sqldb-app.md | In this tutorial, you learn how to: > * Stream diagnostic logs from Azure > * Manage the app in the Azure portal > * Provision and deploy by using Azure Developer CLI+> * Use passwordless SQL connectivity by using a managed identity ## Prerequisites Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps ## 2. Verify connection strings +> [!TIP] +> The default SQL database connection string uses SQL authentication. For more secure, passwordless authentication, see [How do I change the SQL Database connection to use a managed identity instead?](#how-do-i-change-the-sql-database-connection-to-use-a-managed-identity-instead) + The creation wizard generated connection strings for the SQL database and the Redis cache already. In this step, find the generated connection strings for later. :::row::: Having issues? Check the [Troubleshooting section](#troubleshooting). ## 3. Verify connection strings +> [!TIP] +> The default SQL database connection string uses SQL authentication. For more secure, passwordless authentication, see [How do I change the SQL Database connection to use a managed identity instead?](#how-do-i-change-the-sql-database-connection-to-use-a-managed-identity-instead) + The AZD template you use generated the connectivity variables for you already as [app settings](configure-common.md#configure-app-settings) and outputs them to the terminal for your convenience. App settings are one way to keep connection secrets out of your code repository. 1. In the AZD output, find the settings `AZURE_SQL_CONNECTIONSTRING` and `AZURE_REDIS_CONNECTIONSTRING`. To keep secrets safe, only the setting names are displayed. They look like this in the AZD output: Having issues? Check the [Troubleshooting section](#troubleshooting). Azure App Service can capture console logs to help you diagnose issues with your application. For convenience, the AZD template already [enabled logging to the local file system](troubleshoot-diagnostic-logs.md#enable-application-logging-linuxcontainer) and is [shipping the logs to a Log Analytics workspace](troubleshoot-diagnostic-logs.md#send-logs-to-azure-monitor). -<!-- The sample application includes standard logging statements to demonstrate this capability, as shown in the following snippet: +The sample application includes standard logging statements to demonstrate this capability, as shown in the following snippet: In the AZD output, find the link to stream App Service logs and navigate to it in the browser. The link looks like this in the AZD output: After you configure diagnostic logs, the app is restarted. 
You might need to ref - [How do I connect to the Azure SQL Database server that's secured behind the virtual network with other tools?](#how-do-i-connect-to-the-azure-sql-database-server-thats-secured-behind-the-virtual-network-with-other-tools) - [How does local app development work with GitHub Actions?](#how-does-local-app-development-work-with-github-actions) - [How do I debug errors during the GitHub Actions deployment?](#how-do-i-debug-errors-during-the-github-actions-deployment)+- [How do I change the SQL Database connection to use a managed identity instead?](#how-do-i-change-the-sql-database-connection-to-use-a-managed-identity-instead) - [I don't have permissions to create a user-assigned identity](#i-dont-have-permissions-to-create-a-user-assigned-identity) - [What can I do with GitHub Copilot in my codespace?](#what-can-i-do-with-github-copilot-in-my-codespace) If a step fails in the autogenerated GitHub workflow file, try modifying the fai See [Set up GitHub Actions deployment from the Deployment Center](deploy-github-actions.md#set-up-github-actions-deployment-from-the-deployment-center). +### How do I change the SQL Database connection to use a managed identity instead? ++The default connection string to the SQL database is managed by Service Connector, with the name *defaultConnector*, and it uses SQL authentication. To replace it with a connection that uses a managed identity, run the following commands in the [cloud shell](https://shell.azure.com) after replacing the placeholders: ++```azurecli-interactive +az extension add --name serviceconnector-passwordless --upgrade +az sql server update --name <database-server-name> --resource-group <group-name> --enable-public-network true +az webapp connection delete sql --connection defaultConnector --resource-group <group-name> --name <app-name> +az webapp connection create sql --connection defaultConnector --resource-group <group-name> --name <app-name> --target-resource-group <group-name> --server <database-server-name> --database <database-name> --client-type dotnet --system-identity --config-connstr true +az sql server update --name <database-server-name> --resource-group <group-name> --enable-public-network false +``` ++By default, the command `az webapp connection create sql --client-type dotnet --system-identity --config-connstr` does the following: ++- Sets your user as the Microsoft Entra ID administrator of the SQL database server. +- Creates a system-assigned managed identity and grants it access to the database. +- Generates a passwordless connection string called `AZURE_SQL_CONNECTIONSTRING`, which your app is already using at the end of the tutorial. ++Your app should now have connectivity to the SQL database. For more information, see [Tutorial: Connect to Azure databases from App Service without secrets using a managed identity](tutorial-connect-msi-azure-database.md). ++> [!TIP] +> **Don't want to enable public network connection?** You can skip `az sql server update --enable-public-network true` by running the commands from an [Azure cloud shell that's integrated with your virtual network](../cloud-shell/vnet/deployment.md) if you have the **Owner** role assignment on your subscription. +> +> To grant the identity the required access to the database that's secured by the virtual network, `az webapp connection create sql` needs direct connectivity with Microsoft Entra authentication to the database server. By default, the Azure cloud shell doesn't have this access to the network-secured database. + ### What can I do with GitHub Copilot in my codespace? 
You might have noticed that the GitHub Copilot chat view was already there for you when you created the codespace. For your convenience, we include the GitHub Copilot chat extension in the container definition (see *.devcontainer/devcontainer.json*). However, you need a [GitHub Copilot account](https://docs.github.com/copilot/using-github-copilot/using-github-copilot-code-suggestions-in-your-editor) (30-day free trial available). |
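A quick way to verify the managed-identity swap described in this entry is to list the app's connection strings and confirm the Service Connector-managed SQL entry is still present, now without embedded credentials. A minimal Azure PowerShell sketch with placeholder names (the tutorial itself uses the Azure CLI):

```azurepowershell-interactive
# Hedged sketch with placeholder names: spot-check the app's connection strings
# after swapping defaultConnector to a managed identity.
$app = Get-AzWebApp -ResourceGroupName "<group-name>" -Name "<app-name>"

# The Service Connector-managed SQL entry should still be listed, now passwordless.
$app.SiteConfig.ConnectionStrings | Select-Object Name, Type | Format-Table -AutoSize
```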
automation | Automation Tutorial Runbook Textual | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/learn/automation-tutorial-runbook-textual.md | You can use the `ForEach -Parallel` construct to process commands for each item |VMs|Enter the names of the virtual machines using the following syntax: `["VM1","VM2","VM3"]`| |Action|Enter `stop` or `start`.| -1. Navigate to your list of virtual machines and refresh the page every few seconds. Observe that the action for each VM happens in parallel. Without the `-Parallel` keyword, the actions would have performed sequentially. While the VMs will start sequentially, each VM may reach the **Running** phase at slightly different times based on the characteristics of each VM. +1. Navigate to your list of virtual machines and refresh the page every few seconds. Observe that the action for each VM happens in parallel. Without the `-Parallel` keyword, the actions would have been performed sequentially. While the VMs will start in parallel, each VM may reach the **Running** phase at slightly different times based on the characteristics of each VM. ## Clean up resources |
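To make the `ForEach -Parallel` behavior in this entry concrete, here's a minimal PowerShell Workflow sketch in the spirit of the tutorial's runbook; the workflow name, parameters, and authentication step are illustrative rather than the tutorial's exact code:

```powershell
workflow Start-Stop-AzureVMs
{
    param (
        [Parameter(Mandatory = $true)][string[]] $VMs,              # e.g. "VM1","VM2","VM3"
        [Parameter(Mandatory = $true)][string]   $Action,           # "start" or "stop"
        [Parameter(Mandatory = $true)][string]   $ResourceGroupName
    )

    # Authenticate with the Automation account's system-assigned managed identity.
    Connect-AzAccount -Identity | Out-Null

    # Each iteration runs in parallel, so every VM is started or stopped at once.
    ForEach -Parallel ($vm in $VMs)
    {
        if ($Action -eq 'stop') {
            Stop-AzVM -Name $vm -ResourceGroupName $ResourceGroupName -Force
        }
        else {
            Start-AzVM -Name $vm -ResourceGroupName $ResourceGroupName
        }
    }
}
```

Without `-Parallel`, the loop body would run once per VM in sequence instead.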
automation | Manage Sql Server In Automation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/manage-sql-server-in-automation.md | To allow access from the Automation system managed identity to the Azure SQL dat 1. Go to [Azure portal](https://portal.azure.com) home page and select **SQL servers**. 1. In the **SQL server** page, under **Settings**, select **SQL Databases**. 1. Select your database to go to the SQL database page and select **Query editor (preview)** and execute the following two queries:- - CREATE USER "AutomationAccount" FROM EXTERNAL PROVIDER WITH OBJECT_ID= `ObjectID` - - EXEC sp_addrolemember `db_owner`, "AutomationAccount" - - Automation account - replace with your Automation account's name - - Object ID - replace with object (principal) ID for your system managed identity principal from step 1. + ```sql + -- AutomationAccount: replace with your Automation account's name + -- ObjectID: replace with the object (principal) ID for your system managed identity principal from step 1. + CREATE USER "AutomationAccount" FROM EXTERNAL PROVIDER WITH OBJECT_ID = 'ObjectID'; + EXEC sp_addrolemember 'db_owner', 'AutomationAccount'; + ``` ## Sample code |
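Once the managed identity has a database user as shown above, a runbook can connect with an Entra ID token instead of a password. A hedged sketch (not the article's sample code; the server and database names are placeholders):

```powershell
# Hedged sketch: connect to Azure SQL from a runbook using the Automation
# account's system-assigned managed identity instead of SQL credentials.
Connect-AzAccount -Identity | Out-Null

# Get an Entra ID access token for Azure SQL.
$token = (Get-AzAccessToken -ResourceUrl "https://database.windows.net/").Token

# Placeholder server and database names -- replace with your own.
$conn = New-Object System.Data.SqlClient.SqlConnection
$conn.ConnectionString = "Server=tcp:<server-name>.database.windows.net,1433;Initial Catalog=<database-name>;Encrypt=True;"
$conn.AccessToken = $token
$conn.Open()
# ... run queries here ...
$conn.Close()
```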
automation | Source Control Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/source-control-integration.md | Azure Automation supports three types of source control: > Azure Automation Run As Account will retire on **September 30, 2023** and will be replaced with Managed Identities. Before that date, you need to [migrate from a Run As account to Managed identities](migrate-run-as-accounts-managed-identity.md). > [!NOTE]-> According to [this](/azure/devops/organizations/accounts/change-application-access-policies#application-connection-policies) Azure DevOps documentation, **Third-party application access via OAuth** policy is defaulted to **off** for all new organizations. So if you try to configure source control in Azure Automation with **Azure Devops (Git)** as source control type without enabling **Third-party application access via OAuth** under Policies tile of Organization Settings in Azure DevOps then you might get **SourceControl securityToken is invalid** error. Hence to avoid this error, make sure you first enable **Third-party application access via OAuth** under Policies tile of Organization Settings in Azure DevOps. +> According to the [Azure DevOps documentation](/azure/devops/organizations/accounts/change-application-access-policies#application-connection-policies), the **Third-party application access via OAuth** policy is set to **off** by default for all new organizations. If you try to configure source control in Azure Automation with **Azure DevOps (Git)** as the source control type without enabling **Third-party application access via OAuth** under the **Policies** tile of **Organization Settings** in Azure DevOps, you might get a **SourceControl securityToken is invalid** error. To avoid this error, make sure you first enable **Third-party application access via OAuth** under the **Policies** tile of **Organization Settings** in Azure DevOps. ## Configure source control |
azure-arc | Enable Guest Management At Scale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/enable-guest-management-at-scale.md | Title: Install Arc agent at scale for your VMware VMs description: Learn how to enable guest management at scale for Arc enabled VMware vSphere VMs. Previously updated : 04/23/2024 Last updated : 07/08/2024 Arc agents can be installed directly on machines without relying on VMware tools - The following command scans all the Arc for Server machines that belong to the vCenter in the specified subscription and links the machines with that vCenter. - ```azurecli - az connectedvmware vm create-from-machines --subscription contoso-sub --vcenter-id /subscriptions/fedcba98-7654-3210-0123-456789abcdef/resourceGroups/contoso-rg-2/providers/Microsoft.HybridCompute/vcenters/contoso-vcenter + ```azurecli-interactive + az connectedvmware vm create-from-machines --subscription contoso-sub --vcenter-id /subscriptions/999998ee-cd13-9999-b9d4-55ca5c25496d/resourceGroups/allhands-demo/providers/microsoft.connectedvmwarevsphere/VCenters/contoso-vcenter ``` - The following command scans all the Arc for Server machines that belong to the vCenter in the specified Resource Group and links the machines with that vCenter. - ```azurecli - az connectedvmware vm create-from-machines --resource-group contoso-rg --vcenter-id /subscriptions/fedcba98-7654-3210-0123-456789abcdef/resourceGroups/contoso-rg-2/providers/Microsoft.HybridCompute/vcenters/contoso-vcenter. + ```azurecli-interactive + az connectedvmware vm create-from-machines --resource-group contoso-rg --vcenter-id /subscriptions/999998ee-cd13-9999-b9d4-55ca5c25496d/resourceGroups/allhands-demo/providers/microsoft.connectedvmwarevsphere/VCenters/contoso-vcenter ``` - The following command can be used to link an individual Arc for Server resource to vCenter. - ```azurecli - az connectedvmware vm create-from-machines --resource-group contoso-rg --name contoso-vm --vcenter-id /subscriptions/fedcba98-7654-3210-0123-456789abcdef/resourceGroups/contoso-rg-2/providers/Microsoft.HybridCompute/vcenters/contoso-vcenter + ```azurecli-interactive + az connectedvmware vm create-from-machines --resource-group contoso-rg --name contoso-vm --vcenter-id /subscriptions/999998ee-cd13-9999-b9d4-55ca5c25496d/resourceGroups/allhands-demo/providers/microsoft.connectedvmwarevsphere/VCenters/contoso-vcenter ``` ## Next steps |
azure-functions | Functions Bindings Openai Textcompletion Input | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-openai-textcompletion-input.md | Title: Azure OpenAI text completion input binding for Azure Functions description: Learn how to use the Azure OpenAI text completion input binding to access Azure OpenAI text completion APIs during function execution in Azure Functions. Previously updated : 05/23/2024 Last updated : 07/08/2024 zone_pivot_groups: programming-languages-set-functions This example takes a prompt as input, sends it directly to the completions API, ::: zone-end ::: zone pivot="programming-language-javascript" +This example demonstrates the _templating_ pattern, where the HTTP trigger function takes a `name` parameter and embeds it into a text prompt, which is then sent to the Azure OpenAI completions API by the extension. The response to the prompt is returned in the HTTP response. + ::: zone-end ::: zone pivot="programming-language-typescript" This example demonstrates the _templating_ pattern, where the HTTP trigger function takes a `name` parameter and embeds it into a text prompt, which is then sent to the Azure OpenAI completions API by the extension. The response to the prompt is returned in the HTTP response. ::: zone-end ::: zone pivot="programming-language-powershell" |
azure-health-insights | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/oncophenotype/overview.md | +> [!IMPORTANT] +> Onco-Phenotype will be retired on July 31st, 2024, at which time the Onco-Phenotype model will no longer be available. +> +> The Onco-Phenotype model is being retired, but please note that all other models within Azure Health Insights will remain available. The container image for Onco-Phenotype will also be removed from the [Microsoft Artifact Registry](https://mcr.microsoft.com). If you've downloaded the image and have it deployed in your own hosting environment, the Onco-Phenotype model will cease to function. +> +> If you have Azure AI Health Insights deployed via the Azure portal, it will continue to work as usual, but the Onco-Phenotype endpoint will no longer be available. As per the standard operating procedure for the Onco-Phenotype model, API results are available for 24 hours from the time the request was created, after which the results are purged. We will honor this commitment up until the model is retired. +> +> We understand that you may have questions regarding this retirement. Please reach out to our Customer Service and Support (CSS) team for assistance. If you don't currently have CSS support, you can purchase support [here](https://azure.microsoft.com/support/plans/). Onco-Phenotype is an AI model that's offered within the context of the broader Azure AI Health Insights. It augments traditional clinical natural language processing tools by enabling healthcare organizations to rapidly identify key cancer attributes within their patient populations. For the Public Preview, you can select the Free F0 SKU. The official pricing wil Get started using the Onco-Phenotype model: >[!div class="nextstepaction"]-> [Deploy the service via the portal](../deploy-portal.md) +> [Deploy the service via the portal](../deploy-portal.md) |
azure-maps | Understanding Azure Maps Transactions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/understanding-azure-maps-transactions.md | The following table summarizes the Azure Maps services that generate transaction | [Traffic] | Yes | One request = 1 transaction (except tiles)<br>15 tiles = 1 transaction | <ul><li>Location Insights Traffic (Gen2 pricing)</li><li>Standard S1 Traffic Transactions (Gen1 S1 pricing)</li><li>Standard Traffic Transactions (Gen1 S0 pricing)</li><li>Maps Traffic Tiles (Gen2 pricing)</li><li>Standard S1 Tile Transactions (Gen1 S1 pricing)</li><li>Standard Tile Transactions (Gen1 S0 pricing)</li></ul> | | [Weather] | Yes | One request = 1 transaction | <ul><li>Location Insights Weather (Gen2 pricing)</li><li>Standard S1 Weather Transactions (Gen1 S1 pricing)</li><li>Standard Weather Transactions (Gen1 S0 pricing)</li></ul> | -<sup>1</sup> The Azure Maps Data service (both [v1] and [v2]) is now deprecated and will be retired on 9/16/24. To avoid service disruptions, all calls to the Data service will need to be updated to use the Azure Maps [Data Registry] service by 9/16/24. For more information, see [How to create data registry]. +<sup>1</sup> The Azure Maps Data service (both [v1] and [v2]) is now deprecated and will be retired on 9/16/24. To avoid service disruptions, all calls to the Data service need to be updated to use the Azure Maps [Data Registry] service by 9/16/24. For more information, see [How to create data registry]. ++> [!TIP] +> +> Unlike Bing Maps, Azure Maps doesn’t use [session IDs]. Instead, Azure Maps offers a number of free transactions each month as shown in [Azure Maps pricing]. For example, you get 5,000 free *Base Map Tile* transactions per month. Each transaction can include up to 15 tiles for a total of 75,000 tiles rendered for free each month. <!-- In Bing Maps, any time a synchronous Truck Routing request is made, three transactions are counted. 
Does this apply also to Azure Maps?--> The following table summarizes the Azure Maps services that generate transaction | [Conversion] | Part of a provisioned Creator resource and not transactions based.| Not transaction-based | Map Provisioning (Gen2 pricing) | | [Dataset] | Part of a provisioned Creator resource and not transactions based.| Not transaction-based | Map Provisioning (Gen2 pricing)| | [Feature State] | Yes, except for `FeatureState.CreateStateset`, `FeatureState.DeleteStateset`, `FeatureState.GetStateset`, `FeatureState.ListStatesets`, `FeatureState.UpdateStatesets` | One request = 1 transaction | Azure Maps Creator Feature State (Gen2 pricing) |-| [Render] | Yes, only with `GetMapTile` with Creator Tileset ID and `GetStaticTile`.<br>For everything else for Render, see Render section in the above table.| One request = 1 transaction<br>One tile = 1 transaction | Azure Maps Creator Map Render (Gen2 pricing) | +| [Render] | Yes, only with `GetMapTile` with Creator Tileset ID and `GetStaticTile`.<br>For more information on Render related transactions, see the Render section in the previous table.| One request = 1 transaction<br>One tile = 1 transaction | Azure Maps Creator Map Render (Gen2 pricing) | | [Tileset] | Part of a provisioned Creator resource and not transactions based.| Not transaction-based | Map Provisioning    (Gen2 pricing) | | [WFS] | Yes| One request = 1 transaction | Azure Maps Creator Web Feature (WFS) (Gen2 pricing) | The following table summarizes the Azure Maps services that generate transaction [Route]: /rest/api/maps/route [Search v1]: /rest/api/maps/search?view=rest-maps-1.0&preserve-view=true [Search v2]: /rest/api/maps/search+[session IDs]: /bingmaps/getting-started/bing-maps-dev-center-help/understanding-bing-maps-transactions#using-session-ids-to-make-billable-transactions-non-billable [Spatial]: /rest/api/maps/spatial [Tileset]: /rest/api/maps-creator/tileset [Timezone]: /rest/api/maps/timezone |
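To illustrate the tile accounting described in the tip above (15 tiles per transaction, 5,000 free Base Map Tile transactions per month, or 75,000 tiles), a small PowerShell sketch of the arithmetic:

```azurepowershell-interactive
# Rough arithmetic for the tile-to-transaction rule: 15 tiles = 1 transaction,
# with 5,000 free Base Map Tile transactions per month.
$tilesRendered        = 75000   # example monthly tile count
$tilesPerTransaction  = 15
$freeTileTransactions = 5000

$transactions = [math]::Ceiling($tilesRendered / $tilesPerTransaction)
$billable     = [math]::Max(0, $transactions - $freeTileTransactions)

"Transactions: $transactions; billable after the free grant: $billable"
```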
azure-netapp-files | Azure Netapp Files Cost Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-cost-model.md | For cost model specific to cross-region replication, see [Cost model for cross-r Azure NetApp Files is billed on provisioned storage capacity, which is allocated by creating capacity pools. Capacity pools are billed monthly based on a set cost per allocated GiB per hour. Capacity pool allocation is measured hourly. -Capacity pools must be at least 1 TiB and can be increased or decreased in 1-TiB intervals. Capacity pools contain volumes that range in size from a minimum of 100 GiB to a maximum of 100 TiB. Volumes are assigned quotas that are subtracted from the capacity pool's provisioned size. For an active volume, capacity consumption against the quota is based on logical (effective) capacity, being active filesystem data or snapshot data. See [How Azure NetApp Files snapshots work](snapshots-introduction.md) for details. +Capacity pools must be at least 1 TiB and can be increased or decreased in 1-TiB intervals. Capacity pools contain volumes that range in size from a minimum of 100 GiB to a maximum of 100 TiB for regular volumes and up to 500 TiB for [large volumes](azure-netapp-files-understand-storage-hierarchy.md#large-volumes). Volumes are assigned quotas that are subtracted from the capacity pool's provisioned size. For an active volume, capacity consumption against the quota is based on logical (effective) capacity, being active filesystem data or snapshot data. See [How Azure NetApp Files snapshots work](snapshots-introduction.md) for details. ### Pricing examples |
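A rough sketch of the capacity-pool billing arithmetic described in this entry; the per-GiB hourly rate below is a placeholder for illustration, not a published price:

```azurepowershell-interactive
# Hedged sketch of the billing model: allocated capacity x rate per GiB per hour,
# measured hourly. The rate is a placeholder, not a real Azure NetApp Files price.
$poolSizeTiB    = 4
$poolSizeGiB    = $poolSizeTiB * 1024
$ratePerGiBHour = 0.0002      # placeholder rate for illustration only
$hoursInMonth   = 730         # approximate

$monthlyCost = $poolSizeGiB * $ratePerGiBHour * $hoursInMonth
"Approximate monthly charge for a $poolSizeTiB-TiB pool: {0:N2}" -f $monthlyCost
```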
azure-netapp-files | Azure Netapp Files Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-introduction.md | Azure NetApp Files is designed to provide high-performance file storage for ente | Three flexible performance tiers (Standard, Premium, Ultra) | Three performance tiers with dynamic service-level change capability based on workload needs, including cool access for cold data. | Choose the right performance level for workloads and dynamically adjust performance without overspending on resources. | Small-to-large volumes | Easily resize file volumes from 100 GiB up to 100 TiB without downtime. | Scale storage as business needs grow without over-provisioning, avoiding upfront cost. | 1-TiB minimum capacity pool size | 1-TiB capacity pool is a reduced-size storage pool compared to the initial 4-TiB minimum. | Save money by starting with a smaller storage footprint and lower entry point, without sacrificing performance or availability. Scale storage based on growth without high upfront costs.-| 1,000-TiB maximum capacity pool | 1000-TiB capacity pool is an increased storage pool compared to the initial 500-TiB maximum. | Reduce waste by creating larger, pooled capacity and performance budget, and share and distribute across volumes. +| 2,048-TiB maximum capacity pool | 2048-TiB capacity pool is an increased storage pool compared to the initial 500-TiB maximum. | Reduce waste by creating larger, pooled capacity and performance budget, and share and distribute across volumes. | 50-500 TiB large volumes | Store large volumes of data up to 500 TiB in a single volume. | Manage large datasets and high-performance workloads with ease. | User and group quotas | Set quotas on storage usage for individual users and groups. | Control storage usage and optimize resource allocation. | Virtual machine (VM) networked storage performance | Higher VM network throughput compared to disk IO limits enable more demanding workloads on smaller Azure VMs. | Improve application performance at a smaller VM footprint, improving overall efficiency and lowering application license cost. |
azure-netapp-files | Azure Netapp Files Resource Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resource-limits.md | The following table describes resource limits for Azure NetApp Files: | Number of snapshots per volume | 255 | No | | Number of IPs in a virtual network (including immediately peered VNets) accessing volumes in an Azure NetApp Files hosting VNet | <ul><li>**Basic**: 1000</li><li>**Standard**: [Same standard limits as VMs](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits)</li></ul> | No | | Minimum size of a single capacity pool | 1 TiB* | No |-| Maximum size of a single capacity pool | 1000 TiB | Yes | +| Maximum size of a single capacity pool | 2048 TiB | Yes | | Minimum size of a single regular volume | 100 GiB | No | | Maximum size of a single regular volume | 100 TiB | No | | Minimum size of a single [large volume](large-volumes-requirements-considerations.md) | 50 TiB | No | |
azure-netapp-files | Regional Capacity Quota | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/regional-capacity-quota.md | For example: ## Request regional capacity quota increase -You can [submit a support request](azure-netapp-files-resource-limits.md#request-limit-increase) for an increase of a regional capacity quota without incurring extra cost. The support request you submit will be sent to the Azure capacity management team for processing. You will receive a response typically within two business days. The Azure capacity management team might contact you if you have a large request. +You can [submit a support request](azure-netapp-files-resource-limits.md#request-limit-increase) for an increase of a regional capacity quota without incurring extra cost. The support request you submit is sent to the Azure capacity management team for processing. You typically receive a response within two business days. The Azure capacity management team might contact you if you have a large request. -A regional capacity quota increase does not incur a billing increase. Billing is still based on the provisioned capacity pools. -For example, if you currently have 25 TiB of provisioned capacity, you can request a quota increase to 35 TiB. Within two business days, your quota increase will be applied to the requested region. When the quota increase is applied, you still pay for only the current provisioned capacity (25 TiB). But when you actually provision the additional 10 TiB, you will be billed for 35 TiB. +A regional capacity quota increase doesn't incur a billing increase. Billing is still based on the provisioned capacity pools. -The current [resource limits](azure-netapp-files-resource-limits.md#resource-limits) for Azure NetApp Files are not changing. You will still be able to provision a 500-TiB capacity pool. But before doing so, the regional capacity quota needs to be increased to 500 TiB. +For example, if you currently have 25 TiB of provisioned capacity, you can request a quota increase to 35 TiB. Within two business days, your quota increase is applied to the requested region. When the quota increase is applied, you still pay for only the current provisioned capacity (25 TiB). But when you actually provision the additional 10 TiB, you're billed for 35 TiB. ++To understand minimum and maximum capacity pool sizes, see [resource limits](azure-netapp-files-resource-limits.md#resource-limits) for Azure NetApp Files. ## Next steps |
backup | About Azure Vm Restore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/about-azure-vm-restore.md | Title: About the Azure Virtual Machine restore process description: Learn how the Azure Backup service restores Azure virtual machines Previously updated : 12/24/2021 Last updated : 10/12/2023 + # About Azure VM restore |
backup | Backup Azure Backup Sharepoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-backup-sharepoint.md | Title: Back up a SharePoint farm to Azure with DPM description: This article provides an overview of DPM/Azure Backup server protection of a SharePoint farm to Azure Previously updated : 10/27/2022 Last updated : 07/08/2024 This article describes how to back up and restore SharePoint data using System C System Center Data Protection Manager (DPM) enables you to back up a SharePoint farm to Microsoft Azure, which gives an experience similar to backing up other data sources. Azure Backup provides flexibility in the backup schedule to create daily, weekly, monthly, or yearly backup points, and gives you retention policy options for various backup points. DPM provides the capability to store local disk copies for quick recovery-time objectives (RTO) and to store copies to Azure for economical, long-term retention. -In this article, you'll learn about: --> [!div class="checklist"] -> - SharePoint supported scenarios -> - Prerequisites -> - Configure backup -> - Monitor operations -> - Restore SharePoint data -> - Restore a SharePoint database from Azure using DPM -> - Switch the Front-End Web Server - ## SharePoint supported scenarios -For information on the supported SharePoint versions and the DPM versions required to back them up, see [What can DPM back up?](/system-center/dpm/dpm-protection-matrix#applications-backup). +For information on the supported SharePoint versions and the DPM versions required to back them up, see [this article](/system-center/dpm/dpm-protection-matrix#applications-backup). ## Prerequisites |
backup | Encryption At Rest With Cmk For Backup Vault | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/encryption-at-rest-with-cmk-for-backup-vault.md | Title: Encrypt backup data in a Backup vault by using customer-managed keys description: Learn how to use Azure Backup to encrypt your backup data by using customer-managed keys (CMKs) in a Backup vault. Previously updated : 06/12/2024 Last updated : 06/24/2024 -# Encrypt backup data in a Backup vault by using customer-managed keys (preview) +# Encrypt backup data in a Backup vault by using customer-managed keys You can use Azure Backup to encrypt your backup data via customer-managed keys (CMKs) instead of platform-managed keys (PMKs), which are enabled by default. Your keys to encrypt the backup data must be stored in [Azure Key Vault](../key-vault/index.yml). The encryption key that you use for encrypting backups might be different from t To allow encryption, you must grant the Backup vault the permissions to access the encryption key in the key vault. You can change the key when necessary. -Support for CMK configuration for a Backup vault is in preview. - ## Support matrix ### Supported regions CMKs for Backup vaults are currently available in all Azure public regions. - Encryption settings support Azure Key Vault RSA and RSA-HSM keys only of sizes 2,048, 3,072, and 4,096. [Learn more about keys](../key-vault/keys/about-keys.md). Before you consider Key Vault regions for encryption settings, see [Key Vault disaster recovery scenarios](../key-vault/general/disaster-recovery-guidance.md) for regional failover support. -### Known limitations --- If you remove Key Vault access permissions from the managed identity, PostgreSQL backup or restore operations will fail with a generic error.--- If you remove Key Vault permissions from encryption settings, disable system-assigned identity, or detach/delete the managed identity from the Backup vault that you're using for encryption settings, tiering of background operations and restore-point expiration jobs will fail without surfacing errors to the Azure portal or other interfaces (for example, REST API or CLI). These operations will continue to fail and incur costs until you restore the required settings.- ## Considerations - After you enable encryption by using CMKs for a Backup vault, you can't revert to using PMKs (the default). You can change the encryption keys or the managed identity to meet requirements. CMKs for Backup vaults are currently available in all Azure public regions. - Moving a CMK-encrypted Backup vault across resource groups and subscriptions isn't currently supported. -- The feature of user-assigned managed identities for Backup vaults is currently in preview. You can configure it by using the Azure portal and REST APIs.- - After you enable encryption settings on the Backup vault, don't disable or detach the managed identity or remove Key Vault permissions used for encryption settings. These actions lead to failure of backup, restore, tiering, and restore-point expiration jobs. They'll incur costs for the data stored in the Backup vault until: - You restore the Key Vault permissions. CMKs for Backup vaults are currently available in all Azure public regions. When you create a Backup vault, you can enable encryption on backups by using CMKs. [Learn how to create a Backup vault](create-manage-backup-vault.md#create-a-backup-vault). +**Choose a client**: ++# [Azure portal](#tab/azure-portal) + To enable the encryption, follow these steps: -1. 
On the **Vault Properties** tab, specify the encryption key and the identity to be used for encryption. +1. On the **Vault Properties** tab, select **Add Identity**. :::image type="content" source="./media/encryption-at-rest-with-cmk-for-backup-vault/backup-vault-properties.png" alt-text="Screenshot that shows Backup vault properties." lightbox="./media/encryption-at-rest-with-cmk-for-backup-vault/backup-vault-properties.png"::: +1. On the **Select user assigned managed identity** blade, select a *managed identity* from the list that you want to use for encryption, and then select **Add**. + 2. For **Encryption type**, select **Use customer-managed key**. 3. To specify the key to be used for encryption, select the appropriate option. To enable the encryption, follow these steps: :::image type="content" source="./media/encryption-at-rest-with-cmk-for-backup-vault/add-key-uri.png" alt-text="Screenshot that shows the option for using a customer-managed key and encryption key details." lightbox="./media/encryption-at-rest-with-cmk-for-backup-vault/add-key-uri.png"::: -5. Specify the user-assigned managed identity to manage encryption with CMKs. +5. Add the user-assigned managed identity to manage encryption with CMKs. ++ During the vault creation, only *user-assigned managed identities* can be used for CMK. To add CMK with system-assigned managed identity, update the vault properties after creating the vault. +6. To enable encryption on the backup storage infrastructure, select **Infrastructure Encryption**. ++ You can enable this only on a new vault during the encryption using Customer-Managed Keys (CMK). ++7. Add tags (optional) and continue creating the vault. +++# [PowerShell](#tab/powershell) ++To enable the encryption on the Backup vault, update the following parameters in the [New-AzDataProtectionBackupVault](/powershell/module/az.dataprotection/new-azdataprotectionbackupvault?view=azps-11.6.0&preserve-view=true#example-3-create-a-backup-vault-with-cmk) command, and then run it. ++- `[-IdentityUserAssignedIdentity <Hashtable>]` +- `[-CmkEncryptionState <EncryptionState>]` +- `[-CmkInfrastructureEncryption <InfrastructureEncryptionState>]` +- `[-CmkIdentityType <IdentityType>]` +- `[-CmkUserAssignedIdentityId <String>]` +- `[-CmkEncryptionKeyUri <String>]` ++```azurepowershell-interactive +New-AzDataProtectionBackupVault -SubscriptionId xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx -ResourceGroupName "resourceGroupName" -VaultName "vaultName" -Location "location" -StorageSetting $storagesetting -IdentityType UserAssigned -UserAssignedIdentity $userAssignedIdentity -CmkEncryptionState Enabled -CmkIdentityType UserAssigned -CmkUserAssignedIdentityId $cmkIdentityId -CmkEncryptionKeyUri $cmkKeyUri -CmkInfrastructureEncryption Enabled +``` +++# [CLI](#tab/cli) ++To enable the encryption on the Backup vault, update the following parameters in the [az dataprotection backup-vault create](/cli/azure/dataprotection/backup-vault?view=azure-cli-latest&preserve-view=true#az-dataprotection-backup-vault-create) command, and then run it. 
++- `[--cmk-encryption-key-uri]` +- `[--cmk-encryption-state {Disabled, Enabled, Inconsistent}]` +- `[--cmk-identity-type {SystemAssigned, UserAssigned}]` +- `[--cmk-infra-encryption {Disabled, Enabled}]` +- `[--cmk-uami]` ++```azurecli-interactive +az dataprotection backup-vault create --resource-group + --storage-setting + --vault-name + [--azure-monitor-alerts-for-job-failures {Disabled, Enabled}] + [--cmk-encryption-key-uri] + [--cmk-encryption-state {Disabled, Enabled, Inconsistent}] + [--cmk-identity-type {SystemAssigned, UserAssigned}] + [--cmk-infra-encryption {Disabled, Enabled}] + [--cmk-uami] + [--cross-region-restore-state {Disabled, Enabled}] + [--cross-subscription-restore-state {Disabled, Enabled, PermanentlyDisabled}] + [--e-tag] + [--immutability-state {Disabled, Locked, Unlocked}] + [--location] + [--no-wait {0, 1, f, false, n, no, t, true, y, yes}] + [--retention-duration-in-days] + [--soft-delete-state {AlwaysOn, Off, On}] + [--tags] + [--type] + [--uami] +``` +++ -6. Add tags (optional) and continue creating the vault. ## Update the Backup vault properties to encrypt by using customer-managed keys ++You can modify the **Encryption Settings** of a Backup vault in the following scenarios: ++- Enable Customer Managed Key for an already existing vault. For Backup vaults, you can enable CMK before or after protecting items to the vault. +- Update details in the Encryption Settings, such as the managed identity or encryption key. ++Let's enable Customer Managed Key for an existing vault. + To configure a vault, perform the following actions in sequence: 1. Enable a managed identity for your Backup vault. For security reasons, you can't update both a Key Vault key URI and a managed id #### Enable a system-assigned managed identity for the vault +**Choose a client**: ++# [Azure portal](#tab/azure-portal) + To enable a system-assigned managed identity for your Backup vault, follow these steps: 1. Go to *your Backup vault* > **Identity**. To enable a system-assigned managed identity for your Backup vault, follow these The preceding steps generate an object ID, which is the system-assigned managed identity of the vault. -#### Assign a user-assigned managed identity to the vault (in preview) ++# [PowerShell](#tab/powershell) ++To enable a system-assigned managed identity for the Backup vault, use the [Update-AzDataProtectionBackupVault](/powershell/module/az.dataprotection/update-azdataprotectionbackupvault?view=azps-11.6.0&preserve-view=true) command. ++```azurepowershell-interactive +Update-AzDataProtectionBackupVault -SubscriptionId "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -ResourceGroupName "resourceGroupName" -VaultName "vaultName" -IdentityType “SystemAssigned” +``` ++# [CLI](#tab/cli) ++To enable a system-assigned managed identity for the Backup vault, use the [az dataprotection backup-vault update](/cli/azure/dataprotection/backup-vault?view=azure-cli-latest&preserve-view=true#az-dataprotection-backup-vault-update) command. +++++#### Assign a user-assigned managed identity to the vault To assign a user-assigned managed identity for your Backup vault, follow these steps: To assign a user-assigned managed identity for your Backup vault, follow these s :::image type="content" source="./media/encryption-at-rest-with-cmk-for-backup-vault/assign-user-assigned-managed-identity-to-vault.png" alt-text="Screenshot that shows selections for assigning a user-assigned managed identity to a vault." 
lightbox="./media/encryption-at-rest-with-cmk-for-backup-vault/assign-user-assigned-managed-identity-to-vault.png"::: > [!NOTE]-> Vaults that use user-assigned managed identities for CMK encryption don't support the use of private endpoints for Backup. -> > Key vaults that limit access to specific networks aren't yet supported for use with user-assigned managed identities for CMK encryption. ### Assign permissions to the Backup vault to access the encryption key in Azure Key Vault -You now need to permit the Backup vault's managed identity to access the key vault that contains the encryption key. +**Choose a client**: ++# [Azure portal](#tab/azure-portal) ++You need to permit the Backup vault's managed identity to access the key vault that contains the encryption key. #### Scenario: Key Vault has access control (IAM) configuration enabled If you're using a user-assigned identity, you must assign the same permissions t You can also assign an RBAC role to the Backup vault that contains the previously mentioned permissions, such as the [Key Vault Crypto Officer](../key-vault/general/rbac-guide.md#azure-built-in-roles-for-key-vault-data-plane-operations) role. This role might contain additional permissions. +# [PowerShell](#tab/powershell) ++To assign the permissions to the Backup vault, run the following commands: ++1. Fetch the principal ID of the Backup vault by using the [Get-AzADServicePrincipal](/powershell/module/az.resources/get-azadserviceprincipal?view=azps-12.0.0&preserve-view=true) command. +2. Set an access policy for the key vault by using this ID in the [Set-AzKeyVaultAccessPolicy](/powershell/module/az.keyvault/set-azkeyvaultaccesspolicy?view=azps-12.0.0&preserve-view=true) command. ++**Example**: ++```azurepowershell-interactive +$sp = Get-AzADServicePrincipal -DisplayName MyVault +$Set-AzKeyVaultAccessPolicy -VaultName myKeyVault -ObjectId $sp.Id -PermissionsToKeys get,list,unwrapkey,wrapkey +``` +++# [CLI](#tab/cli) ++To assign the permissions to the Backup vault, run the following commands: ++1. Fetch the principal ID of the Backup vault by using the [az ad sp list](/cli/azure/ad/sp?view=azure-cli-latest&preserve-view=true#az-ad-sp-list) command. +2. Set an access policy for the key vault by using this ID in the [az keyvault set-policy](/cli/azure/keyvault?view=azure-cli-latest&preserve-view=true#az-keyvault-set-policy) command. ++**Example**: ++```azurecli-interactive +az ad sp list --display-name MyVault +az keyvault set-policy --name myKeyVault --object-id <object-id> --key-permissions get,list,unwrapkey,wrapkey +``` ++++++ ### Enable soft delete and purge protection on Azure Key Vault You need to enable soft delete and purge protection on the key vault that stores your encryption key. +**Choose a client**: ++# [Azure portal](#tab/azure-portal) + You can set these properties from the Azure Key Vault interface, as shown in the following screenshot. Alternatively, you can set these properties while creating the key vault. [Learn more about these Key Vault properties](../key-vault/general/soft-delete-overview.md). :::image type="content" source="./media/encryption-at-rest-with-cmk-for-backup-vault/soft-delete-purge-protection.png" alt-text="Screenshot of options for enabling soft delete and purge protection." lightbox="./media/encryption-at-rest-with-cmk-for-backup-vault/soft-delete-purge-protection.png"::: ++# [PowerShell](#tab/powershell) ++To enable soft delete on the vault, run the following commands: ++1. Sign in to your Azure account. 
++ ```azurepowershell-interactive + Login-AzAccount + ``` ++2. Select the subscription that contains your vault. ++ ```azurepowershell-interactive + Set-AzContext -SubscriptionId SubscriptionId + ``` ++3. Enable soft delete. ++ ```azurepowershell-interactive + ($resource = Get-AzResource -ResourceId (Get-AzKeyVault -VaultName "AzureKeyVaultName").ResourceId).Properties | Add-Member -MemberType "NoteProperty" -Name "enableSoftDelete" -Value "true" + Set-AzResource -resourceid $resource.ResourceId -Properties $resource.Properties + ``` ++4. Enable purge protection. ++ ```azurepowershell-interactive + ($resource = Get-AzResource -ResourceId (Get-AzKeyVault -VaultName "AzureKeyVaultName").ResourceId).Properties | Add-Member -MemberType "NoteProperty" -Name "enablePurgeProtection" -Value "true" + Set-AzResource -resourceid $resource.ResourceId -Properties $resource.Properties + ``` +++# [CLI](#tab/cli) ++To enable soft delete on the vault, run the following commands: ++1. Sign in to your Azure account. ++ ```azurecli-interactive + az login + ``` ++2. Select the subscription that contains your vault. ++ ```azurecli-interactive + az account set --subscription "Subscription1" + ``` ++3. Enable soft delete. ++ ```azurecli-interactive + az keyvault update --subscription {SUBSCRIPTION ID} -g {RESOURCE GROUP} -n {VAULT NAME} --enable-soft-delete true + ``` ++4. Enable purge protection. ++ ```azurecli-interactive + az keyvault update --subscription {SUBSCRIPTION ID} -g {RESOURCE GROUP} -n {VAULT NAME} --enable-purge-protection true + ``` ++++ ### Assign the encryption key to the Backup vault Before you select the encryption key for your vault, ensure that you successfully: Before you select the encryption key for your vault, ensure that you successfull - Enabled the Backup vault's managed identity and assigned the required permissions to it. - Enabled soft delete and purge protection for the key vault. +>[!Note] +>If there're any updates to the current Key Vault details in the **Encryption Settings** with new key vault information, the managed identity used for **Encryption Settings** must retain access to the original Key Vault, with *Get* and *Unwrap* permissions, and the key should be in *Enabled* state. This access is necessary to execute the *key rotation* from the *previous* to the *new* key. + To assign the key, follow these steps: 1. Go to *your Backup vault* > **Properties**. :::image type="content" source="./media/encryption-at-rest-with-cmk/encryption-settings.png" alt-text="Screenshot that shows properties for a Backup vault." lightbox="./media/encryption-at-rest-with-cmk/encryption-settings.png"::: -2. For **Encryption Settings (Preview)**, select **Update**. +2. For **Encryption Settings**, select **Update**. :::image type="content" source="./media/encryption-at-rest-with-cmk-for-backup-vault/update-encryption-settings.png" alt-text="Screenshot that shows the link for updating encryption settings." lightbox="./media/encryption-at-rest-with-cmk-for-backup-vault/update-encryption-settings.png"::: The process to configure and perform backups to a Backup vault that's encrypted ## Troubleshoot operation errors for encryption settings +This section lists the various troubleshooting scenarios that you might encounter for Backup vault encryption. 
++### Backup, restore, and background operations failures +++**Causes**: ++- **Cause 1**: If there's an issue with your **Backup vault Encryption Settings**, for example, you removed Key Vault permissions from the Encryption Settings' managed identity, disabled the system-assigned identity, or detached/deleted the managed identity from the Backup vault that you're using for encryption settings, then *backup* and *restore* jobs fail. ++- **Cause 2**: Tiering of restore points and restore-point expiration jobs will fail without showing errors in the Azure portal or other interfaces (for example, REST API or CLI). These operations will continue to fail and incur costs. ++**Recommended actions**: ++- **Recommendation 1**: Restore the permissions and update the managed identity details that have access to the key vault. ++- **Recommendation 2**: Restore the required encryption settings to the Backup vault. ++ ### Missing permissions for a managed identity **Error code**: `UserErrorCMKMissingMangedIdentityPermissionOnKeyVault` The process to configure and perform backups to a Backup vault that's encrypted - If your key vault is using an RBAC configuration that's based on IAM, you need Key Vault Crypto Service Encryption User built-in role permissions. - If you use access policies, you need **Get**, **Wrap** and **Unwrap** permissions. +- The Key Vault and key don't exist, and aren't reachable to the Azure Backup service via network settings. + **Recommended action**: Check the Key Vault access policies and grant permissions accordingly. -## Next steps +++## Validate error codes ++Azure Backup validates the selected *Azure Key Vault* when CMK is applied on the backup vault. If the Key Vault doesn't have the required configuration settings (**Soft Delete Enabled** and **Purge Protection Enabled**), the following error codes appear: ++### UserErrorCMKPurgeProtectionNotEnabledOnKeyVault ++**Error code**: `UserErrorCMKPurgeProtectionNotEnabledOnKeyVault` ++**Cause**: Purge protection isn't enabled on the Key Vault. ++**Recommended action**: Enable purge protection on the Key Vault, and then retry the operation. ++### UserErrorCMKSoftDeleteNotEnabledOnKeyVault ++**Error code**: `UserErrorCMKSoftDeleteNotEnabledOnKeyVault` ++**Cause**: Soft delete isn't enabled on the Key Vault. ++**Recommended action**: Enable soft delete on the Key Vault, and then retry the operation. ++## Next step [Overview of security features in Azure Backup](security-overview.md). |
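For the RBAC (IAM) scenario mentioned in this entry, where the Backup vault's identity needs the Key Vault Crypto Service Encryption User role rather than an access policy, a hedged Azure PowerShell sketch with placeholder names (using the same service principal lookup as the PowerShell tab):

```azurepowershell-interactive
# Hedged sketch: grant the Backup vault's managed identity the built-in
# "Key Vault Crypto Service Encryption User" role on an RBAC-enabled key vault.
# Placeholder names -- replace with your own Backup vault and key vault.
$sp         = Get-AzADServicePrincipal -DisplayName "<backup-vault-name>"
$keyVaultId = (Get-AzKeyVault -VaultName "<key-vault-name>").ResourceId

New-AzRoleAssignment -ObjectId $sp.Id `
    -RoleDefinitionName "Key Vault Crypto Service Encryption User" `
    -Scope $keyVaultId
```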
communication-services | Capabilities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/capabilities.md | This article describes which capabilities Azure Communication Services SDKs supp | Group of features | Capability | Supported | | -- | - | - | -| Core Capabilities | Join Teams meeting | ✔️ | +| Core Capabilities | Join Teams meeting via URL | ✔️ | +| | Join Teams meeting via meeting ID & passcode | ✔️ | +| | Join [end-to-end encrypted Teams meeting](/microsoftteams/teams-end-to-end-encryption) | ❌ | +| | Join channel Teams meeting | ✔️ [1]| +| | Join Teams [Webinar](/microsoftteams/plan-webinars) | ❌ | +| | Join Teams [Town halls](/microsoftteams/plan-town-halls) | ❌ | +| | Join Teams [live events](/microsoftteams/teams-live-events/what-are-teams-live-events). | ❌ | +| | Join Teams meeting scheduled in application for [personal use](https://www.microsoft.com/microsoft-teams/teams-for-home) | ❌ | | | Leave meeting | ✔️ | | | End meeting for everyone | ✔️ |-| | Change meeting options | ❌ | -| | Lock & unlock meeting | ❌ | -| | Prevent joining locked meeting | ✔️ | -| | Honor assigned Teams meeting role | ✔️ | +| | Change meeting options | ❌[6] | +| | Lock & unlock meeting | ❌[6] | +| | Prevent joining locked meeting | ✔️ | +| | Honor assigned Teams meeting role | ✔️ | | Chat | Send and receive chat messages | ✔️ | | | [Receive inline images](../../../tutorials/chat-interop/meeting-interop-features-inline-image.md) | ✔️ | | | Send inline images | ❌ | | | [Receive file attachments](../../../tutorials/chat-interop/meeting-interop-features-file-attachment.md) | ✔️ |-| | Send file attachments | ❌ | +| | Send file attachments | ❌[6] | | | Receive Giphy | ✔️ | | | Send messages with high priority | ❌ | | | Receive messages with high priority | ✔️ | This article describes which capabilities Azure Communication Services SDKs supp | | Render response to chat message | ✔️ | | | Reply to specific chat message | ❌ | | | React to chat message | ❌ |-| | [Data Loss Prevention (DLP)](/microsoft-365/compliance/dlp-microsoft-teams) | ✔️*| +| | [Data Loss Prevention (DLP)](/microsoft-365/compliance/dlp-microsoft-teams) | ✔️ [2]| | | [Customer Managed Keys (CMK)](/microsoft-365/compliance/customer-key-overview) | ✔️ | | Mid call control | Turn your video on/off | ✔️ | | | Mute/Unmute mic | ✔️ | This article describes which capabilities Azure Communication Services SDKs supp | | Receive your screen sharing stream | ❌ | | | Share content in "content-only" mode | ✔️ | | | Receive video stream with content for "content-only" screen sharing experience | ✔️ |-| | Share content in "standout" mode | ❌ | -| | Receive video stream with content for a "standout" screen sharing experience | ❌ | -| | Share content in "side-by-side" mode | ❌ | +| | Share content in "standout" mode | ❌[6] | +| | Receive video stream with content for a "standout" screen sharing experience | ❌ | +| | Share content in "side-by-side" mode | ❌[6] | | | Receive video stream with content for "side-by-side" screen sharing experience | ❌ |-| | Share content in "reporter" mode | ❌ | +| | Share content in "reporter" mode | ❌[6] | | | Receive video stream with content for "reporter" screen sharing experience | ❌ |+| | [Give or request control over screen sharing](/microsoftteams/meeting-who-present-request-control) | ❌ | | Roster | List participants | ✔️ |-| | Add an Azure Communication Services user | ❌ | +| | Add an Azure Communication Services user | ❌ | | | Add a Teams user | ✔️ | | | 
Adding Teams user honors Teams external access configuration | ✔️ | | | Adding Teams user honors Teams guest access configuration | ✔️ | | | Add a phone number | ✔️ | | | Remove a participant | ✔️ |-| | Manage breakout rooms | ❌ | +| | Manage breakout rooms | ❌[6] | | | Participation in breakout rooms | ❌ | | | Admit participants in the lobby into the Teams meeting | ✔️ | | | Be admitted from the lobby into the Teams meeting | ✔️ | | | Promote participant to a presenter or attendee | ❌ | | | Be promoted to presenter or attendee | ✔️ | | | Disable or enable mic for attendees | ❌ |-| | Honor disabling or enabling a mic as an attendee | ✔️ | +| | Honor disabling or enabling a mic as an attendee | ✔️ | | | Disable or enable camera for attendees | ❌ | | | Honor disabling or enabling a camera as an attendee | ✔️ | | | Adding Teams user honors information barriers | ✔️ |+| | Announce when phone callers join or leave | ❌ | +| Teams Copilot | User can access Teams Copilot | ❌[6] | +| | User's transcript is captured when Copilot is enabled | ✔️ | | Device Management | Ask for permission to use audio and/or video | ✔️ | | | Get camera list | ✔️ | | | Set camera | ✔️ | This article describes which capabilities Azure Communication Services SDKs supp | | Receive adjusted stream for "content from Camera" | ❌ | | | Add and remove video stream from spotlight | ✔️ | | | Allow video stream to be selected for spotlight | ✔️ |-| | Apply Teams background effects | ❌ | -| Recording & transcription | Manage Teams cloud recording | ❌ | +| | Apply background blur | ✔️[3] | +| | Apply background replacement | ✔️[3] | +| | Receive Teams default images for background replacement | ❌[6]| +| | Receive [Teams Premium custom images for background replacement](/microsoftteams/custom-meeting-backgrounds) | ❌[6] | +| | Apply [Watermark](/microsoftteams/watermark-meeting-content-video) over received video and screen sharing | ❌ | +| Recording & transcription | Manage Teams cloud recording | ❌[6] | | | Receive information of call being cloud recorded | ✔️ |-| | Manage Teams local recording | ❌ | -| | Receive information of call being locally recorded | ✔️ | -| | Manage Teams transcription | ❌ | +| | Manage Teams transcription | ❌[6] | | | Receive information of call being transcribed | ✔️ | | | Manage Teams closed captions | ✔️ | | | Support for compliance recording | ✔️ | This article describes which capabilities Azure Communication Services SDKs supp | | Trigger reactions | ✔️ | | | Indicate other participants' reactions | ✔️ | | Integrations | Control Teams third-party applications | ❌ |-| | Receive PowerPoint Live stream | ✔️ | -| | Receive Whiteboard stream | ❌ | +| | Receive [PowerPoint Live stream](https://support.microsoft.com/office/present-from-powerpoint-live-in-microsoft-teams-28b20e74-7165-499c-9bd4-0ad975d448ad) | ✔️ | +| | Receive [Excel Live stream](https://support.microsoft.com/office/excel-live-in-microsoft-teams-meetings-a5790e42-7f75-4859-8674-cc3d07c86ede) | ❌[6] | +| | Receive [Whiteboard stream](https://support.microsoft.com/office/whiteboard-in-microsoft-teams-d69a2709-cb9a-4b3d-b878-67b9bbf4e7bf) | ❌[6] | +| | Receive [collaborative annotations](https://support.microsoft.com/office/use-annotation-while-sharing-your-screen-in-microsoft-teams-876ba527-7112-437e-b410-5aec7363c473) | ❌[6] | | | Interact with a poll | ❌ | | | Interact with a Q&A | ❌ |-| | Interact with a OneNote | ❌ | -| | Manage SpeakerCoach | ❌ | +| | Interact with a Meeting notes | ❌[6] | +| | Manage SpeakerCoach | ❌[6] | | | [Include 
participant in Teams meeting attendance report](https://support.microsoft.com/office/view-and-download-meeting-attendance-reports-in-teams-ae7cf170-530c-47d3-84c1-3aedac74d310) | ✔️ |-| Accessibility | Receive Teams closed captions | ✔️ | +| | Support [Teams eCDN](/microsoftteams/streaming-ecdn-enterprise-content-delivery-network) | ❌ | +| | Receive [Teams meeting theme details](/microsoftteams/meeting-themes) | ❌ | +| Accessibility | Receive [Teams closed captions](https://support.microsoft.com/office/use-live-captions-in-microsoft-teams-meetings-4be2d304-f675-4b57-8347-cbd000a21260) | ✔️ | +| | Change spoken language of [Teams closed captions](https://support.microsoft.com/office/use-live-captions-in-microsoft-teams-meetings-4be2d304-f675-4b57-8347-cbd000a21260) | ✔️ | | | Communication access real-time translation (CART) | ❌ |-| | Language interpretation | ❌ | +| Larger meetings | Support [Teams green room](https://support.microsoft.com/office/green-room-for-teams-meetings-5b744652-789f-42da-ad56-78a68e8460d5) | ✔️[4] | +| | Support "[Hide attendee names](/microsoftteams/hide-attendee-names)" meeting option | ❌[5] | +| | Support "[Manage what attendee see](https://support.microsoft.com/en-us/office/manage-what-attendees-see-in-teams-meetings-19bfd690-8122-49f4-bc04-c2c5f69b4e16) | ❌ | +| | Support [RTMP-in](https://support.microsoft.com/office/use-rtmp-in-in-microsoft-teams-789d6090-8511-4e2e-add6-52a9f551be7f) | ❌ | +| | Support [RTMP-out](https://support.microsoft.com/office/broadcast-audio-and-video-from-teams-with-rtmp-11d5707b-88bf-411c-aff1-f8d85cab58a0) | ✔️ | +| Translation | Receive [Teams Premium translated closed captions](https://support.microsoft.com/office/use-live-captions-in-microsoft-teams-meetings-4be2d304-f675-4b57-8347-cbd000a21260) | ✔️ | +| | Change spoken and caption's language for [Teams Premium closed captions](https://support.microsoft.com/office/use-live-captions-in-microsoft-teams-meetings-4be2d304-f675-4b57-8347-cbd000a21260) | ✔️ | +| | [Language interpretation](https://support.microsoft.com/office/use-language-interpretation-in-microsoft-teams-meetings-b9fdde0f-1896-48ba-8540-efc99f5f4b2e) | ❌ | | Advanced call routing | Does meeting dial-out honor forwarding rules | ✔️ | | | Read and configure call forwarding rules | ❌ | | | Does meeting dial-out honor simultaneous ringing | ✔️ | This article describes which capabilities Azure Communication Services SDKs supp | | [Teams Call Analytics](/MicrosoftTeams/use-call-analytics-to-troubleshoot-poor-call-quality) | ✔️ | | | [Teams real-time Analytics](/microsoftteams/use-real-time-telemetry-to-troubleshoot-poor-meeting-quality) | ❌ | -When Teams external users leave the meeting, or the meeting ends, they can no longer exchange new chat messages nor access messages sent and received during the meeting. -\* Azure Communication Services provides developer tools to integrate Microsoft Teams Data Loss Prevention compatible with Microsoft Teams. For more information, see [how to implement Data Loss Prevention (DLP)](../../../how-tos/chat-sdk/data-loss-prevention.md). +> [!Note] +> When Teams external users leave the meeting, or the meeting ends, they can no longer exchange new chat messages nor access messages sent and received during the meeting. ++1. Azure Communication Services users can join a channel Teams meeting with audio and video, but they won't be able to send or receive any chat messages. +2. 
Azure Communication Services provides developer tools to integrate Microsoft Teams Data Loss Prevention compatible with Microsoft Teams. For more information, see [how to implement Data Loss Prevention (DLP)](../../../how-tos/chat-sdk/data-loss-prevention.md). +3. This feature is not available in mobile browsers. +4. The Azure Communication Services calling SDK doesn't receive a signal that the user is admitted and waiting for the meeting to start. The UI library doesn't support chat while waiting for the meeting to start. +5. The Azure Communication Services chat SDK shows the real identity of attendees. +6. Functionality is not available for users who are not part of the organization. ## Server capabilities |
container-apps | Azure Pipelines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-pipelines.md | Take the following steps to configure an Azure DevOps pipeline to deploy to Azur | Requirement | Instructions | |--|--| | Azure account | If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). You need the *Contributor* or *Owner* permission on the Azure subscription to proceed. Refer to [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.yml?tabs=current) for details. |-| Azure Devops project | Go to [Azure DevOps](https://azure.microsoft.com/services/devops/) and select *Start free*. Then create a new project. | +| Azure DevOps project | Go to [Azure DevOps](https://azure.microsoft.com/services/devops/) and select *Start free*. Then create a new project. | | Azure CLI | Install the [Azure CLI](/cli/azure/install-azure-cli).| ### Create an Azure DevOps repository and clone the source code |
cosmos-db | Access Key Vault Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/access-key-vault-managed-identity.md | Last updated 06/01/2022 + # Access Azure Key Vault from Azure Cosmos DB using a managed identity [!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin, Table](includes/appliesto-nosql-mongodb-cassandra-gremlin-table.md)] Azure Cosmos DB may need to read secret/key data from Azure Key Vault. For example, your Azure Cosmos DB may require a customer-managed key stored in Azure Key Vault. To do this, Azure Cosmos DB should be configured with a managed identity, and then an Azure Key Vault access policy should grant the managed identity access. + ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). |
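The flow this entry describes, a managed identity on the Azure Cosmos DB account plus a Key Vault access policy for that identity, might look like the following hedged Azure PowerShell sketch; the principal ID and key vault name are placeholders:

```azurepowershell-interactive
# Hedged sketch with placeholder values: grant the Cosmos DB account's managed
# identity access to keys in the key vault via an access policy.
$principalId = "<principal-id>"   # object ID of the account's system-assigned identity

Set-AzKeyVaultAccessPolicy -VaultName "<key-vault-name>" `
    -ObjectId $principalId `
    -PermissionsToKeys get, unwrapKey, wrapKey
```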
cosmos-db | Hierarchical Partition Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/hierarchical-partition-keys.md | -Azure Cosmos DB distributes your data across logical and physical partitions based on your partition keys to support horizontal scaling. By using hierarchical partition keys (also called *subpartitoning*), you can configure up to a three-level hierarchy for your partition keys to further optimize data distribution and for a higher level of scaling. +Azure Cosmos DB distributes your data across logical and physical partitions based on your partition keys to support horizontal scaling. By using hierarchical partition keys (also called *subpartitioning*), you can configure up to a three-level hierarchy for your partition keys to further optimize data distribution and for a higher level of scaling. -If you use synthetic keys today or if you have scenarios in which partition keys can exceed 20 GB of data, subpartitioning can help. If you use this feature, logical partition key prefixes can exceed 20 GB and 10,000 request units per second (RU/s). Queries by prefix are efficiently routed to the subset of partitions that hold the data. +If you use synthetic keys today, have scenarios in which partition keys can exceed 20 GB of data, or would like to ensure that each tenant's document maps to its own logical partition, subpartitioning can help. If you use this feature, logical partition key prefixes can exceed 20 GB and 10,000 request units per second (RU/s). Queries by prefix are efficiently routed to the subset of partitions that hold the data. -## Choose your hierarchical partition keys +## Choosing your hierarchical partition keys -If you have multitenant applications, we recommend that you use hierarchical partition keys. Hierarchical partitions allow you to scale beyond the logical partition key limit of 20 GB. If your current partition key or if a single partition key is frequently reaching 20 GB, hierarchical partitions are a great choice for your workload. +If you have multitenant applications and currently isolate tenants by partition key, hierarchical partitions might benefit you. Hierarchical partitions allow you to scale beyond the logical partition key limit of 20 GB, and are a good solution if you'd like to ensure each of your tenants' documents can scale infinitely. If your current partition key or if a single partition key is frequently reaching 20 GB, hierarchical partitions are a great choice for your workload. -When you choose your hierarchical partition keys, it's important to keep the following general partitioning concepts in mind: +However, depending on the nature of your workload and the cardinality of your first-level key, there can be some tradeoffs, which we cover in depth in our hierarchical partition scenarios page. -- For *all* containers, *each level* of the full path (starting with the *first level*) of your hierarchical partition key should:+When you choose each level of your hierarchical partition key, it's important to keep the following general partitioning concepts in mind and understand how each one can affect your workload: - - **Have a high cardinality**. The first, second, and third (if applicable) keys of the hierarchical partition should all have a wide range of possible values. - - **Spread request unit (RU) consumption and data storage evenly across all logical partitions**. This spread ensures even RU consumption and storage distribution across your physical partitions.
+- For **all** containers, **each level** of the full path (starting with the **first level**) of your hierarchical partition key should: -- For *large, read-heavy workloads*, we recommend that you choose hierarchical partition keys that appear frequently in your queries. For example, a workload that frequently runs queries to filter out specific user sessions in a multitenant application can benefit from hierarchical partition keys of `TenantId`, `UserId`, and `SessionId`, in that order. Queries can be efficiently routed to only the relevant physical partitions by including the partition key in the filter predicate. For more information about choosing partition keys for read-heavy workloads, see the [partitioning overview](partitioning-overview.md).+ - **Have a high cardinality**. The first, second, and third (if applicable) keys of the hierarchical partition should all have a wide range of possible values. + + - Having low cardinality at the first level of the hierarchical partition key will limit all of your write operations at the time of ingestion to just one physical partition until it reaches 50 GB and splits into two physical partitions. For example, suppose your first-level key is `TenantId` and you have only 5 unique tenants. Each of these tenants' operations will be scoped to just one physical partition, limiting your throughput consumption to just what is on that one physical partition. This is because hierarchical partitions optimize for all documents with the same first-level key to be collocated on the same physical partition to avoid full-fanout queries. + - While this may be okay for workloads where we do a one-time ingest of all our tenants' data and subsequent operations are primarily read-heavy, it can be less than ideal for workloads that must ingest data within a specific time window. For example, if you have strict business requirements to avoid latencies, the maximum ingestion throughput your workload can theoretically achieve is the number of physical partitions * 10,000 RU/s. If your top-level key has low cardinality, your number of physical partitions will likely be 1, unless there is sufficient data under the first-level key for it to be spread across multiple partitions after splits, which can take 4-6 hours to complete. + + - **Spread request unit (RU) consumption and data storage evenly across all logical partitions**. This spread ensures even RU consumption and storage distribution across your physical partitions. + + - If you choose a first-level key that seems to have high cardinality, like `UserId`, but in practice your workload performs operations on just one specific `UserId`, then you are likely to run into a hot partition as all of your operations will be scoped to just one or a few physical partitions. + +- **Read-heavy workloads:** We recommend that you choose hierarchical partition keys that appear frequently in your queries. ++ - For example, a workload that frequently runs queries to filter out specific user sessions in a multitenant application can benefit from hierarchical partition keys of `TenantId`, `UserId`, and `SessionId`, in that order. Queries can be efficiently routed to only the relevant physical partitions by including the partition key in the filter predicate. For more information about choosing partition keys for read-heavy workloads, see the [partitioning overview](partitioning-overview.md). 
+ +- **Write-heavy workloads:** We recommend using a high-cardinality value for the **first level** of your hierarchical partition key. High cardinality means that the first-level key (and subsequent levels as well) has at least thousands of unique values and more unique values than the number of your physical partitions. ++ - For example, suppose we have a workload that isolates tenants by partition key, and has a few large tenants that are more write-heavy than others. Today, Azure Cosmos DB will stop ingesting data on any partition key value if it exceeds 20 GB of data. In this workload, Microsoft and Contoso are large tenants and we anticipate them growing much faster than our other tenants. To avoid the risk of not being able to ingest data for these tenants, hierarchical partition keys allow us to scale these tenants beyond the 20 GB limit. We can add more levels like UserId and SessionId to ensure higher scalability across tenants. + - To ensure that your workload can accommodate writes for all documents with the same first-level key, consider using item ID as a second or third level key. + + - If your first level does not have high cardinality and you are hitting the 20 GB logical partition limit on your partition key today, we suggest using a synthetic partition key instead of a hierarchical partition key. + ## Example use case Suppose you have a multitenant scenario in which you store event information for users in each tenant. The event information might include event occurrences such as sign-in, clickstream, or payment events. In a real-world scenario, some tenants can grow large, with thousands of users. Using a synthetic partition key that combines `TenantId` and `UserId` adds complexity to the application. Additionally, the synthetic partition key queries for a tenant are still cross-partition, unless all users are known and specified in advance. -With hierarchical partition keys, you can partition first on `TenantId`, and then on `UserId`. If you expect the `TenantId` and `UserId` combination to produce partitions that exceed 20 GB, you can even partition further down to another level, such as on `SessionId`. The overall depth can't exceed three levels. When a physical partition exceeds 50 GB of storage, Azure Cosmos DB automatically splits the physical partition so that roughly half of the data is on one physical partition, and half is on the other. Effectively, subpartitioning means that a single `TenantId` value can exceed 20 GB of data, and it's possible for `TenantId` data to span multiple physical partitions. +If your workload has tenants with roughly the same workload patterns, hierarchical partition keys can help. With hierarchical partition keys, you can partition first on `TenantId`, and then on `UserId`. If you expect the `TenantId` and `UserId` combination to produce partitions that exceed 20 GB, you can even partition further down to another level, such as on `SessionId`. The overall depth can't exceed three levels. When a physical partition exceeds 50 GB of storage, Azure Cosmos DB automatically splits the physical partition so that roughly half of the data is on one physical partition, and half is on the other. Effectively, subpartitioning means that a single `TenantId` value can exceed 20 GB of data, and it's possible for `TenantId` data to span multiple physical partitions. Queries that specify either `TenantId`, or both `TenantId` and `UserId`, are efficiently routed to only the subset of physical partitions that contain the relevant data. 
Specifying the full or prefix subpartitioned partition key path effectively avoids a full fan-out query. For example, if the container had 1,000 physical partitions, but a specific `TenantId` value was only on 5 physical partitions, the query would be routed to the smaller number of relevant physical partitions. |
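To make the `TenantId`/`UserId`/`SessionId` example above concrete, here's a minimal sketch of creating a subpartitioned container with the JavaScript SDK. It assumes a recent version of `@azure/cosmos` that supports `MultiHash` partition keys; the endpoint, key, and database/container names are placeholders.

```javascript
// Minimal sketch: create a container with a three-level hierarchical partition key.
// Endpoint, key, and database/container names are illustrative placeholders.
const { CosmosClient, PartitionKeyKind, PartitionKeyDefinitionVersion } = require("@azure/cosmos");

async function createSubpartitionedContainer() {
  const client = new CosmosClient({ endpoint: "<account-endpoint>", key: "<account-key>" });
  const { database } = await client.databases.createIfNotExists({ id: "events-db" });

  // Hierarchical (subpartitioned) key: TenantId -> UserId -> SessionId.
  const { container } = await database.containers.createIfNotExists({
    id: "events",
    partitionKey: {
      paths: ["/TenantId", "/UserId", "/SessionId"],
      kind: PartitionKeyKind.MultiHash,
      version: PartitionKeyDefinitionVersion.V2,
    },
  });

  return container;
}
```

Queries that filter on `TenantId`, or on `TenantId` and `UserId`, are then routed to only the physical partitions that hold the matching prefix, as described above.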
cosmos-db | Compatibility | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/compatibility.md | Below are the list of operators currently supported on Azure Cosmos DB for Mongo <tr><td><code>$isArray</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr> <tr><td><code>$lastN</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr> <tr><td><code>$map</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr>-<tr><td><code>$maxN</code></td><td></td><td><img src="media/compatibility/no-icon.svg" alt="No"></td><td><img src="media/compatibility/no-icon.svg" alt="No"></td></tr> -<tr><td><code>$minN</code></td><td></td><td><img src="media/compatibility/no-icon.svg" alt="No"></td><td><img src="media/compatibility/no-icon.svg" alt="No"></td></tr> +<tr><td><code>$maxN</code></td><td></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr> +<tr><td><code>$minN</code></td><td></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr> <tr><td><code>$objectToArray</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr> <tr><td><code>$range</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr> <tr><td><code>$reduce</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr> <tr><td><code>$reverseArray</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr> <tr><td><code>$size</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr> <tr><td><code>$slice</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr>-<tr><td><code>$sortArray</code></td><td></td><td><img src="media/compatibility/no-icon.svg" alt="No"></td><td><img src="media/compatibility/no-icon.svg" alt="No"></td></tr> -<tr><td><code>$zip</code></td><td><img src="media/compatibility/no-icon.svg" alt="No"></td><td><img src="media/compatibility/no-icon.svg" alt="No"></td><td><img src="media/compatibility/no-icon.svg" alt="No"></td></tr> +<tr><td><code>$sortArray</code></td><td></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr> 
+<tr><td><code>$zip</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr> <tr><td rowspan="4">Bitwise Operators</td><td><code>$bitAnd</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr> <tr><td><code>$bitNot</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr> |
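The operators flipped to supported in the table above (`$maxN`, `$minN`, `$sortArray`, `$zip`) can be exercised from the MongoDB Node.js driver against an Azure Cosmos DB for MongoDB vCore cluster. The following is a small sketch; the connection string, database, collection, and field names are illustrative assumptions.

```javascript
// Minimal sketch: use $maxN and $sortArray in an aggregation pipeline.
// Connection string, database, collection, and field names are placeholders.
const { MongoClient } = require("mongodb");

async function topExamScores(connectionString) {
  const client = new MongoClient(connectionString);
  await client.connect();
  const students = client.db("school").collection("students");

  const results = await students
    .aggregate([
      {
        $project: {
          name: 1,
          // $maxN returns the n largest values of the array.
          bestThree: { $maxN: { n: 3, input: "$examScores" } },
          // $sortArray sorts the array; -1 sorts scalar arrays in descending order.
          sortedScores: { $sortArray: { input: "$examScores", sortBy: -1 } },
        },
      },
    ])
    .toArray();

  await client.close();
  return results;
}
```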
cosmos-db | Compute Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/compute-storage.md | available to each shard in the cluster. \* Available in preview. ## Maximizing IOPS for your compute / storage configuration-Each compute configuration has an IOPS limit that depends on the number of vCores. Make sure you select compute configuration for your cluster to fully utilize IOPS in the selected storage. --| Compute tier | vCores | Max storage size | IOPS with the max recommended storage size, up to | -||-|| -| M30 | 2 vCores | 0.5 TiB | 3,500† | -| M40 | 4 vCores | 1 TiB | 5,000 | -| M50 | 8 vCores | 4 TiB | 7,500 | -| M60 | 16 vCores | 32 TiB | 20,000 | -| M80 | 32 vCores | 32 TiB | 20,000 | +Each *compute* configuration has an IOPS limit that depends on the number of vCores. Make sure you select a compute configuration for your cluster that can fully utilize the IOPS of the selected storage. ++| Storage size | Storage IOPS, up to | Min compute tier | Min vCores | +|-|||| +| Up to 0.5 TiB | 3,500† | M30 | 2 vCores | +| 1 TiB | 5,000 | M40 | 4 vCores | +| 2 TiB | 7,500 | M50 | 8 vCores | +| 4 TiB | 7,500 | M50 | 8 vCores | +| 8 TiB | 16,000 | M60 | 16 vCores | +| 16 TiB | 18,000 | M60 | 16 vCores | +| 32 TiB | 20,000 | M60 | 16 vCores | † Max IOPS with free disk bursting. Storage up to 512 GiB inclusive comes with free disk bursting enabled. -To put it another way, if you need 8 TiB of storage per shard or more, make sure you select 16 vCores or more for the node's compute configuration. That selection would allow you to maximize IOPS usage provided by the selected storage. +For instance, if you need 8 TiB of storage per shard or more, make sure you select 16 vCores or more for the node's compute configuration. That selection would allow you to maximize IOPS usage provided by the selected storage. ++## Considerations for compute and storage ++### Working set and memory considerations ++In Azure Cosmos DB for MongoDB vCore, the *working set* refers to the portion of your data that is frequently accessed and used by your applications. It includes both the data and the indexes that are regularly read or written to during the application's typical operations. The concept of a working set is important for performance optimization because MongoDB, like many databases, performs best when the working set fits in RAM. ++To define and understand your MongoDB database working set, consider the following components: ++1. **Frequently accessed data**: This data includes documents that your application reads or updates regularly. +1. **Indexes**: Indexes that are used in query operations also form part of the working set because they need to be loaded into memory to ensure fast access. +1. **Application usage patterns**: Analyzing the usage patterns of your application can help identify which parts of your data are accessed most frequently. ++By keeping the working set in RAM, you can minimize slower disk I/O operations, thereby improving the performance of your MongoDB database. If you find that your working set exceeds the available RAM, you might consider optimizing your data model, adding more RAM, or using sharding to distribute the data across multiple nodes. ++### Choosing optimal configuration for a workload ++Determining the right compute and storage configuration for your Azure Cosmos DB for MongoDB vCore workload involves evaluating several factors related to your application's requirements and usage patterns. 
The key steps and considerations to determine the optimal configuration include: ++1. **Understand your workload** + - **Data volume**: Estimate the total size of your data, including indexes. + - **Read/write ratio**: Determine the ratio of read operations to write operations. + - **Query patterns**: Analyze the types of queries your application performs. For instance, simple reads versus complex aggregations. + - **Concurrency**: Assess the number of concurrent operations your database needs to handle. ++2. **Monitor current performance** + - **Resource utilization**: Use monitoring tools to track CPU, memory, disk I/O, and network usage before you move your workload to Azure and [monitoring metrics](./how-to-monitor-diagnostics-logs.md) once you start running your MongoDB workload on an Azure Cosmos DB for MongoDB vCore cluster. + - **Performance metrics**: Monitor key performance metrics such as latency, throughput, and cache hit ratios. + - **Bottlenecks**: Identify any existing performance bottlenecks, such as high CPU usage, memory pressure, or slow disk I/O. ++3. **Estimate resource requirements** + - **Memory**: Ensure that your [working set](#working-set-and-memory-considerations) (frequently accessed data and indexes) fits into RAM. If your working set size exceeds available memory, consider adding more RAM or optimizing your data model. + - **CPU**: Choose a CPU configuration that can handle your query load and concurrency requirements. CPU-intensive workloads may require more cores. Use the 'CPU percent' metric with 'Max' aggregation on your Azure Cosmos DB for MongoDB vCore cluster to see historical compute usage patterns. + - **Storage IOPS**: Select storage with sufficient IOPS to handle your read and write operations. Use the 'IOPS' metric with 'Max' aggregation on your cluster to see historical storage IOPS usage. + - **Network**: Ensure adequate network bandwidth to handle data transfer between your application and the database, especially for distributed setups. Make sure you configure the host for your MongoDB application to support [accelerated networking](../../../virtual-network/accelerated-networking-overview.md) technologies such as SR-IOV. ++4. **Scale appropriately** + - **Vertical scaling**: Scale compute / RAM up and down and scale storage up. + - Compute: Increase the vCore / RAM on a cluster if your workload requires a temporary increase or often crosses 70% of CPU utilization for prolonged periods. + - Make sure you have appropriate data retention in your Azure Cosmos DB for MongoDB vCore database. Retention allows you to avoid unnecessary storage use. Monitor storage usage by setting alerts on the 'Storage percent' and/or 'Storage used' metrics with 'Max' aggregation. Consider increasing storage as your workload size crosses 70% usage. + - **Horizontal scaling**: Consider using multiple shards for your cluster to distribute your data across multiple Azure Cosmos DB for MongoDB vCore nodes for performance gains and better capacity management as your workload grows. This is especially useful for large datasets (over 2-4 TiB) and high-throughput applications. ++5. **Test and iterate** + - **Benchmarking**: Perform measurements for the most frequently used queries with different configurations to determine the impact on performance. Use CPU/RAM and IOPS metrics and application-level benchmarking. + - **Load testing**: Conduct load testing to simulate production workloads and validate the performance of your chosen configuration. 
+ - **Continuous monitoring**: Continuously monitor your Azure Cosmos DB for MongoDB vCore deployment and adjust resources as needed based on changing workloads and usage patterns. ++By systematically evaluating these factors and continuously monitoring and adjusting your configuration, you can ensure that your MongoDB deployment is well-optimized for your specific workload. ++### Considerations for storage ++Deciding on the appropriate storage size for your workload involves several considerations to ensure optimal performance and scalability. Here are considerations for the storage size in Azure Cosmos DB for MongoDB vCore: ++1. **Estimate data size:** + - Calculate the expected size of your Azure Cosmos DB for MongoDB vCore data. Consider: + - **Current data size:** If migrating from an existing database. + - **Growth rate:** Estimate how much data will be added over time. + - **Document size and structure:** Understand your data schema and document sizes, as they affect storage efficiency. ++2. **Factor in indexes:** + - Azure Cosmos DB for MongoDB vCore uses **[indexes](./indexing.md)** for efficient querying. Indexes consume extra disk space. + - Estimate the size of indexes based on: + - **Number of indexes**. + - **Size of indexed fields**. ++3. **Performance considerations:** + - Disk performance impacts database operations, especially for workloads that can't fit their [working set](#working-set-and-memory-considerations) into RAM. Consider: + - **I/O throughput:** IOPS, or Input/Output Operations Per Second, is the number of requests that are sent to storage disks in one second. The larger storage size comes with more IOPS. Ensure adequate throughput for read/write operations. Use 'IOPS' metric with 'Max' aggregation to monitor used IOPS on your cluster. + - **Latency:** Latency is the time it takes an application to receive a single request, send it to storage disks, and send the response to the client. Latency is a critical measure of an application's performance in addition to IOPS and throughput. Latency is largely defined by the type of storage used and storage configuration. In a managed service like Azure Cosmos DB for MongoDB, the fast storage such as Premium SSD disks is used with settings optimized to reduce latency. ++4. **Future growth and scalability:** + - Plan for future data growth and scalability needs. + - Allocate more disk space beyond current needs to accommodate growth without frequent storage expansions. ++5. **Example calculation**: + - Suppose your initial data size is 500 GiB. + - With indexes, it might grow to 700 GiB. + - If you anticipate doubling the data in two years, plan for 1.4 TiB (700 GiB * 2). + - Add a buffer for overhead, growth, and operational needs. + - You might want to start with 1 TiB storage today and upscale it to 2 TiB once its size grows over 800 GiB. ++Deciding on storage size involves a combination of estimating current and future data needs, considering indexing and compression, and ensuring adequate performance and scalability. Regular monitoring and adjustment based on actual usage and growth trends are also crucial to maintaining optimal MongoDB performance. ## Next steps - [See more information about burstable compute](./burstable-tier.md) - [Learn how to scale Azure Cosmos DB for MongoDB vCore cluster](./how-to-scale-cluster.md)+- [Check out indexing best practices](./how-to-create-indexes.md) > [!div class="nextstepaction"] > [Migration options for Azure Cosmos DB for MongoDB vCore](migration-options.md) |
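The storage-sizing walkthrough above boils down to simple arithmetic. Here's a small sketch of that estimate; the index-overhead, growth, and buffer ratios are illustrative assumptions for planning, not service defaults.

```javascript
// Minimal sketch of the storage-sizing estimate described above.
// All ratios are illustrative assumptions, not service defaults.
function estimateStorageTiB({ dataGiB, indexOverheadRatio, growthFactor, bufferRatio }) {
  const withIndexes = dataGiB * (1 + indexOverheadRatio); // e.g., 500 GiB -> 700 GiB with indexes
  const withGrowth = withIndexes * growthFactor;          // e.g., doubling over two years
  const withBuffer = withGrowth * (1 + bufferRatio);      // headroom for overhead and operations
  return withBuffer / 1024;                               // GiB -> TiB
}

// Mirrors the example calculation: ~500 GiB of data, 40% index overhead,
// 2x growth, 25% buffer => roughly 1.7 TiB, so provisioning 2 TiB leaves headroom.
console.log(estimateStorageTiB({ dataGiB: 500, indexOverheadRatio: 0.4, growthFactor: 2, bufferRatio: 0.25 }).toFixed(2));
```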
cosmos-db | How To Javascript Vector Index Query | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-javascript-vector-index-query.md | - Title: Index and query vector data in JavaScript- -description: Add vector data Azure Cosmos DB for NoSQL and then query the data efficiently in your JavaScript application. ------ Previously updated : 08/01/2023----# Index and query vectors in Azure Cosmos DB for NoSQL in JavaScript. --Vector search in Azure Cosmos DB for NoSQL is currently a preview feature. You're required to register for the preview before use. This article covers the following steps: --1. Registering for the preview of Vector Search in Azure Cosmos DB for NoSQL -2. Setting up the Azure Cosmos DB container for vector search -3. Authoring vector embedding policy -4. Adding vector indexes to the container indexing policy -5. Creating a container with vector indexes and vector embedding policy -6. Performing a vector search on the stored data. -7. This guide walks through the process of creating vector data, indexing the data, and then querying the data in a container. ---## Prerequisites -- An existing Azure Cosmos DB for NoSQL account.- - If you don't have an Azure subscription, [Try Azure Cosmos DB for NoSQL free](https://cosmos.azure.com/try/). - - If you have an existing Azure subscription, [create a new Azure Cosmos DB for NoSQL account](how-to-create-account.md). -- Latest version of the Azure Cosmos DB [JavaScript](sdk-nodejs.md) SDK.--## Registering for the preview -Vector search for Azure Cosmos DB for NoSQL requires preview feature registration. Follow the below steps to register: --1. Navigate to your Azure Cosmos DB for NoSQL resource page. - -2. Select the "Features" pane under the "Settings" menu item. --3. Select for “Vector Search in Azure Cosmos DB for NoSQL”. --5. Read the description of the feature to confirm you want to enroll in the preview. --6. Select "Enable" to enroll in the preview. --> [!NOTE] -> The registration request will be autoapproved, however it may take several minutes to take effect. - -## Understanding the steps involved in vector search --The following steps assume that you know how to [setup a Cosmos DB NoSQL account and create a database](quickstart-portal.md). The vector search feature is currently only supported on new containers, not existing container. You need to create a new container and then specify the container-level vector embedding policy and the vector indexing policy at the time of creation. --Let’s take an example of creating a database for an internet-based bookstore and you're storing Title, Author, ISBN, and Description for each book. We also define two properties to contain vector embeddings. The first is the “contentVector” property, which contains [text embeddings](../../ai-services/openai/concepts/models.md#embeddings ) generated from the text content of the book (for example, concatenating the “title” “author” “isbn” and “description” properties before creating the embedding). The second is “coverImageVector”, which is generated from [images of the book’s cover](../../ai-services/computer-vision/concept-image-retrieval.md). --1. Create and store vector embeddings for the fields on which you want to perform vector search. -2. Specify the vector embedding paths in the vector embedding policy. -3. Include any desired vector indexes in the indexing policy for the container. 
--For subsequent sections of this article, we consider the below structure for the items stored in our container: --```json -{ -"title": "book-title", -"author": "book-author", -"isbn": "book-isbn", -"description": "book-description", -"contentVector": [2, -1, 4, 3, 5, -2, 5, -7, 3, 1], -"coverImageVector": [0.33, -0.52, 0.45, -0.67, 0.89, -0.34, 0.86, -0.78] -} -``` --## Creating a vector embedding policy for your container. -Next, you need to define a container vector policy. This policy provides information that is used to inform the Azure Cosmos DB query engine how to handle vector properties in the VectorDistance system functions. This also informs the vector indexing policy of necessary information, should you choose to specify one. -The following information is included in the contained vector policy: -- * “path”: The property path that contains vectors  - * “datatype”: The type of the elements of the vector (default Float32)  - * “dimensions”: The length of each vector in the path (default 1536)  - * “distanceFunction”: The metric used to compute distance/similarity (default Cosine)  --For our example with book details, the vector policy can look like the example JSON: --```javascript -const vectorEmbeddingPolicy: VectorEmbeddingPolicy = { - vectorEmbeddings: [ - { - path: "/coverImageVector", - dataType: "float32", - dimensions: 8, - distanceFunction: "euclidean", - }, - { - path: "contentVector", - dataType: "float32", - dimensions: 10, - distanceFunction: "dotproduct", - }, - ], - }; -``` --## Creating a vector index in the indexing policy -Once the vector embedding paths are decided, vector indexes need to be added to the indexing policy. Currently, the vector search feature for Azure Cosmos DB for NoSQL is supported only on new containers so you need to apply the vector policy during the time of container creation and it can’t be modified later. For this example, the indexing policy would look like this: --```javascript -const indexingPolicy: IndexingPolicy = { - vectorIndexes: [ - { path: "/coverImageVector", type: "quantizedFlat" }, - { path: "/contentVector", type: "diskANN" }, - ] -}; -``` --Now create your container as usual. --```javascript -const containerName = "vector embedding container"; - // create container - const { resource: containerdef } = await database.containers.createIfNotExists({ - id: containerName, - vectorEmbeddingPolicy: vectorEmbeddingPolicy, - indexingPolicy: indexingPolicy, - }); -``` ---> [!IMPORTANT] -> Currently vector search in Azure Cosmos DB for NoSQL is supported on new containers only. You need to set both the container vector policy and any vector indexing policy during the time of container creation as it can’t be modified later. Both policies will be modifiable in a future improvement to the preview feature. --## Running vector similarity search query --Once you create a container with the desired vector policy, and insert vector data into the container, you can conduct a vector search using the [Vector Distance](query/vectordistance.md) system function in a query. Suppose you want to search for books about food recipes by looking at the description, you first need to get the embeddings for your query text. In this case, you might want to generate embeddings for the query text – “food recipe”. 
Once you have the embedding for your search query, you can use it in the VectorDistance function in the vector search query and get all the items that are similar to your query as shown here: --```sql -SELECT c.title, VectorDistance(c.contentVector, [1,2,3,4,5,6,7,8,9,10]) AS SimilarityScore -FROM c -ORDER BY VectorDistance(c.contentVector, [1,2,3,4,5,6,7,8,9,10]) -``` --This query retrieves the book titles along with similarity scores with respect to your query. Here is an example in JavaScript: --```javascript -const { resources } = await container.items - .query({ - query: "SELECT c.title, VectorDistance(c.contentVector, @embedding) AS SimilarityScore FROM c ORDER BY VectorDistance(c.contentVector, @embedding)", - parameters: [{ name: "@embedding", value: [1,2,3,4,5,6,7,8,9,10] }] - }) - .fetchAll(); -for (const item of resources) { - console.log(`${item.title}: ${item.SimilarityScore}`); -} -``` ---## Next steps -- [VectorDistance system function](query/vectordistance.md)-- [Vector indexing](../index-policy.md)-- [Setup Azure Cosmos DB for NoSQL for vector search](../vector-search.md). |
cosmos-db | Vector Search | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/vector-search.md | Last updated 5/7/2024 [!INCLUDE[NoSQL](../includes/appliesto-nosql.md)] -Azure Cosmos DB for NoSQL now offers vector indexing and search in preview. This feature is designed to handle high-dimensional vectors, enabling efficient and accurate vector search at any scale. You can now store vectors directly in the documents alongside your data. This means that each document in your database can contain not only traditional schema-free data, but also high-dimensional vectors as other properties of the documents. This colocation of data and vectors allows for efficient indexing and searching, as the vectors are stored in the same logical unit as the data they represent. This simplifies data management, AI application architectures, and the efficiency of vector-based operations. +Azure Cosmos DB for NoSQL now offers vector indexing and search in preview. This feature is designed to handle high-dimensional vectors, enabling efficient and accurate vector search at any scale. You can now store vectors directly in the documents alongside your data. Each document in your database can contain not only traditional schema-free data, but also high-dimensional vectors as other properties of the documents. This colocation of data and vectors allows for efficient indexing and searching, as the vectors are stored in the same logical unit as the data they represent. Keeping vectors and data together simplifies data management, AI application architectures, and the efficiency of vector-based operations. Azure Cosmos DB for NoSQL offers flexibility in choosing the vector indexing method: - A "flat" or k-nearest neighbors exact search (sometimes called brute-force) can provide 100% retrieval recall for smaller, focused vector searches, especially when combined with query filters and partition keys. In a vector store, vector search algorithms are used to index and query embeddings. In the Integrated Vector Database in Azure Cosmos DB for NoSQL, embeddings can be stored, indexed, and queried alongside the original data. This approach eliminates the extra cost of replicating data in a separate pure vector database. Moreover, this architecture keeps the vector embeddings and original data together, which better facilitates multi-modal data operations, and enables greater data consistency, scale, and performance. ## Enroll in the Vector Search Preview Feature-Vector search for Azure Cosmos DB for NoSQL requires preview feature registration on the Features page of your Azure Cosmos DB . Follow the below steps to register: +Vector search for Azure Cosmos DB for NoSQL requires preview feature registration on the Features page of your Azure Cosmos DB account. Follow the steps below to register: 1. Navigate to your Azure Cosmos DB for NoSQL resource page. Performing vector search with Azure Cosmos DB for NoSQL requires you to define a * “dimensions”: The dimensionality or length of each vector in the path. All vectors in a path should have the same number of dimensions. (default 1536). * “distanceFunction”: The metric used to compute distance/similarity. Supported metrics are: * [cosine](https://en.wikipedia.org/wiki/Cosine_similarity), which has values from -1 (least similar) to +1 (most similar). - * [dotproduct](https://en.wikipedia.org/wiki/Dot_product), which has values from -inf (least simialr) to +inf (most similar). 
+ * [dot product](https://en.wikipedia.org/wiki/Dot_product), which has values from -inf (least similar) to +inf (most similar). * [euclidean](https://en.wikipedia.org/wiki/Euclidean_distance), which has values from 0 (most similar) to +inf (least similar). Here are examples of valid vector index policies: ## Perform vector search with queries using VectorDistance() -Once you have created a container with the desired vector policy, and inserted vector data into the container, you can conduct a vector search using the [Vector Distance](query/vectordistance.md) system function in a query. An example of a NoSQL query that projects the similarity score as the alias `SimilarityScore`, and sorts in order of most-similar to least-similar is shown below: +Once you've created a container with the desired vector policy and inserted vector data into the container, you can conduct a vector search using the [Vector Distance](query/vectordistance.md) system function in a query. An example of a NoSQL query that projects the similarity score as the alias `SimilarityScore`, and sorts in order of most-similar to least-similar: ```sql SELECT c.title, VectorDistance(c.contentVector, [1,2,3]) AS SimilarityScore   Vector indexing and search in Azure Cosmos DB for NoSQL has some limitations whi - You can specify, at most, one DiskANN index type per container - Vector indexing is only supported on new containers. - Vectors indexed with the `flat` index type can be at most 505 dimensions. Vectors indexed with the `quantizedFlat` or `DiskANN` index type can be at most 4,096 dimensions.-- `quantizedFlat` utilizes the same quantization method as DiskANN and is not configurable at this time. +- `quantizedFlat` utilizes the same quantization method as DiskANN and isn't configurable at this time. - Shared throughput databases can't use the vector search preview feature at this time. - Ingestion rate should be limited while using an early preview of DiskANN. Vector indexing and search in Azure Cosmos DB for NoSQL has some limitations whi - [DiskANN + Azure Cosmos DB - Microsoft Mechanics Video](https://www.youtube.com/watch?v=MlMPIYONvfQ) - [.NET - How-to Index and query vector data](how-to-dotnet-vector-index-query.md) - [Python - How-to Index and query vector data](how-to-python-vector-index-query.md)-- [JavaScript - How-to Index and query vector data](how-to-javascript-vector-index-query.md) - [Java - How-to Index and query vector data](how-to-java-vector-index-query.md) - [VectorDistance system function](query/vectordistance.md) - [Vector index overview](../index-overview.md#vector-indexes) |
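To build intuition for the cosine, dot product, and euclidean options described in the vector policy above, here's a small, standalone sketch that computes each metric for two example vectors. In real queries this work is done server-side by the `VectorDistance` system function; the snippet only illustrates the value ranges.

```javascript
// Minimal sketch: client-side versions of the three distance metrics,
// only to illustrate their ranges; queries should use VectorDistance().
function dotProduct(a, b) {
  return a.reduce((sum, x, i) => sum + x * b[i], 0);
}

function cosineSimilarity(a, b) {
  const magnitude = (v) => Math.sqrt(dotProduct(v, v));
  return dotProduct(a, b) / (magnitude(a) * magnitude(b)); // -1 .. +1, higher = more similar
}

function euclideanDistance(a, b) {
  return Math.sqrt(a.reduce((sum, x, i) => sum + (x - b[i]) ** 2, 0)); // 0 .. +inf, lower = more similar
}

const v1 = [1, 2, 3];
const v2 = [2, 4, 6];
console.log(cosineSimilarity(v1, v2));  // 1 (same direction)
console.log(dotProduct(v1, v2));        // 28
console.log(euclideanDistance(v1, v2)); // ~3.74
```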
cosmos-db | Product Updates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/product-updates.md | Updates that don't directly affect the internals of a cluster are rolled out g Updates that change cluster internals, such as installing a [new minor PostgreSQL version](https://www.postgresql.org/developer/roadmap/), are delivered to existing clusters as part of the next [scheduled maintenance](concepts-maintenance.md) event. Such updates are available immediately to newly created clusters. +### July 2024 +* [MD5 hashing is disabled](./reference-limits.md#security) in Azure Cosmos DB for PostgreSQL. + ### May 2024 * General availability: [The latest minor PostgreSQL version updates](reference-versions.md#postgresql-versions) (12.19, 13.15, 14.12, 15.7, and 16.3) are now available. * [The last update for PostgreSQL 11](./reference-versions.md#postgresql-version-11-and-older) was released by the community in November 2023. |
cosmos-db | Reference Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-limits.md | In Azure Cosmos DB for PostgreSQL clusters with [burstable compute](concepts-bur <a name='azure-active-directory-authentication'></a> ### Microsoft Entra ID authentication+ If [Microsoft Entra ID](./concepts-authentication.md#azure-active-directory-authentication-preview) is enabled on an Azure Cosmos DB for PostgreSQL cluster, the following is currently **not supported**: * PostgreSQL 11, 12, and 13 * Microsoft Entra groups -### Database creation +## Security ++MD5 hashing is disabled in Azure Cosmos DB for PostgreSQL, which impacts the following areas: +* Native Postgres passwords are hashed using the SCRAM-SHA-256 method only. +* [pgcrypto extension](https://www.postgresql.org/docs/current/static/pgcrypto.html): MD5 isn't available as a hashing method. +* [uuid-ossp extension](https://www.postgresql.org/docs/current/static/uuid-ossp.html): MD5 isn't available as a hashing method. +* Built-in Postgres functions. For instance, `SELECT md5('your_string');` +* Custom functions, such as PL/pgSQL functions that use MD5 hashing. ++## Database creation The Azure portal provides credentials to connect to exactly one database per cluster. Creating another database is currently not allowed, and the CREATE DATABASE command fails with an error. |
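Because MD5 isn't available, applications that previously called `md5()` can switch to a SHA-2 digest from pgcrypto. Here's a minimal sketch using the Node.js `pg` driver; the connection string is a placeholder, and it assumes the pgcrypto extension has already been created on the database.

```javascript
// Minimal sketch: hash a value with SHA-256 via pgcrypto instead of md5().
// Assumes: the `pg` package, a valid connection string, and CREATE EXTENSION pgcrypto already run.
const { Client } = require("pg");

async function sha256Hash(connectionString, value) {
  const client = new Client({ connectionString });
  await client.connect();
  try {
    const { rows } = await client.query(
      "SELECT encode(digest($1::text, 'sha256'), 'hex') AS hash",
      [value]
    );
    return rows[0].hash;
  } finally {
    await client.end();
  }
}
```

For password-style hashing, pgcrypto's `crypt()` with `gen_salt('bf')` remains available, and native role passwords continue to work through SCRAM-SHA-256.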
data-factory | Control Flow Web Activity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-web-activity.md | Specify the resource uri for which the access token will be requested using the > [!NOTE] > If your data factory or Synapse workspace is configured with a git repository, you must store your credentials in Azure Key Vault to use basic or client certificate authentication. The service does not store passwords in git. +### Service principal ++Specify the tenant ID, service principal ID, and service principal key, using a secure string for the client secret. ++```json +"authentication": { + "type": "ServicePrincipal", + "tenant": "your_tenant_id", + "servicePrincipalId": "your_client_id", + "servicePrincipalKey": { + "type": "SecureString", + "value": "your_client_secret" + }, + "resource": "https://management.azure.com/" +} +``` + ## Request payload schema When you use the POST/PUT method, the body property represents the payload that is sent to the endpoint. You can pass linked services and datasets as part of the payload. Here is the schema for the payload: |
defender-for-cloud | Concept Agentless Containers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-agentless-containers.md | Agentless container posture provides the following capabilities: - **[Agentless vulnerability assessment](agentless-vulnerability-assessment-azure.md)** - provides vulnerability assessment for all container images, including recommendations for registry and runtime, near real-time scans of new images, daily refresh of results, exploitability insights, and more. Vulnerability information is added to the security graph for contextual risk assessment and calculation of attack paths, and hunting capabilities. - **[Attack path analysis](concept-attack-path.md)** - Contextual risk assessment exposes exploitable paths that attackers might use to breach your environment and are reported as attack paths to help prioritize posture issues that matter most in your environment. - **[Enhanced risk-hunting](how-to-manage-cloud-security-explorer.md)** - Enables security admins to actively hunt for posture issues in their containerized assets through queries (built-in and custom) and [security insights](attack-path-reference.md#insights) in the [security explorer](how-to-manage-cloud-security-explorer.md).-- **Control plane hardening** - Defender for Cloud continuously assesses the configurations of your clusters and compares them with the initiatives applied to your subscriptions. When it finds misconfigurations, Defender for Cloud generates security recommendations that are available on Defender for Cloud's Recommendations page. The recommendations let you investigate and remediate issues. For details on the recommendations included with this capability, check out the [containers section](recommendations-reference.md#container-recommendations) of the recommendations reference table for recommendations of the type **control plane**.+- **Control plane hardening** - Defender for Cloud continuously assesses the configurations of your clusters and compares them with the initiatives applied to your subscriptions. When it finds misconfigurations, Defender for Cloud generates security recommendations that are available on Defender for Cloud's Recommendations page. The recommendations let you investigate and remediate issues. For details on the recommendations included with this capability, check out the [container recommendations](recommendations-reference-container.md) of the type **control plane**. ## Next steps |
defender-for-cloud | Custom Dashboards Azure Workbooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/custom-dashboards-azure-workbooks.md | -# Create rich, interactive reports of Defender for Cloud data by using workbooks +# Create interactive reports with Azure Monitor workbooks [Azure workbooks](../azure-monitor/visualize/workbooks-overview.md) are flexible canvas that you can use to analyze data and create rich, visual reports in the Azure portal. In workbooks, you can access multiple data sources across Azure. Combine workbooks into unified, interactive experiences. In Defender for Cloud, you can use integrated Azure workbooks functionality to b - [Vulnerability Assessment Findings workbook](#vulnerability-assessment-findings-workbook): View the findings of vulnerability scans of your Azure resources. - [Compliance Over Time workbook](#compliance-over-time-workbook): View the status of a subscription's compliance with regulatory standards or industry standards that you select. - [Active Alerts workbook](#active-alerts-workbook): View active alerts by severity, type, tag, MITRE ATT&CK tactics, and location.-- Price Estimation workbook: View monthly, consolidated price estimations for Defender for Cloud plans based on the resource telemetry in your environment. The numbers are estimates that are based on retail prices and don't represent actual billing or invoice data.+- Price Estimation workbook: View monthly, consolidated price estimations for plans in Defender for Cloud, based on the resource telemetry in your environment. The numbers are estimates that are based on retail prices and don't represent actual billing or invoice data. - Governance workbook: Use the governance report in the governance rules settings to track progress of the rules that affect your organization. - [DevOps Security (preview) workbook](#devops-security-workbook): View a customizable foundation that helps you visualize the state of your DevOps posture for the connectors that you set up. To see more details about an alert, select the alert. :::image type="content" source="media/custom-dashboards-azure-workbooks/active-alerts-high.png" alt-text="Screenshot that shows all high-severity active alerts for a specific resource."::: -The **MITRE ATT&CK tactics** tab lists alerts in the order of the kill chain and the number of alerts that the subscription has at each stage. +The **MITRE ATT&CK tactics** tab lists alerts in the order of the "kill chain" and the number of alerts that the subscription has at each stage. You can see all the active alerts in a table and filter by columns. This article describes the Defender for Cloud integrated Azure workbooks page th Built-in workbooks get their data from Defender for Cloud recommendations. -- Learn about the many security recommendations in [Security recommendations: A reference guide](recommendations-reference.md).+ |
defender-for-cloud | Defender For Apis Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-apis-introduction.md | Last called data (UTC): The date when API traffic was last observed going to/fro Use recommendations to improve your security posture, harden API configurations, identify critical API risks, and mitigate issues by risk priority. -Defender for API provides a number of recommendations, including recommendations to onboard APIs to the Defender for API plan, disable and remove unused APIs, and best practice recommendations for security, authentication, and access control. +Defender for API provides a [number of recommendations](recommendations-reference-api.md), including recommendations to onboard APIs to the Defender for API plan, disable and remove unused APIs, and best practice recommendations for security, authentication, and access control. -[Review the recommendations reference](recommendations-reference.md). ## Detecting threats |
defender-for-cloud | Defender For Container Registries Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-container-registries-introduction.md | Defender for Cloud identifies Azure Resource Manager based ACR registries in you **Microsoft Defender for container registries** includes a vulnerability scanner to scan the images in your Azure Resource Manager-based Azure Container Registry registries and provide deeper visibility into your images' vulnerabilities. -When issues are found, you'll get notified in the workload protection dashboard. For every vulnerability, Defender for Cloud provides actionable recommendations, along with a severity classification, and guidance for how to remediate the issue. For details of Defender for Cloud's recommendations for containers, see the [reference list of recommendations](recommendations-reference.md#container-recommendations). +When issues are found, you'll get notified in the workload protection dashboard. For every vulnerability, Defender for Cloud provides actionable recommendations, along with a severity classification, and guidance for how to remediate the issue. [Learn more](recommendations-reference-container.md) about container recommendations. Defender for Cloud filters and classifies findings from the scanner. When an image is healthy, Defender for Cloud marks it as such. Defender for Cloud generates security recommendations only for images that have issues to be resolved. Defender for Cloud provides details of each reported vulnerability and a severity classification. Additionally, it gives guidance for how to remediate the specific vulnerabilities found on each image. |
defender-for-cloud | Defender For Containers Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md | -# Overview of Container security in Microsoft Defender for Containers +# Overview-Container protection in Defender for Cloud Microsoft Defender for Containers is a cloud-native solution to improve, monitor, and maintain the security of your containerized assets (Kubernetes clusters, Kubernetes nodes, Kubernetes workloads, container registries, container images and more), and their applications, across multicloud and on-premises environments. You can learn more by watching this video from the Defender for Cloud in the Fie :::image type="content" source="media/defender-for-containers/resource-filter.png" alt-text="Screenshot showing you where the resource filter is located." lightbox="media/defender-for-containers/resource-filter.png"::: - For details included with this capability, check out the [containers section](recommendations-reference.md#container-recommendations) of the recommendations reference table, and look for recommendations with type "Control plane" + For details included with this capability, review [container recommendations](recommendations-reference-container.md), and look for recommendations with type "Control plane". ### Sensor-based capabilities |
defender-for-cloud | Defender For Devops Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-devops-introduction.md | Here, you can add [Azure DevOps](quickstart-onboard-devops.md), [GitHub](quickst The DevOps inventory table allows you to review onboarded DevOps resources and the security information related to them. On this part of the screen you see: |
defender-for-cloud | Exempt Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/exempt-resource.md | For the scope you need, you can create an exemption rule to: This feature is in preview. [!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)] This is a premium Azure Policy capability offered at no extra cost for customers with Microsoft Defender for Cloud's enhanced security features enabled. For other users, charges might apply in the future. - You need the following permissions to make exemptions:- - **Owner** or **Security Admin** or **Resource Policy Contributor** to create an exemption + - **Owner** or **Security Admin** to create an exemption. - To create a rule, you need permissions to edit policies in Azure Policy. [Learn more](../governance/policy/overview.md#azure-rbac-permissions-in-azure-policy). - You can create exemptions for recommendations included in Defender for Cloud's default [Microsoft cloud security benchmark](/security/benchmark/azure/introduction) standard, or any of the supplied regulatory standards. |
defender-for-cloud | Kubernetes Workload Protections | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/kubernetes-workload-protections.md | Last updated 09/04/2023 This page describes how to use Microsoft Defender for Cloud's set of security recommendations dedicated to Kubernetes data plane hardening. > [!TIP]-> For a list of the security recommendations that might appear for Kubernetes clusters and nodes, see the [Container recommendations](recommendations-reference.md#container-recommendations) section of the recommendations reference table. +> For a list of the security recommendations that might appear for Kubernetes clusters and nodes, review [container recommendations](recommendations-reference-container.md). ## Set up your workload protection In this article, you learned how to configure Kubernetes data plane hardening. For related material, see the following pages: -- [Defender for Cloud recommendations for compute](recommendations-reference.md#compute-recommendations)+- [Defender for Cloud recommendations for compute](recommendations-reference-compute.md) - [Alerts for AKS cluster level](alerts-reference.md#alerts-for-containerskubernetes-clusters) |
defender-for-cloud | Multi Factor Authentication Enforcement | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/multi-factor-authentication-enforcement.md | To see which accounts don't have MFA enabled, use the following Azure Resource G - Conditional Access feature to enforce MFA on external users/tenants isn't supported yet. - Conditional Access policy applied to Microsoft Entra roles (such as all global admins, external users, external domain, etc.) isn't supported yet.+- Conditional Access authentication strength isn't supported yet. - External MFA solutions such as Okta, Ping, Duo, and more aren't supported within the identity MFA recommendations. ## Next steps |
defender-for-cloud | Plan Defender For Servers Agents | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-defender-for-servers-agents.md | Defender for Servers is one of the paid plans provided by [Microsoft Defender fo This article is the *fifth* article in the Defender for Servers planning guide. Before you begin, review the earlier articles: -1. [Start planning your deployment](plan-defender-for-servers.md) -1. [Understand where your data is stored and Log Analytics workspace requirements](plan-defender-for-servers-data-workspace.md) -1. [Review Defender for Servers access roles](plan-defender-for-servers-roles.md) -1. [Select a Defender for Servers plan](plan-defender-for-servers-select-plan.md) +1. [Start planning your deployment](plan-defender-for-servers.md). +1. [Understand where your data is stored and Log Analytics workspace requirements](plan-defender-for-servers-data-workspace.md). +1. [Review Defender for Servers access roles](plan-defender-for-servers-roles.md). +1. [Select a plan for Defender for Servers](plan-defender-for-servers-select-plan.md). ## Review Azure Arc requirements The following table describes the agents that are used in Defender for Servers: Feature | Log Analytics agent | Azure Monitor agent | | Foundational CSPM recommendations (free) that depend on the agent: [OS baseline recommendation](apply-security-baseline.md) (Azure VMs) | :::image type="icon" source="./medi) is used.-Foundational CSPM: [System updates recommendations](recommendations-reference.md#compute-recommendations) (Azure VMs) | :::image type="icon" source="./media/icons/yes-icon.png" ::: | Not yet available. +Foundational CSPM: [System updates recommendations](recommendations-reference-compute.md) (Azure VMs) | :::image type="icon" source="./media/icons/yes-icon.png" ::: | Not yet available. Foundational CSPM: [Antimalware/endpoint protection recommendations](endpoint-protection-recommendations-technical.md) (Azure VMs) | :::image type="icon" source="./media/icons/yes-icon.png" ::: | :::image type="icon" source="./media/icons/yes-icon.png" ::: Attack detection at the OS level and network layer, including fileless attack detection<br/><br/> Plan 1 relies on Defender for Endpoint capabilities for attack detection. 
| :::image type="icon" source="./media/icons/yes-icon.png" :::<br/><br/> Plan 2| :::image type="icon" source="./media/icons/yes-icon.png" :::<br/><br/> Plan 2 File integrity monitoring (Plan 2 only) | :::image type="icon" source="./media/icons/yes-icon.png" ::: | :::image type="icon" source="./media/icons/yes-icon.png" ::: Before you deploy Defender for Servers, verify operating system support for agen ## Review agent provisioning -When you enable Defender for Cloud plans, including Defender for Servers, you can choose to automatically provision some agents that are relevant for Defender for Servers: +When you enable plans in Defender for Cloud, including Defender for Servers, you can choose to automatically provision some agents that are relevant for Defender for Servers: - Log Analytics agent and Azure Monitor agent for Azure VMs - Log Analytics agent and Azure Monitor agent for Azure Arc VMs You want to configure a custom workspace | Log Analytics agent, Azure Monitor ag After working through these planning steps, you can start deployment: -- [Enable Defender for Servers](enable-enhanced-security.md) plans+- [Enable plans in Defender for Servers](enable-enhanced-security.md) - [Connect on-premises machines](quickstart-onboard-machines.md) to Azure. - [Connect AWS accounts](quickstart-onboard-aws.md) to Defender for Cloud. - [Connect GCP projects](quickstart-onboard-gcp.md) to Defender for Cloud. |
defender-for-cloud | Protect Network Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/protect-network-resources.md | -# Protect your network resources +# Protect network resources Microsoft Defender for Cloud continuously analyzes the security state of your Azure resources for network security best practices. When Defender for Cloud identifies potential security vulnerabilities, it creates recommendations that guide you through the process of configuring the needed controls to harden and protect your resources. -For a full list of the recommendations for Networking, see [Networking recommendations](recommendations-reference.md#networking-recommendations). +Review Defender for Cloud [networking recommendations](recommendations-reference-networking.md). This article addresses recommendations that apply to your Azure resources from a network security perspective. Networking recommendations center around next generation firewalls, Network Security Groups, JIT VM access, overly permissive inbound traffic rules, and more. For a list of networking recommendations and remediation actions, see [Managing security recommendations in Microsoft Defender for Cloud](review-security-recommendations.md). |
defender-for-cloud | Quickstart Onboard Aws | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md | -# Connect your AWS account to Microsoft Defender for Cloud +# Connect AWS accounts to Microsoft Defender for Cloud Workloads commonly span multiple cloud platforms. Cloud security services must do the same. Microsoft Defender for Cloud helps protect workloads in Amazon Web Services (AWS), but you need to set up the connection between them and Defender for Cloud. The following screenshot shows AWS accounts displayed in the Defender for Cloud You can learn more by watching the [New AWS connector in Defender for Cloud](episode-one.md) video from the *Defender for Cloud in the Field* video series. -For a reference list of all the recommendations that Defender for Cloud can provide for AWS resources, see [Security recommendations for AWS resources - a reference guide](recommendations-reference-aws.md). - ## AWS authentication process Federated authentication is used between Microsoft Defender for Cloud and AWS. All of the resources related to the authentication are created as a part of the CloudFormation template deployment, including: To complete the procedures in this article, you need: - Access to an AWS account. -- **Contributor** permission for the relevant Azure subscription, and **Administrator** permission on the AWS account.+- **Subscription owner** permission for the relevant Azure subscription, and **Administrator** permission on the AWS account. > [!NOTE] > The AWS connector is not available on the national government clouds (Azure Government, Microsoft Azure operated by 21Vianet). Each plan has its own requirements for the native connector. If you choose the Microsoft Defender for Containers plan, you need: - At least one Amazon EKS cluster with permission to access the EKS Kubernetes API server. If you need to create a new EKS cluster, follow the instructions in [Getting started with Amazon EKS – eksctl](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html).-- The resource capacity to create a new Amazon SQS queue, Kinesis Data Firehose delivery stream, and Amazon S3 bucket in the cluster's region.+- The resource capacity to create a new Amazon SQS queue, ```Kinesis Data Firehose``` delivery stream, and Amazon S3 bucket in the cluster's region. ### Defender for SQL If you choose the Microsoft Defender for SQL plan, you need: We recommend that you use the autoprovisioning process to install Azure Arc on all of your existing and future EC2 instances. To enable Azure Arc autoprovisioning, you need **Owner** permission on the relevant Azure subscription. -AWS Systems Manager (SSM) manages autoprovisioning by using the SSM Agent. Some Amazon Machine Images already have the [SSM Agent preinstalled](https://docs.aws.amazon.com/systems-manager/latest/userguide/ami-preinstalled-agent.html). If your EC2 instances don't have the SSM Agent, install it by using these instructions from Amazon: [Install SSM Agent for a hybrid and multicloud environment (Windows)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-win.html). +AWS Systems Manager (SSM) uses the SSM Agent to handle automatic provisioning. Some Amazon Machine Images already have the [SSM Agent preinstalled](https://docs.aws.amazon.com/systems-manager/latest/userguide/ami-preinstalled-agent.html). 
If your EC2 instances don't have the SSM Agent, install it by using these instructions from Amazon: [Install SSM Agent for a hybrid and multicloud environment (Windows)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-win.html). Ensure that your SSM Agent has the managed policy [AmazonSSMManagedInstanceCore](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonSSMManagedInstanceCore.html). It enables core functionality for the AWS Systems Manager service. If you choose the Microsoft Defender for Servers plan, you need: We recommend that you use the autoprovisioning process to install Azure Arc on all of your existing and future EC2 instances. To enable Azure Arc autoprovisioning, you need **Owner** permission on the relevant Azure subscription. -AWS Systems Manager manages autoprovisioning by using the SSM Agent. Some Amazon Machine Images already have the [SSM Agent preinstalled](https://docs.aws.amazon.com/systems-manager/latest/userguide/ami-preinstalled-agent.html). If your EC2 instances don't have the SSM Agent, install it by using either of the following instructions from Amazon: +AWS Systems Manager handles automatic provisioning by using the SSM Agent. Some Amazon Machine Images already have the [SSM Agent preinstalled](https://docs.aws.amazon.com/systems-manager/latest/userguide/ami-preinstalled-agent.html). If your EC2 instances don't have the SSM Agent, install it by using either of the following instructions from Amazon: - [Install SSM Agent for a hybrid and multicloud environment (Windows)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-win.html) - [Install SSM Agent for a hybrid and multicloud environment (Linux)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-linux.html) To view all the active recommendations for your resources by resource type, use ## Integrate with Microsoft Defender XDR -When you enable Defender for Cloud, Defender for Cloud alerts are automatically integrated into the Microsoft Defender Portal. No further steps are needed. +When you enable Defender for Cloud, its security alerts are automatically integrated into the Microsoft Defender portal. No further steps are needed. The integration between Microsoft Defender for Cloud and Microsoft Defender XDR brings your cloud environments into Microsoft Defender XDR. With Defender for Cloud's alerts and cloud correlations integrated into Microsoft Defender XDR, SOC teams can now access all security information from a single interface. |
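Because Azure Arc autoprovisioning on AWS depends on Systems Manager, it can help to confirm which EC2 instances actually report to SSM before you enable the Defender for Servers plan. The following is a small `boto3` sketch that lists managed instances and their agent status; it assumes your AWS credentials and default region are already configured.

```python
# Small sketch: list EC2 instances that report to AWS Systems Manager and show their SSM Agent status.
# Instances missing from this output likely need the SSM Agent installed or their IAM permissions fixed
# (for example, the AmazonSSMManagedInstanceCore managed policy) before Azure Arc autoprovisioning can reach them.
import boto3

ssm = boto3.client("ssm")  # assumes credentials/region are configured in your environment

paginator = ssm.get_paginator("describe_instance_information")
for page in paginator.paginate():
    for info in page["InstanceInformationList"]:
        print(
            info["InstanceId"],
            info.get("PingStatus"),   # "Online" means the agent is checking in
            info.get("PlatformType"),
            info.get("AgentVersion"),
        )
```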
defender-for-cloud | Quickstart Onboard Devops | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-devops.md | -# Quickstart: Connect your Azure DevOps Environment to Microsoft Defender for Cloud +# Connect Azure DevOps environments to Defender for Cloud -This quickstart shows you how to connect your Azure DevOps organizations on the **Environment settings** page in Microsoft Defender for Cloud. This page provides a simple onboarding experience to autodiscover your Azure DevOps repositories. +This page provides a simple onboarding experience to connect Azure DevOps environments to Microsoft Defender for Cloud, and automatically discover Azure DevOps repositories. -By connecting your Azure DevOps organizations to Defender for Cloud, you extend the security capabilities of Defender for Cloud to your Azure DevOps resources. These features include: --- **Foundational Cloud Security Posture Management (CSPM) features**: You can assess your Azure DevOps security posture through Azure DevOps-specific security recommendations. You can also learn about all the [recommendations for DevOps](recommendations-reference.md) resources.--- **Defender CSPM features**: Defender CSPM customers receive code to cloud contextualized attack paths, risk assessments, and insights to identify the most critical weaknesses that attackers can use to breach their environment. Connecting your Azure DevOps repositories allows you to contextualize DevOps security findings with your cloud workloads and identify the origin and developer for timely remediation. For more information, learn how to [identify and analyze risks across your environment](concept-attack-path.md).--API calls that Defender for Cloud performs count against the [Azure DevOps global consumption limit](/azure/devops/integrate/concepts/rate-limits). For more information, see the [common questions about DevOps security in Defender for Cloud](faq-defender-for-devops.yml). +By connecting your Azure DevOps environments to Defender for Cloud, you extend the security capabilities of Defender for Cloud to your Azure DevOps resources and improve security posture. [Learn more](defender-for-devops-introduction.md). ## Prerequisites To complete this quickstart, you need: - An Azure account with Defender for Cloud onboarded. If you don't already have an Azure account, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).+- Note that API calls that Defender for Cloud performs count against the [Azure DevOps global consumption limit](/azure/devops/integrate/concepts/rate-limits). +- Review [common questions about DevOps security in Defender for Cloud](faq-defender-for-devops.yml). ## Availability |
defender-for-cloud | Quickstart Onboard Gcp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md | To complete the procedures in this article, you need: - Access to a GCP project. -- **Contributor** permission on the relevant Azure subscription, and **Owner** permission on the GCP organization or project.+- **Subscription owner** permission on the relevant Azure subscription, and **Owner** permission on the GCP organization or project. You can learn more about Defender for Cloud pricing on [the pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/). |
defender-for-cloud | Quickstart Onboard Github | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-github.md | -By connecting your GitHub organizations to Defender for Cloud, you extend the security capabilities of Defender for Cloud to your GitHub resources. These features include: +By connecting your GitHub environments to Defender for Cloud, you extend the security capabilities of Defender for Cloud to your GitHub resources, and improve security posture. [Learn more](defender-for-devops-introduction.md). -- **Foundational Cloud Security Posture Management (CSPM) features**: You can assess your GitHub security posture through GitHub-specific security recommendations. You can also learn about all the [recommendations for GitHub](recommendations-reference.md) resources. -- **Defender CSPM features**: Defender CSPM customers receive code to cloud contextualized attack paths, risk assessments, and insights to identify the most critical weaknesses that attackers can use to breach their environment. Connecting your GitHub repositories allows you to contextualize DevOps security findings with your cloud workloads and identify the origin and developer for timely remediation. For more information, learn how to [identify and analyze risks across your environment](concept-attack-path.md). ## Prerequisites |
defender-for-cloud | Recommendations Reference Ai | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference-ai.md | + + Title: Reference table for all AI security recommendations in Microsoft Defender for Cloud +description: This article lists all Microsoft Defender for Cloud AI security recommendations that help you harden and protect your resources. +++ Last updated : 03/13/2024+++ai-usage: ai-assisted +++# AI security recommendations ++This article lists all the AI security recommendations you might see in Microsoft Defender for Cloud. ++The recommendations that appear in your environment are based on the resources that you're protecting and on your customized configuration. ++To learn about actions that you can take in response to these recommendations, see [Remediate recommendations in Defender for Cloud](implement-security-recommendations.md). +++## Azure recommendations ++### [Azure AI Services resources should have key access disabled (disable local authentication)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/13b10b36-aa99-4db6-b00c-dcf87c4761e6) ++**Description**: Disabling key access (local authentication) is recommended for security. Azure OpenAI Studio, typically used in development/testing, requires key access and will not function if key access is disabled. After the setting is disabled, Microsoft Entra ID becomes the only access method, which helps maintain the principle of least privilege and granular control. [Learn more](https://aka.ms/AI/auth). ++This recommendation replaces the old recommendation *Cognitive Services accounts should have local authentication methods disabled*. It was formerly in the category Cognitive Services and Cognitive Search, and was updated to comply with the Azure AI Services naming format and align with the relevant resources. ++**Severity**: Medium ++### [Azure AI Services resources should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/f738efb8-005f-680d-3d43-b3db762d6243) ++**Description**: By restricting network access, you can ensure that only allowed networks can access the service. This can be achieved by configuring network rules so that only applications from allowed networks can access the Azure AI service resource. ++- This recommendation is covered by another networking recommendation for Azure AI services - [Cognitive Services accounts should restrict network access](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/f738efb8-005f-680d-3d43-b3db762d6243/showSecurityCenterCommandBar%7E/false). +- The *Cognitive Services accounts should restrict network access* recommendation is, in turn, replaced by a new one (*Azure AI Services should restrict network access*). +- This recommendation replaces the old recommendation *Cognitive Services accounts should restrict network access*. It was formerly in the category Cognitive Services and Cognitive Search, and was updated to comply with the Azure AI Services naming format and align with the relevant resources. 
+- The related policy definition [Cognitive Services accounts should disable public network access](https://ms.portal.azure.com/?feature.msaljs=true#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) has been removed from the regulatory compliance dashboard. +++**Severity**: Medium +++### Resource logs in Azure Machine Learning Workspaces should be enabled (Preview) ++**Description & related policy**: Resource logs enable recreating activity trails to use for investigation purposes when a security incident occurs or when your network is compromised. ++**Severity**: Medium ++### Azure Machine Learning Workspaces should disable public network access (Preview) ++**Description & related policy**: Disabling public network access improves security by ensuring that the Machine Learning Workspaces aren't exposed on the public internet. You can control exposure of your workspaces by creating private endpoints instead. For more information, see [Configure a private endpoint for an Azure Machine Learning workspace](../machine-learning/how-to-configure-private-link.md). ++**Severity**: Medium ++### Azure Machine Learning Computes should be in a virtual network (Preview) ++**Description & related policy**: Azure Virtual Networks provide enhanced security and isolation for your Azure Machine Learning Compute Clusters and Instances, as well as subnets, access control policies, and other features to further restrict access. When a compute is configured with a virtual network, it is not publicly addressable and can only be accessed from virtual machines and applications within the virtual network. ++**Severity**: Medium ++### Azure Machine Learning Computes should have local authentication methods disabled (Preview) ++**Description & related policy**: Disabling local authentication methods improves security by ensuring that Machine Learning Computes require Azure Active Directory identities exclusively for authentication. For more information, see [Azure Policy Regulatory Compliance controls for Azure Machine Learning](../machine-learning/security-controls-policy.md). ++**Severity**: Medium ++### Azure Machine Learning compute instances should be recreated to get the latest software updates (Preview) ++**Description & related policy**: Ensure Azure Machine Learning compute instances run on the latest available operating system. Security is improved and vulnerabilities reduced by running with the latest security patches. For more information, see [Vulnerability management for Azure Machine Learning](../machine-learning/concept-vulnerability-management.md#compute-instance). ++**Severity**: Medium ++### [Diagnostic logs in Azure AI services resources should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dea5192e-1bb3-101b-b70c-4646546f5e1e) ++**Description**: Enable logs for Azure AI services resources. This enables you to recreate activity trails for investigation purposes, when a security incident occurs or your network is compromised. ++This recommendation replaces the old recommendation *Diagnostic logs in Search services should be enabled*. It was formerly in the category Cognitive Services and Cognitive Search, and was updated to comply with the Azure AI Services naming format and align with the relevant resources. 
++**Severity**: Low ++### Resource logs in Azure Databricks Workspaces should be enabled (Preview) ++**Description & related policy**: Resource logs enable recreating activity trails to use for investigation purposes when a security incident occurs or when your network is compromised. ++**Severity**: Medium ++### Azure Databricks Workspaces should disable public network access (Preview) ++**Description & related policy**: Disabling public network access improves security by ensuring that the resource isn't exposed on the public internet. You can control exposure of your resources by creating private endpoints instead. For more information, see [Enable Azure Private Link](/azure/databricks/administration-guide/cloud-configurations/azure/private-link). ++**Severity**: Medium ++### Azure Databricks Clusters should disable public IP (Preview) ++**Description & related policy**: Disabling public IP of clusters in Azure Databricks Workspaces improves security by ensuring that the clusters aren't exposed on the public internet. For more information, see [Secure cluster connectivity](/azure/databricks/security/network/secure-cluster-connectivity). ++**Severity**: Medium ++### Azure Databricks Workspaces should be in a virtual network (Preview) ++**Description & related policy**: Azure Virtual Networks provide enhanced security and isolation for your Azure Databricks Workspaces, as well as subnets, access control policies, and other features to further restrict access. For more information, see [Deploy Azure Databricks in your Azure virtual network](/azure/databricks/administration-guide/cloud-configurations/azure/vnet-inject). ++**Severity**: Medium ++### Azure Databricks Workspaces should use private link (Preview) ++**Description & related policy**: Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Databricks workspaces, you can reduce data leakage risks. For more information, see [Create the workspace and private endpoints in the Azure portal UI](/azure/databricks/administration-guide/cloud-configurations/azure/private-link-standard#create-the-workspace-and-private-endpoints-in-the-azure-portal-ui). ++**Severity**: Medium ++## AWS AI recommendations ++### [AWS Bedrock should have model invocation logging enabled](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/Recommendation.ReactView/assessedResourceId/%2Fsubscriptions%2Fd1d8779d-38d7-4f06-91db-9cbc8de0176f%2Fresourcegroups%2Fsoc-asc%2Fproviders%2Fmicrosoft.security%2Fsecurityconnectors%2Fawsdspm%2Fsecurityentitydata%2Faws-account-in-region-323104580785-us-west-2%2Fproviders%2Fmicrosoft.security%2Fassessments%2F1a202dce-e13f-43ba-8a97-2f9235c5c834/recommendationDisplayName/AWS%20Bedrock%20should%20have%20model%20invocation%20logging%20enabled) ++**Description:** With invocation logging, you can collect the full request data, response data, and metadata associated with all calls performed in your account. This enables you to recreate activity trails for investigation purposes when a security incident occurs. ++**Severity:** Low +++## Related content ++- [Learn about security recommendations](security-policy-concept.md) +- [Review security recommendations](review-security-recommendations.md) |
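Several of the Azure AI services recommendations above hinge on two account settings: whether key (local) access is disabled and whether public network access is restricted. If you want a quick inventory outside the portal, the following hedged sketch uses the `azure-mgmt-cognitiveservices` Python SDK; the property names reflect recent SDK versions and should be treated as assumptions to verify against the version you have installed.

```python
# Hedged sketch: report Azure AI services (Cognitive Services) accounts that still allow key access
# or unrestricted public network access, in the spirit of the recommendations above.
# Property names reflect recent azure-mgmt-cognitiveservices versions; verify against your SDK version.
from azure.identity import DefaultAzureCredential
from azure.mgmt.cognitiveservices import CognitiveServicesManagementClient

SUBSCRIPTION_ID = "<your-subscription-id>"  # placeholder

client = CognitiveServicesManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

for account in client.accounts.list():
    props = account.properties
    local_auth_enabled = not getattr(props, "disable_local_auth", False)
    public_access = getattr(props, "public_network_access", None)
    if local_auth_enabled or public_access == "Enabled":
        print(f"{account.name} ({account.kind}): "
              f"local auth enabled={local_auth_enabled}, public network access={public_access}")
```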
defender-for-cloud | Recommendations Reference Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference-api.md | + + Title: Reference table for all API security recommendations in Microsoft Defender for Cloud +description: This article lists all Microsoft Defender for Cloud API security recommendations that help you harden and protect your resources. +++ Last updated : 03/13/2024+++ai-usage: ai-assisted +++# API/API management security recommendations ++This article lists all the API/API management security recommendations you might see in Microsoft Defender for Cloud. ++The recommendations that appear in your environment are based on the resources that you're protecting and on your customized configuration. ++To learn about actions that you can take in response to these recommendations, see [Remediate recommendations in Defender for Cloud](implement-security-recommendations.md). +++## Azure API recommendations ++### Microsoft Defender for APIs should be enabled ++**Description & related policy**: Enable the Defender for APIs plan to discover and protect API resources against attacks and security misconfigurations. [Learn more](defender-for-apis-deploy.md) ++**Severity**: High ++### Azure API Management APIs should be onboarded to Defender for APIs ++**Description & related policy**: Onboarding APIs to Defender for APIs requires compute and memory utilization on the Azure API Management service. Monitor performance of your Azure API Management service while onboarding APIs, and scale out your Azure API Management resources as needed. ++**Severity**: High ++### API endpoints that are unused should be disabled and removed from the Azure API Management service ++**Description & related policy**: As a security best practice, API endpoints that haven't received traffic for 30 days are considered unused, and should be removed from the Azure API Management service. Keeping unused API endpoints might pose a security risk. These might be APIs that should have been deprecated from the Azure API Management service, but have accidentally been left active. Such APIs typically do not receive the most up-to-date security coverage. ++**Severity**: Low ++### API endpoints in Azure API Management should be authenticated ++**Description & related policy**: API endpoints published within Azure API Management should enforce authentication to help minimize security risk. Authentication mechanisms are sometimes implemented incorrectly or are missing. This allows attackers to exploit implementation flaws and to access data. For APIs published in Azure API Management, this recommendation assesses authentication through verifying the presence of Azure API Management subscription keys for APIs or products where subscription is required, and the execution of policies for validating [JWT](../api-management/validate-jwt-policy.md), [client certificates](../api-management/validate-client-certificate-policy.md), and [Microsoft Entra](../api-management/validate-azure-ad-token-policy.md) tokens. If none of these authentication mechanisms are executed during the API call, the API will receive this recommendation. ++**Severity**: High ++## API management recommendations ++### API Management subscriptions should not be scoped to all APIs ++**Description & related policy**: API Management subscriptions should be scoped to a product or an individual API instead of all APIs, which could result in excessive data exposure. 
+ +**Severity**: Medium ++### API Management calls to API backends should not bypass certificate thumbprint or name validation ++**Description & related policy**: API Management should validate the backend server certificate for all API calls. Enable SSL certificate thumbprint and name validation to improve API security. ++**Severity**: Medium ++### API Management direct management endpoint should not be enabled ++**Description & related policy**: The direct management REST API in Azure API Management bypasses Azure Resource Manager role-based access control, authorization, and throttling mechanisms, thus increasing the vulnerability of your service. ++**Severity**: Low ++### API Management APIs should use only encrypted protocols ++**Description & related policy**: APIs should be available only through encrypted protocols, like HTTPS or WSS. Avoid using unsecured protocols, such as HTTP or WS, to ensure the security of data in transit. ++**Severity**: High ++### API Management secret named values should be stored in Azure Key Vault ++**Description & related policy**: Named values are a collection of name and value pairs in each API Management service. Secret values can be stored either as encrypted text in API Management (custom secrets) or by referencing secrets in Azure Key Vault. Reference secret named values from Azure Key Vault to improve the security of API Management and secrets. Azure Key Vault supports granular access management and secret rotation policies. ++**Severity**: Medium ++### API Management should disable public network access to the service configuration endpoints ++**Description & related policy**: To improve the security of API Management services, restrict connectivity to service configuration endpoints, like the direct access management API, the Git configuration management endpoint, or the self-hosted gateways configuration endpoint. ++**Severity**: Medium ++### API Management minimum API version should be set to 2019-12-01 or higher ++**Description & related policy**: To prevent service secrets from being shared with read-only users, the minimum API version should be set to 2019-12-01 or higher. ++**Severity**: Medium ++### API Management calls to API backends should be authenticated ++**Description & related policy**: Calls from API Management to backends should use some form of authentication, whether via certificates or credentials. This doesn't apply to Service Fabric backends. ++**Severity**: Medium ++++## Related content ++- [Learn about security recommendations](security-policy-concept.md) +- [Review security recommendations](review-security-recommendations.md) |
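To get a head start on the API Management recommendations above, you can enumerate the APIs in each service and flag ones that still allow unencrypted protocols or don't require a subscription. The following is a rough sketch that uses the `azure-mgmt-apimanagement` Python SDK; the operation and field names reflect recent SDK versions and are assumptions to verify, and the checks only approximate the recommendations rather than reproduce them.

```python
# Rough sketch: flag API Management APIs that allow unencrypted protocols (http/ws)
# or don't require a subscription key. This only approximates the recommendations above.
# Operation and field names follow recent azure-mgmt-apimanagement versions; verify before use.
from azure.identity import DefaultAzureCredential
from azure.mgmt.apimanagement import ApiManagementClient

SUBSCRIPTION_ID = "<your-subscription-id>"  # placeholder

client = ApiManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

for service in client.api_management_service.list():
    resource_group = service.id.split("/")[4]  # resource group segment of the ARM resource ID
    for api in client.api.list_by_service(resource_group, service.name):
        protocols = set(api.protocols or [])
        if protocols & {"http", "ws"}:
            print(f"{service.name}/{api.name}: allows unencrypted protocols {sorted(protocols)}")
        if api.subscription_required is False:
            print(f"{service.name}/{api.name}: subscription key not required")
```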
defender-for-cloud | Recommendations Reference App Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference-app-services.md | + + Title: Reference table for Azure App Service security recommendations +description: This article lists the Microsoft Defender for Cloud security recommendations for Azure App Service. +++ Last updated : 03/13/2024+++ai-usage: ai-assisted +++# Azure App Service security recommendations ++This article lists all the Azure App Service security recommendations that you might see in Microsoft Defender for Cloud. ++The recommendations that appear in your environment are based on the resources that you're protecting and on your customized configuration. ++To learn about actions that you can take in response to these recommendations, see [Remediate recommendations in Defender for Cloud](implement-security-recommendations.md). +++> [!TIP] +> If a recommendation's description says *No related policy*, usually it's because that recommendation is dependent on a different recommendation and *its* policy. +> +> For example, the recommendation *Endpoint protection health failures should be remediated* relies on the recommendation that checks whether an endpoint protection solution is even installed (*Endpoint protection solution should be installed*). The underlying recommendation *does* have a policy. Limiting the policies to only the foundational recommendation simplifies policy management. ++## App Services recommendations ++### [API App should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/bf82a334-13b6-ca57-ea75-096fc2ffce50) ++**Description**: Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. +(Related policy: [API App should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fb7ddfbdc-1260-477d-91fd-98bd9be789a6)). ++**Severity**: Medium ++### [CORS should not allow every resource to access API Apps](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e40df93c-7a7c-1b0a-c787-9987ceb98e54) ++**Description**: Cross-Origin Resource Sharing (CORS) should not allow all domains to access your API app. Allow only required domains to interact with your API app. +(Related policy: [CORS should not allow every resource to access your API App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicydefinitions%2f358c20a6-3f9e-4f0e-97ff-c6ce485e2aac)). ++**Severity**: Low ++### [CORS should not allow every resource to access Function Apps](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/7b3d4796-9400-2904-692b-4a5ede7f0a1e) ++**Description**: Cross-Origin Resource Sharing (CORS) should not allow all domains to access your Function app. Allow only required domains to interact with your Function app. +(Related policy: [CORS should not allow every resource to access your Function Apps](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicydefinitions%2f0820b7b9-23aa-4725-a1ce-ae4558f718e5)). 
++**Severity**: Low ++### [CORS should not allow every resource to access Web Applications](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/df4d1739-47f0-60c7-1706-3731fea6ab03) ++**Description**: Cross-Origin Resource Sharing (CORS) should not allow all domains to access your web application. Allow only required domains to interact with your web app. +(Related policy: [CORS should not allow every resource to access your Web Applications](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicydefinitions%2f5744710e-cc2f-4ee8-8809-3b11e89f4bc9)). ++**Severity**: Low ++### [Diagnostic logs in App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/40394a2c-60fb-7cc5-1944-065772e94f05) ++**Description**: Audit enabling of diagnostic logs on the app. +This enables you to recreate activity trails for investigation purposes if a security incident occurs or your network is compromised +(No related policy). ++**Severity**: Medium ++### [Ensure API app has Client Certificates Incoming client certificates set to On](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ce2768c3-a7c7-1bbf-22cd-f9db675a9807) ++**Description**: Client certificates allow for the app to request a certificate for incoming requests. Only clients that have a valid certificate will be able to reach the app. +(Related policy: [Ensure API app has 'Client Certificates (Incoming client certificates)' set to 'On'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f0c192fe8-9cbb-4516-85b3-0ade8bd03886)). ++**Severity**: Medium ++### [FTPS should be required in API apps](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/67fc622b-4ce6-8c52-08ae-9f830036b757) ++**Description**: Enable FTPS enforcement for enhanced security +(Related policy: [FTPS only should be required in your API App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f9a1b8c48-453a-4044-86c3-d8bfd823e4f5)). ++**Severity**: High ++### [FTPS should be required in function apps](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/972a6579-f38f-c0b9-1b4b-a5bbeba3ab5b) ++**Description**: Enable FTPS enforcement for enhanced security +(Related policy: [FTPS only should be required in your Function App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f399b2637-a50f-4f95-96f8-3a145476eb15)). ++**Severity**: High ++### [FTPS should be required in web apps](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/19beaa2a-a126-b4dd-6d35-617f6cc83fca) ++**Description**: Enable FTPS enforcement for enhanced security +(Related policy: [FTPS should be required in your Web App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f4d24b6d4-5e53-4a4f-a7f4-618fa573ee4b)). 
++**Severity**: High ++### [Function App should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/cb0acdc6-0846-fd48-debe-9905af151b6d) ++**Description**: Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. +(Related policy: [Function App should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f6d555dd1-86f2-4f1c-8ed7-5abae7c6cbab)). ++**Severity**: Medium ++### [Function apps should have Client Certificates (Incoming client certificates) enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c2ab4bea-c663-3259-a4cd-03a8feb02825) ++**Description**: Client certificates allow for the app to request a certificate for incoming requests. Only clients with valid certificates will be able to reach the app. +(Related policy: [Function apps should have 'Client Certificates (Incoming client certificates)' enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2feaebaea7-8013-4ceb-9d14-7eb32271373c)). ++**Severity**: Medium ++### [Java should be updated to the latest version for API apps](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/08a3b009-0178-ee60-e357-e7ee5aea59c7) ++**Description**: Periodically, newer versions are released for Java either due to security flaws or to include additional functionality. +Using the latest Java version for API apps is recommended to benefit from security fixes, if any, and/or new functionalities of the latest version. +(Related policy: [Ensure that 'Java version' is the latest, if used as a part of the API app](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f88999f4c-376a-45c8-bcb3-4058f713cf39)). ++**Severity**: Medium ++### [Managed identity should be used in API apps](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/cc6d1865-7617-3cb2-cf7d-4cfc01ece1df) ++**Description**: For enhanced authentication security, use a managed identity. +On Azure, managed identities eliminate the need for developers to have to manage credentials by providing an identity for the Azure resource in Azure AD and using it to obtain Azure Active Directory (Azure AD) tokens. +(Related policy: [Managed identity should be used in your API App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fc4d441f8-f9d9-4a9e-9cef-e82117cb3eef)). ++**Severity**: Medium ++### [Managed identity should be used in function apps](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/23aa9cbe-c2fb-6a2f-6c97-885a6d48c4d1) ++**Description**: For enhanced authentication security, use a managed identity. +On Azure, managed identities eliminate the need for developers to have to manage credentials by providing an identity for the Azure resource in Azure AD and using it to obtain Azure Active Directory (Azure AD) tokens. 
+(Related policy: [Managed identity should be used in your Function App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f0da106f2-4ca3-48e8-bc85-c638fe6aea8f)). ++**Severity**: Medium ++### [Managed identity should be used in web apps](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/4a3d7cd3-f17c-637a-1ffc-614a01dd03cf) ++**Description**: For enhanced authentication security, use a managed identity. +On Azure, managed identities eliminate the need for developers to have to manage credentials by providing an identity for the Azure resource in Azure AD and using it to obtain Azure Active Directory (Azure AD) tokens. +(Related policy: [Managed identity should be used in your Web App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f2b9ad585-36bc-4615-b300-fd4435808332)). ++**Severity**: Medium ++### [Microsoft Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/0876ef51-fee7-449d-ba1e-f2662c7e43c6) ++**Description**: Microsoft Defender for App Service leverages the scale of the cloud, and the visibility that Azure has as a cloud provider, to monitor for common web app attacks. +Microsoft Defender for App Service can discover attacks on your applications and identify emerging attacks. ++Remediating this recommendation will result in charges for protecting your App Service plans. If you don't have any App Service plans in this subscription, no charges will be incurred. +If you create any App Service plans on this subscription in the future, they will automatically be protected and charges will begin at that time. +Learn more in [Protect your web apps and APIs](defender-for-app-service-introduction.md). +(Related policy: [Azure Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicyDefinitions%2f2913021d-f2fd-4f3d-b958-22354e2bdbcb)). ++**Severity**: High ++### [PHP should be updated to the latest version for API apps](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/6b86d069-b3c3-b4d7-47c7-e73ddf786a63) ++**Description**: Periodically, newer versions are released for PHP software either due to security flaws or to include additional functionality. +Using the latest PHP version for API apps is recommended to benefit from security fixes, if any, and/or new functionalities of the latest version. +(Related policy: [Ensure that 'PHP version' is the latest, if used as a part of the API app](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f1bc1795e-d44a-4d48-9b3b-6fff0fd5f9ba)). ++**Severity**: Medium ++### [Python should be updated to the latest version for API apps](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c2c90d64-38e2-e984-1457-7f4a98168c72) ++**Description**: Periodically, newer versions are released for Python software either due to security flaws or to include additional functionality. +Using the latest Python version for API apps is recommended to benefit from security fixes, if any, and/or new functionalities of the latest version. 
+(Related policy: [Ensure that 'Python version' is the latest, if used as a part of the API app](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f74c3584d-afae-46f7-a20a-6f8adba71a16)). ++**Severity**: Medium ++### [Remote debugging should be turned off for API App](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/9172da4e-9571-6e33-2b5b-d742847f3be7) ++**Description**: Remote debugging requires inbound ports to be opened on an API app. Remote debugging should be turned off. +(Related policy: [Remote debugging should be turned off for API Apps](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicydefinitions%2fe9c8d085-d9cc-4b17-9cdc-059f1f01f19e)). ++**Severity**: Low ++### [Remote debugging should be turned off for Function App](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/093c685b-56dd-13a3-8ed5-887a001837a2) ++**Description**: Remote debugging requires inbound ports to be opened on an Azure Function app. Remote debugging should be turned off. +(Related policy: [Remote debugging should be turned off for Function Apps](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicydefinitions%2f0e60b895-3786-45da-8377-9c6b4b6ac5f9)). ++**Severity**: Low ++### [Remote debugging should be turned off for Web Applications](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/64b8637e-4e1d-76a9-0fc9-c1e487a97ed8) ++**Description**: Remote debugging requires inbound ports to be opened on a web application. Remote debugging is currently enabled. If you no longer need to use remote debugging, it should be turned off. +(Related policy: [Remote debugging should be turned off for Web Applications](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicydefinitions%2fcb510bfd-1cba-4d9f-a230-cb0976f4bb71)). ++**Severity**: Low ++### [TLS should be updated to the latest version for API apps](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/5a659d57-117d-bb18-65f6-54e51da1bb9b) ++**Description**: Upgrade to the latest TLS version. +(Related policy: [Latest TLS version should be used in your API App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f8cb6aa8b-9e41-4f4e-aa25-089a7ac2581e)). ++**Severity**: High ++### [TLS should be updated to the latest version for function apps](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/15be5f3c-e0a4-c0fa-fbff-8e50339b4b22) ++**Description**: Upgrade to the latest TLS version. +(Related policy: [Latest TLS version should be used in your Function App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2ff9d614c5-c173-4d56-95a7-b4437057d193)). ++**Severity**: High ++### [TLS should be updated to the latest version for web apps](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/2a54c352-7ca4-4bae-ad46-47ecd9595bd2) ++**Description**: Upgrade to the latest TLS version. 
+(Related policy: [Latest TLS version should be used in your Web App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2ff0e6e85b-9b9f-4a4b-b67b-f730d42f1b0b)). ++**Severity**: High ++### [Web Application should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1b351b29-41ca-6df5-946c-c190a56be5fe) ++**Description**: Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. +(Related policy: [Web Application should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fa4af4a39-4135-47fb-b175-47fbdf85311d)). ++**Severity**: Medium ++### [Web apps should request an SSL certificate for all incoming requests](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ca4e6a5a-3a9a-bad3-798a-d420a1d9bd6d) ++**Description**: Client certificates allow for the app to request a certificate for incoming requests. +Only clients that have a valid certificate will be able to reach the app. +(Related policy: [Ensure WEB app has 'Client Certificates (Incoming client certificates)' set to 'On'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f5bb220d9-2698-4ee4-8404-b9c30c9df609)). ++**Severity**: Medium ++++## Related content ++- [Learn about security recommendations](security-policy-concept.md) +- [Review security recommendations](review-security-recommendations.md) |
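Several of the App Service recommendations in this list map to a handful of site-level settings: HTTPS only, minimum TLS version, FTPS enforcement, and remote debugging. As a starting point, the following hedged sketch reads those settings with the `azure-mgmt-web` Python SDK; attribute names follow the SDK models and should be verified against your installed version.

```python
# Hedged sketch: surface App Service settings that map to several recommendations above
# (HTTPS only, latest TLS, FTPS enforcement, remote debugging turned off).
# Attribute names follow the azure-mgmt-web SDK models; verify against your installed version.
from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient

SUBSCRIPTION_ID = "<your-subscription-id>"  # placeholder

client = WebSiteManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

for site in client.web_apps.list():
    config = client.web_apps.get_configuration(site.resource_group, site.name)
    findings = []
    if not site.https_only:
        findings.append("HTTPS only is off")
    if config.min_tls_version and config.min_tls_version < "1.2":
        findings.append(f"min TLS is {config.min_tls_version}")
    if config.ftps_state == "AllAllowed":
        findings.append("plain FTP is allowed")
    if config.remote_debugging_enabled:
        findings.append("remote debugging is on")
    if findings:
        print(f"{site.name}: " + "; ".join(findings))
```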
defender-for-cloud | Recommendations Reference Aws | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference-aws.md | - Title: Reference table for all security recommendations for AWS resources -description: This article lists all Microsoft Defender for Cloud security recommendations that help you harden and protect your Amazon Web Services (AWS) resources. - Previously updated : 06/09/2024--ai-usage: ai-assisted ---# Security recommendations for Amazon Web Services (AWS) resources --This article lists all the recommendations you might see in Microsoft Defender for Cloud if you connect an Amazon Web Services (AWS) account by using the **Environment settings** page. The recommendations that appear in your environment are based on the resources that you're protecting and on your customized configuration. --To learn about actions that you can take in response to these recommendations, see [Remediate recommendations in Defender for Cloud](implement-security-recommendations.md). --Your secure score is based on the number of security recommendations you completed. To decide which recommendations to resolve first, look at the severity of each recommendation and its potential effect on your secure score. --## AWS Compute recommendations --### [Amazon EC2 instances managed by Systems Manager should have a patch compliance status of COMPLIANT after a patch installation](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/5b3c2887-d7b7-4887-b074-4e6057027709) --**Description**: This control checks whether the compliance status of the Amazon EC2 Systems Manager patch compliance is COMPLIANT or NON_COMPLIANT after the patch installation on the instance. -It only checks instances managed by AWS Systems Manager Patch Manager. -It doesn't check whether the patch was applied within the 30-day limit prescribed by PCI DSS requirement '6.2'. -It also doesn't validate whether the patches applied were classified as security patches. -You should create patching groups with the appropriate baseline settings and ensure in-scope systems are managed by those patch groups in Systems Manager. For more information about patch groups, see [AWS Systems Manager User Guide](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-patch-group-tagging.html). --**Severity**: Medium --### [Amazon EFS should be configured to encrypt file data at rest using AWS KMS](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/4e482075-311f-401e-adc7-f8a8affc5635) --**Description**: This control checks whether Amazon Elastic File System is configured to encrypt the file data using AWS KMS. The check fails in the following cases: -*"[Encrypted](https://docs.aws.amazon.com/efs/latest/ug/API_DescribeFileSystems.html)" is set to "false" in the DescribeFileSystems response. - The "[KmsKeyId](https://docs.aws.amazon.com/efs/latest/ug/API_DescribeFileSystems.html)" key in the [DescribeFileSystems](https://docs.aws.amazon.com/efs/latest/ug/API_DescribeFileSystems.html) response doesn't match the KmsKeyId parameter for [efs-encrypted-check](https://docs.aws.amazon.com/config/latest/developerguide/efs-encrypted-check.html). - Note that this control doesn't use the "KmsKeyId" parameter for [efs-encrypted-check](https://docs.aws.amazon.com/config/latest/developerguide/efs-encrypted-check.html). It only checks the value of "Encrypted". 
For an added layer of security for your sensitive data in Amazon EFS, you should create encrypted file systems. - Amazon EFS supports encryption for file systems at-rest. You can enable encryption of data at rest when you create an Amazon EFS file system. -To learn more about Amazon EFS encryption, see [Data encryption in Amazon EFS](https://docs.aws.amazon.com/efs/latest/ug/encryption.html) in the Amazon Elastic File System User Guide. --**Severity**: Medium --### [Amazon EFS volumes should be in backup plans](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e864e460-158b-4a4a-beb9-16ebc25c1240) --**Description**: This control checks whether Amazon Elastic File System (Amazon EFS) file systems are added to the backup plans in AWS Backup. The control fails if Amazon EFS file systems aren't included in the backup plans. - Including EFS file systems in the backup plans helps you to protect your data from deletion and data loss. --**Severity**: Medium --### [Application Load Balancer deletion protection should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/5c508bf1-26f9-4696-bb61-8341d395e3de) --**Description**: This control checks whether an Application Load Balancer has deletion protection enabled. The control fails if deletion protection isn't configured. -Enable deletion protection to protect your Application Load Balancer from deletion. --**Severity**: Medium --### [Auto Scaling groups associated with a load balancer should use health checks](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/837d6a45-503f-4c95-bf42-323763960b62) --**Description**: Auto Scaling groups that are associated with a load balancer are using Elastic Load Balancing health checks. - PCI DSS doesn't require load balancing or highly available configurations. This is recommended by AWS best practices. --**Severity**: Low --### [AWS accounts should have Azure Arc auto provisioning enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/882a80f0-943f-473e-b6d7-40c7a625540e) --**Description**: For full visibility of the security content from Microsoft Defender for servers, EC2 instances should be connected to Azure Arc. To ensure that all eligible EC2 instances automatically receive Azure Arc, enable autoprovisioning from Defender for Cloud at the AWS account level. Learn more about [Azure Arc](../azure-arc/servers/overview.md), and [Microsoft Defender for Servers](plan-defender-for-servers.md). --**Severity**: High --### [CloudFront distributions should have origin failover configured](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/4779e962-2ea3-4126-aa76-379ea271887c) --**Description**: This control checks whether an Amazon CloudFront distribution is configured with an origin group that has two or more origins. -CloudFront origin failover can increase availability. Origin failover automatically redirects traffic to a secondary origin if the primary origin is unavailable or if it returns specific HTTP response status codes. 
--**Severity**: Medium --### [CodeBuild GitHub or Bitbucket source repository URLs should use OAuth](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/9694d4ef-f21a-40b7-b535-618ac5c5d21e) --**Description**: This control checks whether the GitHub or Bitbucket source repository URL contains either personal access tokens or a user name and password. -Authentication credentials should never be stored or transmitted in clear text or appear in the repository URL. Instead of personal access tokens or user name and password, you should use OAuth to grant authorization for accessing GitHub or Bitbucket repositories. - Using personal access tokens or a user name and password could expose your credentials to unintended data exposure and unauthorized access. --**Severity**: High --### [CodeBuild project environment variables should not contain credentials](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/a88b4b72-b461-4b5e-b024-91da1cbe500f) --**Description**: This control checks whether the project contains the environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`. -Authentication credentials `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` should never be stored in clear text, as this could lead to unintended data exposure and unauthorized access. --**Severity**: High --### [DynamoDB Accelerator (DAX) clusters should be encrypted at rest](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/58e67d3d-8b17-4c1c-9bc4-550b10f0328a) --**Description**: This control checks whether a DAX cluster is encrypted at rest. - Encrypting data at rest reduces the risk of data stored on disk being accessed by a user not authenticated to AWS. The encryption adds another set of access controls to limit the ability of unauthorized users to access to the data. - For example, API permissions are required to decrypt the data before it can be read. --**Severity**: Medium --### [DynamoDB tables should automatically scale capacity with demand](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/47476790-2527-4bdb-b839-3b48ed18dccf) --**Description**: This control checks whether an Amazon DynamoDB table can scale its read and write capacity as needed. This control passes if the table uses either on-demand capacity mode or provisioned mode with auto scaling configured. - Scaling capacity with demand avoids throttling exceptions, which helps to maintain availability of your applications. --**Severity**: Medium --### [EC2 instances should be connected to Azure Arc](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/231dee23-84db-44d2-bd9d-c32fbcfb42a3) --**Description**: Connect your EC2 instances to Azure Arc in order to have full visibility to Microsoft Defender for Servers security content. Learn more about [Azure Arc](../azure-arc/servers/overview.md), and about [Microsoft Defender for Servers](plan-defender-for-servers.md) on hybrid-cloud environment. --**Severity**: High --### [EC2 instances should be managed by AWS Systems Manager](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/4be5393d-cc33-4ef7-acae-80295bc3ae35) --**Description**: Status of the Amazon EC2 Systems Manager patch compliance is 'COMPLIANT' or 'NON_COMPLIANT' after the patch installation on the instance. - Only instances managed by AWS Systems Manager Patch Manager are checked. 
Patches that were applied within the 30-day limit prescribed by PCI DSS requirement '6' aren't checked. --**Severity**: Medium --### [EDR configuration issues should be resolved on EC2s](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/695abd03-82bd-4d7f-a94c-140e8a17666c) --**Description**: To protect virtual machines from the latest threats and vulnerabilities, resolve all identified configuration issues with the installed Endpoint Detection and Response (EDR) solution. <br> Note: Currently, this recommendation only applies to resources with Microsoft Defender for Endpoint (MDE) enabled. --**Severity**: High --### [EDR solution should be installed on EC2s](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/77d09952-2bc2-4495-8795-cc8391452f85) --**Description**: To protect EC2s, install an Endpoint Detection and Response (EDR) solution. EDRs help prevent, detect, investigate, and respond to advanced threats. Use Microsoft Defender for Servers to deploy Microsoft Defender for Endpoint. If a resource is classified as "Unhealthy", it doesn't have a supported EDR solution installed. If you have an EDR solution installed that isn't discoverable by this recommendation, you can exempt it. --**Severity**: High --### [Instances managed by Systems Manager should have an association compliance status of COMPLIANT](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/67a90ae0-b3d1-44f0-9dcf-a03234ebeb65) --**Description**: This control checks whether the status of the AWS Systems Manager association compliance is COMPLIANT or NON_COMPLIANT after the association is run on an instance. The control passes if the association compliance status is COMPLIANT. -A State Manager association is a configuration that is assigned to your managed instances. The configuration defines the state that you want to maintain on your instances. For example, an association can specify that antivirus software must be installed and running on your instances, or that certain ports must be closed. -After you create one or more State Manager associations, compliance status information is immediately available to you in the console or in response to AWS CLI commands or corresponding Systems Manager API operations. For associations, "Configuration" Compliance shows statuses of Compliant or Non-compliant and the severity level assigned to the association, such as *Critical* or *Medium*. To learn more about State Manager association compliance, see [About State Manager association compliance](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-compliance-about.html#sysman-compliance-about-association) in the AWS Systems Manager User Guide. -You must configure your in-scope EC2 instances for Systems Manager association. You must also configure the patch baseline for the security rating of the vendor of patches, and set the autoapproval date to meet PCI DSS *3.2.1* requirement *6.2*. For more guidance, see [Create an association](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-state-assoc.html) in the AWS Systems Manager User Guide. For more information on working with patching in Systems Manager, see [AWS Systems Manager Patch Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-patch.html) in the AWS Systems Manager User Guide. 
--**Severity**: Low --### [Lambda functions should have a dead-letter queue configured](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dcf10b98-798f-4734-9afd-800916bf1e65) --**Description**: This control checks whether a Lambda function is configured with a dead-letter queue. The control fails if the Lambda function isn't configured with a dead-letter queue. -As an alternative to an on-failure destination, you can configure your function with a dead-letter queue to save discarded events for further processing. - A dead-letter queue acts the same as an on-failure destination. It's used when an event fails all processing attempts or expires without being processed. -A dead-letter queue allows you to look back at errors or failed requests to your Lambda function to debug or identify unusual behavior. -From a security perspective, it's important to understand why your function failed and to ensure that your function doesn't drop data or compromise data security as a result. - For example, if your function can't communicate to an underlying resource, that could be a symptom of a denial of service (DoS) attack elsewhere in the network. --**Severity**: Medium --### [Lambda functions should use supported runtimes](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e656e5b7-130c-4fb4-be90-9bdd4f82fdfb) --**Description**: This control checks that the Lambda function settings for runtimes match the expected values set for the supported runtimes for each language. This control checks for the following runtimes: - **nodejs14.x**, **nodejs12.x**, **nodejs10.x**, **python3.8**, **python3.7**, **python3.6**, **ruby2.7**, **ruby2.5**, **java11**, **java8**, **java8.al2**, **go1.x**, **dotnetcore3.1**, **dotnetcore2.1** -[Lambda runtimes](https://docs.aws.amazon.com/lambda/latest/dg/lambda-runtimes.html) are built around a combination of operating system, programming language, and software libraries that are subject to maintenance and security updates. When a runtime component is no longer supported for security updates, Lambda deprecates the runtime. Even though you can't create functions that use the deprecated runtime, the function is still available to process invocation events. Make sure that your Lambda functions are current and don't use out-of-date runtime environments. -To learn more about the supported runtimes that this control checks for the supported languages, see [AWS Lambda runtimes](https://docs.aws.amazon.com/lambda/latest/dg/lambda-runtimes.html) in the AWS Lambda Developer Guide. --**Severity**: Medium --### [Management ports of EC2 instances should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/9b26b102-ccde-4697-aa30-f0621f865f99) --**Description**: Microsoft Defender for Cloud identified some overly permissive inbound rules for management ports in your network. Enable just-in-time access control to protect your Instances from internet-based brute-force attacks. [Learn more.](just-in-time-access-usage.yml) --**Severity**: High --### [Unused EC2 security groups should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/f065cc7b-f63b-4865-b8ff-4a1292e1a5cb) --**Description**: Security groups should be attached to Amazon EC2 instances or to an ENI. - Healthy finding can indicate there are unused Amazon EC2 security groups. 
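To help spot unused security groups outside the portal, here's a minimal boto3 sketch (assuming credentials and region are configured) that compares all security groups in a region against the groups attached to network interfaces; remember that default groups can't be deleted, so filter those out before acting.

```python
import boto3

ec2 = boto3.client("ec2")

# All security group IDs in the region.
all_sgs = {
    sg["GroupId"]
    for page in ec2.get_paginator("describe_security_groups").paginate()
    for sg in page["SecurityGroups"]
}

# Security groups currently attached to at least one network interface (ENI).
in_use = {
    group["GroupId"]
    for page in ec2.get_paginator("describe_network_interfaces").paginate()
    for eni in page["NetworkInterfaces"]
    for group in eni["Groups"]
}

# Candidates for review or removal.
print(sorted(all_sgs - in_use))
```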
--**Severity**: Low --## AWS Container recommendations --### [[Preview] Container images in AWS registry should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/2a139383-ec7e-462a-90ac-b1b60e87d576) --**Description**: Defender for Cloud scans your registry images for known vulnerabilities (CVEs) and provides detailed findings for each scanned image. Scanning and remediating vulnerabilities for container images in the registry helps maintain a secure and reliable software supply chain, reduces the risk of security incidents, and ensures compliance with industry standards. --**Severity**: High --**Type**: Vulnerability Assessment --### [[Preview] Containers running in AWS should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/d5d1e526-363a-4223-b860-f4b6e710859f) --**Description**: Defender for Cloud creates an inventory of all container workloads currently running in your Kubernetes clusters and provides vulnerability reports for those workloads by matching the images being used and the vulnerability reports created for the registry images. Scanning and remediating vulnerabilities of container workloads is critical to ensure a robust and secure software supply chain, reduce the risk of security incidents, and ensures compliance with industry standards. --**Severity**: High --**Type**: Vulnerability Assessment --### [EKS clusters should grant the required AWS permissions to Microsoft Defender for Cloud](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/7d3a977e-46f1-419a-9046-4bd44db80aac) --**Description**: Microsoft Defender for Containers provides protections for your EKS clusters. - To monitor your cluster for security vulnerabilities and threats, Defender for Containers needs permissions for your AWS account. These permissions are used to enable Kubernetes control plane logging on your cluster and establish a reliable pipeline between your cluster and Defender for Cloud's backend in the cloud. - Learn more about [Microsoft Defender for Cloud's security features for containerized environments](defender-for-containers-introduction.md). --**Severity**: High --### [EKS clusters should have Microsoft Defender's extension for Azure Arc installed](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/38307993-84fb-4636-8ce7-3a64466bb5cc) --**Description**: Microsoft Defender's [cluster extension](../azure-arc/kubernetes/extensions.md) provides security capabilities for your EKS clusters. The extension collects data from a cluster and its nodes to identify security vulnerabilities and threats. - The extension works with [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md). -Learn more about [Microsoft Defender for Cloud's security features for containerized environments](defender-for-containers-introduction.md?tabs=defender-for-container-arch-aks). --**Severity**: High --### [Microsoft Defender for Containers should be enabled on AWS connectors](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/11d0f4af-6924-4a2e-8b66-781a4553c828) --**Description**: Microsoft Defender for Containers provides real-time threat protection for containerized environments and generates alerts about suspicious activities. -Use this information to harden the security of Kubernetes clusters and remediate security issues. 
--Important: When you enable Microsoft Defender for Containers and deploy Azure Arc to your EKS clusters, the protections - and charges - begin. If you don't deploy Azure Arc on a cluster, Defender for Containers won't protect it, and no charges are incurred for this Microsoft Defender plan for that cluster. --**Severity**: High --### Data plane recommendations --All the [Kubernetes data plane security recommendations](kubernetes-workload-protections.md#view-and-configure-the-bundle-of-recommendations) are supported for AWS after you [enable Azure Policy for Kubernetes](kubernetes-workload-protections.md#enable-kubernetes-data-plane-hardening). --## AWS Data recommendations --### [Amazon Aurora clusters should have backtracking enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/d0ef47dc-95aa-4765-a075-72c07df8acff) --**Description**: This control checks whether Amazon Aurora clusters have backtracking enabled. -Backups help you to recover more quickly from a security incident. They also strengthen the resilience of your systems. Aurora backtracking reduces the time to recover a database to a point in time. It doesn't require a database restore to do so. -For more information about backtracking in Aurora, see [Backtracking an Aurora DB cluster](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Managing.Backtrack.html) in the Amazon Aurora User Guide. --**Severity**: Medium --### [Amazon EBS snapshots shouldn't be publicly restorable](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/02e8de17-1a01-45cb-b906-6d07a78f4b3c) --**Description**: Amazon EBS snapshots shouldn't be publicly restorable by everyone unless explicitly allowed, to avoid accidental exposure of data. Additionally, permission to change Amazon EBS configurations should be restricted to authorized AWS accounts only. --**Severity**: High --### [Amazon ECS task definitions should have secure networking modes and user definitions](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/0dc124a8-2a69-47c5-a4e1-678d725a33ab) --**Description**: This control checks whether an active Amazon ECS task definition that has host networking mode also has privileged or user container definitions. - The control fails for task definitions that have host network mode and container definitions where privileged=false or is empty and user=root or is empty. -If a task definition has elevated privileges, it is because the customer specifically opted in to that configuration. - This control checks for unexpected privilege escalation when a task definition has host networking enabled but the customer didn't opt in to elevated privileges. --**Severity**: High --### [Amazon Elasticsearch Service domains should encrypt data sent between nodes](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/9b63a099-6c0c-4354-848b-17de1f3c8ae3) --**Description**: This control checks whether Amazon ES domains have node-to-node encryption enabled. HTTPS (TLS) can be used to help prevent potential attackers from eavesdropping on or manipulating network traffic using person-in-the-middle or similar attacks. Only encrypted connections over HTTPS (TLS) should be allowed. Enabling node-to-node encryption for Amazon ES domains ensures that intra-cluster communications are encrypted in transit. There can be a performance penalty associated with this configuration.
You should be aware of and test the performance trade-off before enabling this option. --**Severity**: Medium --### [Amazon Elasticsearch Service domains should have encryption at rest enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/cf747c91-14f3-4b30-aafe-eb12c18fd030) --**Description**: It's important to enable encryption at rest for Amazon ES domains to protect sensitive data. --**Severity**: Medium --### [Amazon RDS database should be encrypted using customer managed key](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/9137f5de-aac8-4cee-a22f-8d81f19be67f) --**Description**: This check identifies RDS databases that are encrypted with default KMS keys and not with customer managed keys. As a leading practice, use customer managed keys to encrypt the data on your RDS databases and maintain control of your keys and data on sensitive workloads. --**Severity**: Medium --### [Amazon RDS instance should be configured with automatic backup settings](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/894259c2-c1d5-47dc-b5c6-b242d5c76fdf) --**Description**: This check identifies RDS instances that aren't set with the automatic backup setting. If Automatic Backup is set, RDS creates a storage volume snapshot of your DB instance, backing up the entire DB instance and not just individual databases, which provides for point-in-time recovery. The automatic backup happens during the specified backup window time and keeps the backups for a limited period of time as defined in the retention period. It's recommended to set automatic backups for your critical RDS servers to help in the data restoration process. --**Severity**: Medium --### [Amazon Redshift clusters should have audit logging enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e2a0ec17-447b-44b6-8646-c0b5584b6b0a) --**Description**: This control checks whether an Amazon Redshift cluster has audit logging enabled. -Amazon Redshift audit logging provides additional information about connections and user activities in your cluster. This data can be stored and secured in Amazon S3 and can be helpful in security audits and investigations. For more information, see [Database audit logging](https://docs.aws.amazon.com/redshift/latest/mgmt/db-auditing.html) in the *Amazon Redshift Cluster Management Guide*. --**Severity**: Medium --### [Amazon Redshift clusters should have automatic snapshots enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/7a152832-6600-49d1-89be-82e474190e13) --**Description**: This control checks whether Amazon Redshift clusters have automated snapshots enabled. It also checks whether the snapshot retention period is greater than or equal to seven. -Backups help you to recover more quickly from a security incident. They strengthen the resilience of your systems. Amazon Redshift takes periodic snapshots by default. This control checks whether automatic snapshots are enabled and retained for at least seven days. For more information about Amazon Redshift automated snapshots, see [Automated snapshots](https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-snapshots.html#about-automated-snapshots) in the *Amazon Redshift Cluster Management Guide*.
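As an illustrative sketch (not part of the recommendation itself, and assuming boto3 credentials and region are configured), this lists Redshift clusters whose automated snapshot retention is shorter than the seven days the control expects:

```python
import boto3

redshift = boto3.client("redshift")

# Flag clusters whose automated snapshot retention period is under seven days.
for page in redshift.get_paginator("describe_clusters").paginate():
    for cluster in page["Clusters"]:
        retention_days = cluster["AutomatedSnapshotRetentionPeriod"]
        if retention_days < 7:
            print(cluster["ClusterIdentifier"], retention_days)
```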
--**Severity**: Medium --### [Amazon Redshift clusters should prohibit public access](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/7f5ac036-11e1-4cda-89b5-a115b9ae4f72) --**Description**: We recommend that Amazon Redshift clusters avoid public accessibility, which is evaluated by the 'publiclyAccessible' field in the cluster configuration item. --**Severity**: High --### [Amazon Redshift should have automatic upgrades to major versions enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/176f9062-64d0-4edd-bb0f-915012a6ef16) --**Description**: This control checks whether automatic major version upgrades are enabled for the Amazon Redshift cluster. -Enabling automatic major version upgrades ensures that the latest major version updates to Amazon Redshift clusters are installed during the maintenance window. - These updates might include security patches and bug fixes. Keeping up to date with patch installation is an important step in securing systems. --**Severity**: Medium --### [Amazon SQS queues should be encrypted at rest](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/340a07a1-7d68-4562-ac25-df77c214fe13) --**Description**: This control checks whether Amazon SQS queues are encrypted at rest. -Server-side encryption (SSE) allows you to transmit sensitive data in encrypted queues. To protect the content of messages in queues, SSE uses keys managed in AWS KMS. -For more information, see [Encryption at rest](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-server-side-encryption.html) in the Amazon Simple Queue Service Developer Guide. --**Severity**: Medium --### [An RDS event notifications subscription should be configured for critical cluster events](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/65659c22-6588-405b-b118-614c2b4ead5b) --**Description**: This control checks whether an Amazon RDS event subscription exists that has notifications enabled for the following source type, event category key-value pairs: DBCluster: ["maintenance", "failure"]. - RDS event notifications use Amazon SNS to make you aware of changes in the availability or configuration of your RDS resources. These notifications allow for rapid response. -For more information about RDS event notifications, see [Using Amazon RDS event notification](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Events.html) in the Amazon RDS User Guide. --**Severity**: Low --### [An RDS event notifications subscription should be configured for critical database instance events](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ff4f3ab3-8ed7-4b4f-a721-4c3b66a59140) --**Description**: This control checks whether an Amazon RDS event subscription exists with notifications enabled for the following source type, event category key-value pairs: DBInstance: ["maintenance", "configuration change", "failure"]. -RDS event notifications use Amazon SNS to make you aware of changes in the availability or configuration of your RDS resources. These notifications allow for rapid response. -For more information about RDS event notifications, see [Using Amazon RDS event notification](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Events.html) in the Amazon RDS User Guide.
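For illustration only, a simplified boto3 sketch (credentials and region assumed configured) that looks for an enabled RDS event subscription covering the critical DB instance categories; note that a subscription with no explicit category list covers all categories, which this simplified check doesn't account for:

```python
import boto3

rds = boto3.client("rds")

# Categories the control expects for the db-instance source type.
required = {"maintenance", "configuration change", "failure"}

subscriptions = rds.describe_event_subscriptions()["EventSubscriptionsList"]
covered = any(
    sub["Enabled"]
    and sub.get("SourceType") == "db-instance"
    and required.issubset(set(sub.get("EventCategoriesList", [])))
    for sub in subscriptions
)
print("Critical DB instance events covered:", covered)
```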
--**Severity**: Low --### [An RDS event notifications subscription should be configured for critical database parameter group events](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c6f24bb0-b696-451c-a26e-0cc9ea8e97e3) --**Description**: This control checks whether an Amazon RDS event subscription exists with notifications enabled for the following source type, event category key-value pairs: DBParameterGroup: ["configuration change"]. - RDS event notifications use Amazon SNS to make you aware of changes in the availability or configuration of your RDS resources. These notifications allow for rapid response. -For more information about RDS event notifications, see [Using Amazon RDS event notification](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Events.html) in the Amazon RDS User Guide. --**Severity**: Low --### [An RDS event notifications subscription should be configured for critical database security group events](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ab5c51fb-ecdb-46de-b8df-c28ae46ce5bc) --**Description**: This control checks whether an Amazon RDS event subscription exists with notifications enabled for the following source type, event category key-value pairs: DBSecurityGroup: ["configuration change", "failure"]. - RDS event notifications use Amazon SNS to make you aware of changes in the availability or configuration of your RDS resources. These notifications allow for a rapid response. -For more information about RDS event notifications, see [Using Amazon RDS event notification](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Events.html) in the Amazon RDS User Guide. --**Severity**: Low --### [API Gateway REST and WebSocket API logging should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/2cac0072-6f56-46f0-9518-ddec3660ee56) --**Description**: This control checks whether all stages of an Amazon API Gateway REST or WebSocket API have logging enabled. - The control fails if logging isn't enabled for all methods of a stage or if the logging level is neither ERROR nor INFO. - API Gateway REST or WebSocket API stages should have relevant logs enabled. API Gateway REST and WebSocket API execution logging provides detailed records of requests made to API Gateway REST and WebSocket API stages. - These records include API integration backend responses, Lambda authorizer responses, and the requestId for AWS integration endpoints. --**Severity**: Medium --### [API Gateway REST API cache data should be encrypted at rest](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1a0ce4e0-b61e-4ec7-ab65-aeaff3893bd3) --**Description**: This control checks whether all methods in API Gateway REST API stages that have cache enabled are encrypted. The control fails if any method in an API Gateway REST API stage is configured to cache and the cache isn't encrypted. - Encrypting data at rest reduces the risk of data stored on disk being accessed by a user not authenticated to AWS. It adds another set of access controls to limit unauthorized users' ability to access the data. For example, API permissions are required to decrypt the data before it can be read. - API Gateway REST API caches should be encrypted at rest for an added layer of security.
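As an illustrative sketch only (assuming configured credentials; the REST API ID is hypothetical), the stage method settings expose both whether caching is enabled and whether the cache is encrypted:

```python
import boto3

apigw = boto3.client("apigateway")

rest_api_id = "a1b2c3d4e5"  # hypothetical REST API ID

# Inspect every stage; report methods that cache responses without encryption.
for stage in apigw.get_stages(restApiId=rest_api_id)["item"]:
    for path, settings in stage.get("methodSettings", {}).items():
        if settings.get("cachingEnabled") and not settings.get("cacheDataEncrypted"):
            print(f"Unencrypted cache: stage={stage['stageName']} method={path}")
```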
--**Severity**: Medium --### [API Gateway REST API stages should be configured to use SSL certificates for backend authentication](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ec268d38-c94b-4df3-8b4e-5248fcaaf3fc) --**Description**: This control checks whether Amazon API Gateway REST API stages have SSL certificates configured. - Backend systems use these certificates to authenticate that incoming requests are from API Gateway. - API Gateway REST API stages should be configured with SSL certificates to allow backend systems to authenticate that requests originate from API Gateway. --**Severity**: Medium --### [API Gateway REST API stages should have AWS X-Ray tracing enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/5cbaff4f-f8d5-49fe-9fdc-63c4507ac670) --**Description**: This control checks whether AWS X-Ray active tracing is enabled for your Amazon API Gateway REST API stages. - X-Ray active tracing enables a more rapid response to performance changes in the underlying infrastructure. Changes in performance could result in a lack of availability of the API. - X-Ray active tracing provides real-time metrics of user requests that flow through your API Gateway REST API operations and connected services. --**Severity**: Low --### [API Gateway should be associated with an AWS WAF web ACL](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/d69eb8b0-79ba-4963-a683-a96a8ea787e2) --**Description**: This control checks whether an API Gateway stage uses an AWS WAF web access control list (ACL). - This control fails if an AWS WAF web ACL isn't attached to a REST API Gateway stage. - AWS WAF is a web application firewall that helps protect web applications and APIs from attacks. It enables you to configure an ACL, which is a set of rules that allow, block, or count web requests based on customizable web security rules and conditions that you define. - Ensure that your API Gateway stage is associated with an AWS WAF web ACL to help protect it from malicious attacks. --**Severity**: Medium --### [Application and Classic Load Balancers logging should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/4ba5c359-495f-4ba6-9897-7fdbc0aed675) --**Description**: This control checks whether the Application Load Balancer and the Classic Load Balancer have logging enabled. The control fails if `access_logs.s3.enabled` is false. -Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client's IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and to troubleshoot issues. -To learn more, see [Access logs for your Classic Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/access-log-collection.html) in User Guide for Classic Load Balancers. --**Severity**: Medium --### [Attached EBS volumes should be encrypted at-rest](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/0bde343a-0681-4ee2-883a-027cc1e655b8) --**Description**: This control checks whether the EBS volumes that are in an attached state are encrypted. To pass this check, EBS volumes must be in use and encrypted. If the EBS volume isn't attached, then it isn't subject to this check. 
-For an added layer of security of your sensitive data in EBS volumes, you should enable EBS encryption at rest. Amazon EBS encryption offers a straightforward encryption solution for your EBS resources that doesn't require you to build, maintain, and secure your own key management infrastructure. It uses AWS KMS customer master keys (CMK) when creating encrypted volumes and snapshots. -To learn more about Amazon EBS encryption, see [Amazon EBS encryption](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html) in the Amazon EC2 User Guide for Linux Instances. --**Severity**: Medium --### [AWS Database Migration Service replication instances shouldn't be public](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/132a70b8-ffda-457a-b7a3-e6f2e01fc0af) --**Description**: To protect your replication instances from threats, a private replication instance should have a private IP address that you can't access outside of the replication network. - A replication instance should have a private IP address when the source and target databases are in the same network, and the network is connected to the replication instance's VPC using a VPN, AWS Direct Connect, or VPC peering. - You should also ensure that access to your AWS DMS instance configuration is limited to only authorized users. - To do this, restrict users' IAM permissions to modify AWS DMS settings and resources. --**Severity**: High --### [Classic Load Balancer listeners should be configured with HTTPS or TLS termination](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/773667f7-6511-4aec-ae9c-e3286c56a254) --**Description**: This control checks whether your Classic Load Balancer listeners are configured with HTTPS or TLS protocol for front-end (client to load balancer) connections. The control is applicable if a Classic Load Balancer has listeners. If your Classic Load Balancer doesn't have a listener configured, then the control doesn't report any findings. -The control passes if the Classic Load Balancer listeners are configured with TLS or HTTPS for front-end connections. -The control fails if the listener isn't configured with TLS or HTTPS for front-end connections. -Before you start to use a load balancer, you must add one or more listeners. A listener is a process that uses the configured protocol and port to check for connection requests. Listeners can support both HTTP and HTTPS/TLS protocols. You should always use an HTTPS or TLS listener, so that the load balancer does the work of encryption and decryption in transit. --**Severity**: Medium --### [Classic Load Balancers should have connection draining enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dd60e31e-073a-42b6-9b23-db7ca86fd5e0) --**Description**: This control checks whether Classic Load Balancers have connection draining enabled. -Enabling connection draining on Classic Load Balancers ensures that the load balancer stops sending requests to instances that are deregistering or unhealthy. It keeps the existing connections open. This is useful for instances in Auto Scaling groups, to ensure that connections aren't severed abruptly.
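As a minimal boto3 sketch (credentials and region assumed configured; the load balancer name is hypothetical), you can read the connection draining attribute on a Classic Load Balancer and turn it on if it's off:

```python
import boto3

elb = boto3.client("elb")  # Classic Load Balancer API

name = "my-classic-lb"  # hypothetical load balancer name

attrs = elb.describe_load_balancer_attributes(LoadBalancerName=name)
draining = attrs["LoadBalancerAttributes"]["ConnectionDraining"]
print("Connection draining enabled:", draining["Enabled"])

# Enable draining with a 300-second timeout if it's currently off.
if not draining["Enabled"]:
    elb.modify_load_balancer_attributes(
        LoadBalancerName=name,
        LoadBalancerAttributes={"ConnectionDraining": {"Enabled": True, "Timeout": 300}},
    )
```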
--**Severity**: Medium --### [CloudFront distributions should have AWS WAF enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/0e0d5964-2895-45b1-b646-fcded8d567be) --**Description**: This control checks whether CloudFront distributions are associated with either AWS WAF or AWS WAFv2 web ACLs. The control fails if the distribution isn't associated with a web ACL. -AWS WAF is a web application firewall that helps protect web applications and APIs from attacks. It allows you to configure a set of rules, called a web access control list (web ACL), that allow, block, or count web requests based on customizable web security rules and conditions that you define. Ensure your CloudFront distribution is associated with an AWS WAF web ACL to help protect it from malicious attacks. --**Severity**: Medium --### [CloudFront distributions should have logging enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/88114970-36db-42b3-9549-20608b1ab8ad) --**Description**: This control checks whether server access logging is enabled on CloudFront distributions. The control fails if access logging isn't enabled for a distribution. - CloudFront access logs provide detailed information about every user request that CloudFront receives. Each log contains information such as the date and time the request was received, the IP address of the viewer that made the request, the source of the request, and the port number of the request from the viewer. -These logs are useful for applications such as security and access audits and forensics investigation. For more information on how to analyze access logs, see Querying Amazon CloudFront logs in the Amazon Athena User Guide. --**Severity**: Medium --### [CloudFront distributions should require encryption in transit](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/a67adff8-625f-4891-9f61-43f837d18ad2) --**Description**: This control checks whether an Amazon CloudFront distribution requires viewers to use HTTPS directly or whether it uses redirection. The control fails if ViewerProtocolPolicy is set to allow-all for defaultCacheBehavior or for cacheBehaviors. -HTTPS (TLS) can be used to help prevent potential attackers from using person-in-the-middle or similar attacks to eavesdrop on or manipulate network traffic. Only encrypted connections over HTTPS (TLS) should be allowed. Encrypting data in transit can affect performance. You should test your application with this feature to understand the performance profile and the impact of TLS. --**Severity**: Medium --### [CloudTrail logs should be encrypted at rest using KMS CMKs](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/190f732b-c68e-4816-9961-aba074272627) --**Description**: We recommended configuring CloudTrail to use SSE-KMS. -Configuring CloudTrail to use SSE-KMS provides more confidentiality controls on log data as a given user must have S3 read permission on the corresponding log bucket and must be granted decrypt permission by the CMK policy. --**Severity**: Medium --### [Connections to Amazon Redshift clusters should be encrypted in transit](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/036bb56b-c442-4352-bb4c-5bd0353ad314) --**Description**: This control checks whether connections to Amazon Redshift clusters are required to use encryption in transit. 
The check fails if the Amazon Redshift cluster parameter require_SSL isn't set to *1*. -TLS can be used to help prevent potential attackers from using person-in-the-middle or similar attacks to eavesdrop on or manipulate network traffic. Only encrypted connections over TLS should be allowed. Encrypting data in transit can affect performance. You should test your application with this feature to understand the performance profile and the impact of TLS. --**Severity**: Medium --### [Connections to Elasticsearch domains should be encrypted using TLS 1.2](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/effb5011-f8db-45ac-b981-b5bdfd7beb88) --**Description**: This control checks whether connections to Elasticsearch domains are required to use TLS 1.2. The check fails if the Elasticsearch domain TLSSecurityPolicy isn't Policy-Min-TLS-1-2-2019-07. -HTTPS (TLS) can be used to help prevent potential attackers from using person-in-the-middle or similar attacks to eavesdrop on or manipulate network traffic. Only encrypted connections over HTTPS (TLS) should be allowed. Encrypting data in transit can affect performance. You should test your application with this feature to understand the performance profile and the impact of TLS. TLS 1.2 provides several security enhancements over previous versions of TLS. --**Severity**: Medium --### [DynamoDB tables should have point-in-time recovery enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/cc873508-40c1-41b6-8507-8a431d74f831) --**Description**: This control checks whether point-in-time recovery (PITR) is enabled for an Amazon DynamoDB table. - Backups help you to recover more quickly from a security incident. They also strengthen the resilience of your systems. DynamoDB point-in-time recovery automates backups for DynamoDB tables. It reduces the time to recover from accidental delete or write operations. - DynamoDB tables that have PITR enabled can be restored to any point in time in the last 35 days. --**Severity**: Medium --### [EBS default encryption should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/56406d4c-87b4-4aeb-b1cc-7f6312d78e0a) --**Description**: This control checks whether account-level encryption is enabled by default for Amazon Elastic Block Store (Amazon EBS). - The control fails if the account-level encryption isn't enabled. -When encryption is enabled for your account, Amazon EBS volumes and snapshot copies are encrypted at rest. This adds another layer of protection for your data. -For more information, see [Encryption by default](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html#encryption-by-default) in the Amazon EC2 User Guide for Linux Instances. -Note that the following instance types don't support encryption: R1, C1, and M1. --**Severity**: Medium --### [Elastic Beanstalk environments should have enhanced health reporting enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/4170067b-345d-47ed-ab4a-c6b6046881f1) --**Description**: This control checks whether enhanced health reporting is enabled for your AWS Elastic Beanstalk environments. -Elastic Beanstalk enhanced health reporting enables a more rapid response to changes in the health of the underlying infrastructure. These changes could result in a lack of availability of the application.
-Elastic Beanstalk enhanced health reporting provides a status descriptor to gauge the severity of the identified issues and identify possible causes to investigate. The Elastic Beanstalk health agent, included in supported Amazon Machine Images (AMIs), evaluates logs and metrics of environment EC2 instances. --**Severity**: Low --### [Elastic Beanstalk managed platform updates should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/820f6c6e-f73f-432c-8c60-cae1794ea150) --**Description**: This control checks whether managed platform updates are enabled for the Elastic Beanstalk environment. -Enabling managed platform updates ensures that the latest available platform fixes, updates, and features for the environment are installed. Keeping up to date with patch installation is an important step in securing systems. --**Severity**: High --### [Elastic Load Balancer shouldn't have ACM certificate expired or expiring in 90 days.](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/a5e0d700-3de1-469a-96d2-6536d9a92604) --**Description**: This check identifies Elastic Load Balancers (ELBs) that are using ACM certificates that are expired or expiring within 90 days. AWS Certificate Manager (ACM) is the preferred tool to provision, manage, and deploy your server certificates. With ACM, you can request a certificate or deploy an existing ACM or external certificate to AWS resources. As a best practice, it's recommended to reimport expiring/expired certificates while preserving the ELB associations of the original certificate. --**Severity**: High --### [Elasticsearch domain error logging to CloudWatch Logs should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/f48af569-2e67-464b-9a62-b8df0f85bc5e) --**Description**: This control checks whether Elasticsearch domains are configured to send error logs to CloudWatch Logs. -You should enable error logs for Elasticsearch domains and send those logs to CloudWatch Logs for retention and response. Domain error logs can assist with security and access audits, and can help to diagnose availability issues. --**Severity**: Medium --### [Elasticsearch domains should be configured with at least three dedicated master nodes](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/b4b9a67c-c315-4f9b-b06b-04867a453aab) --**Description**: This control checks whether Elasticsearch domains are configured with at least three dedicated master nodes. This control fails if the domain doesn't use dedicated master nodes. This control passes if Elasticsearch domains have five dedicated master nodes. However, using more than three master nodes might be unnecessary to mitigate the availability risk, and will result in more cost. -An Elasticsearch domain requires at least three dedicated master nodes for high availability and fault-tolerance. Dedicated master node resources can be strained during data node blue/green deployments because there are more nodes to manage. Deploying an Elasticsearch domain with at least three dedicated master nodes ensures sufficient master node resource capacity and cluster operations if a node fails.
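For illustration only, a minimal boto3 sketch (assuming configured credentials; the domain name is hypothetical) that reads the dedicated master configuration from a domain's cluster config:

```python
import boto3

es = boto3.client("es")  # Amazon Elasticsearch Service API

# Hypothetical domain name.
config = es.describe_elasticsearch_domain(DomainName="my-es-domain")
cluster = config["DomainStatus"]["ElasticsearchClusterConfig"]

has_masters = cluster.get("DedicatedMasterEnabled", False)
count = cluster.get("DedicatedMasterCount", 0)

# The control expects dedicated masters to be enabled with a count of at least three.
print("Dedicated masters enabled:", has_masters, "count:", count,
      "compliant:", has_masters and count >= 3)
```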
--**Severity**: Medium --### [Elasticsearch domains should have at least three data nodes](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/994cbcb3-43d4-419d-b5c4-9adc558f3ca2) --**Description**: This control checks whether Elasticsearch domains are configured with at least three data nodes and zoneAwarenessEnabled is true. -An Elasticsearch domain requires at least three data nodes for high availability and fault-tolerance. Deploying an Elasticsearch domain with at least three data nodes ensures cluster operations if a node fails. --**Severity**: Medium --### [Elasticsearch domains should have audit logging enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/12ebb4cd-34b6-4c3a-bee9-7e35f4f6caff) --**Description**: This control checks whether Elasticsearch domains have audit logging enabled. This control fails if an Elasticsearch domain doesn't have audit logging enabled. -Audit logs are highly customizable. They allow you to track user activity on your Elasticsearch clusters, including authentication successes and failures, requests to OpenSearch, index changes, and incoming search queries. --**Severity**: Medium --### [Enhanced monitoring should be configured for RDS DB instances and clusters](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/93e5a579-dd2f-4a56-b827-ebbfe7376b16) --**Description**: This control checks whether enhanced monitoring is enabled for your RDS DB instances. -In Amazon RDS, Enhanced Monitoring enables a more rapid response to performance changes in underlying infrastructure. These performance changes could result in a lack of availability of the data. Enhanced Monitoring provides real-time metrics of the operating system that your RDS DB instance runs on. An agent is installed on the instance. The agent can obtain metrics more accurately than is possible from the hypervisor layer. -Enhanced Monitoring metrics are useful when you want to see how different processes or threads on a DB instance use the CPU. For more information, see [Enhanced Monitoring](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Monitoring.OS.html) in the *Amazon RDS User Guide*. --**Severity**: Low --### [Ensure rotation for customer created CMKs is enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/66748314-d51c-4d9c-b789-eebef29a7039) --**Description**: AWS Key Management Service (KMS) allows customers to rotate the backing key, which is key material stored within the KMS that is tied to the key ID of the Customer Created customer master key (CMK). - It's the backing key that is used to perform cryptographic operations such as encryption and decryption. - Automated key rotation currently retains all prior backing keys so that decryption of encrypted data can take place transparently. It's recommended that CMK key rotation be enabled. - Rotating encryption keys helps reduce the potential impact of a compromised key as data encrypted with a new key can't be accessed with a previous key that might have been exposed. 
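As an illustrative sketch (credentials and region assumed configured), this walks customer managed KMS keys and enables rotation where it's off; note that automatic rotation applies to symmetric customer managed keys, so other key types may need to be skipped:

```python
import boto3

kms = boto3.client("kms")

# Enable rotation on enabled, customer managed keys that don't already rotate.
for page in kms.get_paginator("list_keys").paginate():
    for key in page["Keys"]:
        meta = kms.describe_key(KeyId=key["KeyId"])["KeyMetadata"]
        if meta["KeyManager"] != "CUSTOMER" or meta["KeyState"] != "Enabled":
            continue  # skip AWS managed keys and disabled/pending-deletion keys
        if not kms.get_key_rotation_status(KeyId=key["KeyId"])["KeyRotationEnabled"]:
            kms.enable_key_rotation(KeyId=key["KeyId"])
            print("Enabled rotation for", key["KeyId"])
```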
--**Severity**: Medium --### [Ensure S3 bucket access logging is enabled on the CloudTrail S3 bucket](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/759e80dc-92c2-4afd-afa3-c01294999363) --**Description**: S3 Bucket Access Logging generates a log that contains access records for each request made to your S3 bucket. - An access log record contains details about the request, such as the request type, the resources specified in the request, and the time and date the request was processed. -It's recommended that bucket access logging be enabled on the CloudTrail S3 bucket. -By enabling S3 bucket logging on target S3 buckets, it's possible to capture all events that might affect objects within target buckets. Configuring logs to be placed in a separate bucket allows access to log information, which can be useful in security and incident response workflows. --**Severity**: Low --### [Ensure the S3 bucket used to store CloudTrail logs isn't publicly accessible](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/a41f2846-4a59-44e9-89bb-1f62d4b03a85) --**Description**: CloudTrail logs a record of every API call made in your AWS account. These log files are stored in an S3 bucket. - It's recommended that the bucket policy, or access control list (ACL), applied to the S3 bucket that CloudTrail logs to prevents public access to the CloudTrail logs. -Allowing public access to CloudTrail log content might aid an adversary in identifying weaknesses in the affected account's use or configuration. --**Severity**: High --### [IAM shouldn't have expired SSL/TLS certificates](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/03a8f33c-b01c-4dfc-b627-f98114715ae0) --**Description**: This check identifies expired SSL/TLS certificates. To enable HTTPS connections to your website or application in AWS, you need an SSL/TLS server certificate. You can use ACM or IAM to store and deploy server certificates. Removing expired SSL/TLS certificates eliminates the risk that an invalid certificate will be deployed accidentally to a resource such as AWS Elastic Load Balancer (ELB), which can damage the credibility of the application/website behind the ELB. This check generates alerts if there are any expired SSL/TLS certificates stored in AWS IAM. As a best practice, it's recommended to delete expired certificates. --**Severity**: High --### [Imported ACM certificates should be renewed after a specified time period](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/0e68b4d8-1a5e-47fc-a3eb-b3542fea43f1) --**Description**: This control checks whether ACM certificates in your account are marked for expiration within 30 days. It checks both imported certificates and certificates provided by AWS Certificate Manager. -ACM can automatically renew certificates that use DNS validation. For certificates that use email validation, you must respond to a domain validation email. - ACM also doesn't automatically renew certificates that you import. You must renew imported certificates manually. -For more information about managed renewal for ACM certificates, see [Managed renewal for ACM certificates](https://docs.aws.amazon.com/acm/latest/userguide/managed-renewal.html) in the AWS Certificate Manager User Guide.
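As an illustration (assuming configured credentials), the following boto3 sketch lists imported ACM certificates that are already expired or expire within the next 30 days, so you know which ones need to be reimported manually:

```python
import boto3
from datetime import datetime, timedelta, timezone

acm = boto3.client("acm")
cutoff = datetime.now(timezone.utc) + timedelta(days=30)

# Report imported certificates that are expired or expire within 30 days.
for page in acm.get_paginator("list_certificates").paginate():
    for summary in page["CertificateSummaryList"]:
        arn = summary["CertificateArn"]
        cert = acm.describe_certificate(CertificateArn=arn)["Certificate"]
        if cert.get("Type") == "IMPORTED" and cert.get("NotAfter") and cert["NotAfter"] <= cutoff:
            print(cert["DomainName"], cert["NotAfter"])
```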
--**Severity**: Medium --### [Over-provisioned identities in accounts should be investigated to reduce the Permission Creep Index (PCI)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/2482620f-f324-4add-af68-2e01e27485e9) --**Description**: Over-provisioned identities in accounts should be investigated to reduce the Permission Creep Index (PCI) and to safeguard your infrastructure. Reduce the PCI by removing the unused high risk permission assignments. High PCI reflects risk associated with the identities with permissions that exceed their normal or required usage. --**Severity**: Medium --### [RDS automatic minor version upgrades should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/d352afac-cebc-4e02-b474-7ef402fb1d65) --**Description**: This control checks whether automatic minor version upgrades are enabled for the RDS database instance. -Enabling automatic minor version upgrades ensures that the latest minor version updates to the relational database management system (RDBMS) are installed. These upgrades might include security patches and bug fixes. Keeping up to date with patch installation is an important step in securing systems. --**Severity**: High --### [RDS cluster snapshots and database snapshots should be encrypted at rest](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/4f4fbc5e-0b10-4208-b52f-1f47f1c73b6a) --**Description**: This control checks whether RDS DB snapshots are encrypted. -This control is intended for RDS DB instances. However, it can also generate findings for snapshots of Aurora DB instances, Neptune DB instances, and Amazon DocumentDB clusters. If these findings aren't useful, then you can suppress them. -Encrypting data at rest reduces the risk that an unauthenticated user gets access to data that is stored on disk. Data in RDS snapshots should be encrypted at rest for an added layer of security. --**Severity**: Medium --### [RDS clusters should have deletion protection enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/9e769650-868c-46f5-b8c0-1a8ba12a4c92) --**Description**: This control checks whether RDS clusters have deletion protection enabled. -This control is intended for RDS DB instances. However, it can also generate findings for Aurora DB instances, Neptune DB instances, and Amazon DocumentDB clusters. If these findings aren't useful, then you can suppress them. -Enabling cluster deletion protection is another layer of protection against accidental database deletion or deletion by an unauthorized entity. -When deletion protection is enabled, an RDS cluster can't be deleted. Before a deletion request can succeed, deletion protection must be disabled. --**Severity**: Low --### [RDS DB clusters should be configured for multiple Availability Zones](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/cdf441dd-0ab7-4ef2-a643-de12725e5d5d) --**Description**: RDS DB clusters should be configured for multiple Availability Zones to ensure availability of the data that is stored. - Deployment to multiple Availability Zones allows for automated failover in the event of an Availability Zone availability issue and during regular RDS maintenance events.
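For illustration only, a simplified boto3 sketch (credentials and region assumed configured) that flags clusters whose MultiAZ attribute is false; this is a simplification, since Aurora clusters report availability differently depending on how instances are placed:

```python
import boto3

rds = boto3.client("rds")

# Flag DB clusters that aren't deployed across multiple Availability Zones.
for page in rds.get_paginator("describe_db_clusters").paginate():
    for cluster in page["DBClusters"]:
        if not cluster.get("MultiAZ", False):
            print(cluster["DBClusterIdentifier"])
```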
--**Severity**: Medium --### [RDS DB clusters should be configured to copy tags to snapshots](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/b9ed02d0-afca-4bed-838d-70bf31ecf19a) --**Description**: Identification and inventory of your IT assets is a crucial aspect of governance and security. - You need to have visibility of all your RDS DB clusters so that you can assess their security posture and act on potential areas of weakness. - Snapshots should be tagged in the same way as their parent RDS database clusters. - Enabling this setting ensures that snapshots inherit the tags of their parent database clusters. --**Severity**: Low --### [RDS DB instances should be configured to copy tags to snapshots](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/fcd891e5-c6a2-41ce-bca6-f49ec582f3ce) --**Description**: This control checks whether RDS DB instances are configured to copy all tags to snapshots when the snapshots are created. -Identification and inventory of your IT assets is a crucial aspect of governance and security. - You need to have visibility of all your RDS DB instances so that you can assess their security posture and take action on potential areas of weakness. - Snapshots should be tagged in the same way as their parent RDS database instances. Enabling this setting ensures that snapshots inherit the tags of their parent database instances. --**Severity**: Low --### [RDS DB instances should be configured with multiple Availability Zones](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/70ebbd01-cd79-4bc8-ae85-49f47ccdd5ad) --**Description**: This control checks whether high availability is enabled for your RDS DB instances. - RDS DB instances should be configured for multiple Availability Zones (AZs). This ensures the availability of the data stored. Multi-AZ deployments allow for automated failover if there's an issue with Availability Zone availability and during regular RDS maintenance. --**Severity**: Medium --### [RDS DB instances should have deletion protection enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/8e1f7933-faa9-4379-a9bd-697740dedac8) --**Description**: This control checks whether your RDS DB instances that use one of the listed database engines have deletion protection enabled. -Enabling instance deletion protection is another layer of protection against accidental database deletion or deletion by an unauthorized entity. -While deletion protection is enabled, an RDS DB instance can't be deleted. Before a deletion request can succeed, deletion protection must be disabled. --**Severity**: Low --### [RDS DB instances should have encryption at rest enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/bfa7d2aa-f362-11eb-9a03-0242ac130003) --**Description**: This control checks whether storage encryption is enabled for your Amazon RDS DB instances. -This control is intended for RDS DB instances. However, it can also generate findings for Aurora DB instances, Neptune DB instances, and Amazon DocumentDB clusters. If these findings aren't useful, then you can suppress them. - For an added layer of security for your sensitive data in RDS DB instances, you should configure your RDS DB instances to be encrypted at rest. To encrypt your RDS DB instances and snapshots at rest, enable the encryption option for your RDS DB instances. 
Data that is encrypted at rest includes the underlying storage for DB instances, its automated backups, read replicas, and snapshots. -RDS encrypted DB instances use the open standard AES-256 encryption algorithm to encrypt your data on the server that hosts your RDS DB instances. After your data is encrypted, Amazon RDS handles authentication of access and decryption of your data transparently with a minimal impact on performance. You don't need to modify your database client applications to use encryption. -Amazon RDS encryption is currently available for all database engines and storage types. Amazon RDS encryption is available for most DB instance classes. To learn about DB instance classes that don't support Amazon RDS encryption, see [Encrypting Amazon RDS resources](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html) in the *Amazon RDS User Guide*. --**Severity**: Medium --### [RDS DB Instances should prohibit public access](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/72f3b7f1-76b8-4cf5-8da5-4ba5745b512c) --**Description**: We recommend that you also ensure that access to your RDS instance's configuration is limited to authorized users only, by restricting users' IAM permissions to modify RDS instances' settings and resources. --**Severity**: High --### [RDS snapshots should prohibit public access](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/f64521fc-a9f1-4d43-b667-8d94b4a202af) --**Description**: We recommend only allowing authorized principals to access the snapshot and change Amazon RDS configuration. --**Severity**: High --### [Remove unused Secrets Manager secrets](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/bfa82db5-c112-44f0-89e6-a9adfb9a4028) --**Description**: This control checks whether your secrets have been accessed within a specified number of days. The default value is 90 days. If a secret wasn't accessed within the defined number of days, this control fails. -Deleting unused secrets is as important as rotating secrets. Unused secrets can be abused by their former users, who no longer need access to these secrets. Also, as more users get access to a secret, someone might have mishandled and leaked it to an unauthorized entity, which increases the risk of abuse. Deleting unused secrets helps revoke secret access from users who no longer need it. It also helps to reduce the cost of using Secrets Manager. Therefore, it's essential to routinely delete unused secrets. --**Severity**: Medium --### [S3 buckets should have cross-region replication enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/35713036-bd12-4646-9b92-4c56a761a710) --**Description**: Enabling S3 cross-region replication ensures that multiple versions of the data are available in different distinct Regions. - This allows you to protect your S3 bucket against DDoS attacks and data corruption events. --**Severity**: Low --### [S3 buckets should have server-side encryption enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/3cb793ab-20d3-4677-9723-024c8fed0c23) --**Description**: Enable server-side encryption to protect data in your S3 buckets. - Encrypting the data can prevent access to sensitive data in the event of a data breach. 
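As an illustrative sketch (assuming configured credentials), the following lists buckets that have no default server-side encryption configuration; on newer accounts S3 applies SSE-S3 by default, so the error branch may rarely trigger, but the check logic still reflects what the control evaluates:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

# List buckets that have no default server-side encryption configuration.
for bucket in s3.list_buckets()["Buckets"]:
    try:
        s3.get_bucket_encryption(Bucket=bucket["Name"])
    except ClientError as err:
        if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
            print("No default encryption:", bucket["Name"])
        else:
            raise
```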
--**Severity**: Medium --### [Secrets Manager secrets configured with automatic rotation should rotate successfully](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/bec42e2d-956b-4940-a37d-7c1b1e8c525f) --**Description**: This control checks whether an AWS Secrets Manager secret rotated successfully based on the rotation schedule. The control fails if **RotationOccurringAsScheduled** is **false**. The control doesn't evaluate secrets that don't have rotation configured. -Secrets Manager helps you improve the security posture of your organization. Secrets include database credentials, passwords, and third-party API keys. You can use Secrets Manager to store secrets centrally, encrypt secrets automatically, control access to secrets, and rotate secrets safely and automatically. -Secrets Manager can rotate secrets. You can use rotation to replace long-term secrets with short-term ones. Rotating your secrets limits how long an unauthorized user can use a compromised secret. For this reason, you should rotate your secrets frequently. -In addition to configuring secrets to rotate automatically, you should ensure that those secrets rotate successfully based on the rotation schedule. -To learn more about rotation, see [Rotating your AWS Secrets Manager secrets](https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html) in the AWS Secrets Manager User Guide. --**Severity**: Medium --### [Secrets Manager secrets should be rotated within a specified number of days](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/323f0eb4-ea19-4b55-83e9-d104009616b4) --**Description**: This control checks whether your secrets have been rotated at least once within 90 days. -Rotating secrets can help you to reduce the risk of an unauthorized use of your secrets in your AWS account. Examples include database credentials, passwords, third-party API keys, and even arbitrary text. If you don't change your secrets for a long period of time, the secrets are more likely to be compromised. -As more users get access to a secret, it can become more likely that someone mishandled and leaked it to an unauthorized entity. Secrets can be leaked through logs and cache data. They can be shared for debugging purposes and not changed or revoked once the debugging completes. For all these reasons, secrets should be rotated frequently. -You can configure your secrets for automatic rotation in AWS Secrets Manager. With automatic rotation, you can replace long-term secrets with short-term ones, significantly reducing the risk of compromise. -Security Hub recommends that you enable rotation for your Secrets Manager secrets. To learn more about rotation, see [Rotating your AWS Secrets Manager secrets](https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html) in the AWS Secrets Manager User Guide. --**Severity**: Medium --### [SNS topics should be encrypted at rest using AWS KMS](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/90917e06-2781-4857-9d74-9043c6475d03) --**Description**: This control checks whether an SNS topic is encrypted at rest using AWS KMS. -Encrypting data at rest reduces the risk of data stored on disk being accessed by a user not authenticated to AWS. It also adds another set of access controls to limit the ability of unauthorized users to access the data. -For example, API permissions are required to decrypt the data before it can be read. 
SNS topics should be [encrypted at-rest](https://docs.aws.amazon.com/sns/latest/dg/sns-server-side-encryption.html) for an added layer of security. For more information, see Encryption at rest in the Amazon Simple Notification Service Developer Guide. --**Severity**: Medium --### [VPC flow logging should be enabled in all VPCs](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/3428e584-0fa6-48c0-817e-6d689d7bb879) --**Description**: VPC Flow Logs provide visibility into network traffic that passes through the VPC and can be used to detect anomalous traffic or insight during security events. --**Severity**: Medium --## AWS IdentityAndAccess recommendations --### [Amazon Elasticsearch Service domains should be in a VPC](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/df952171-786d-44b5-b309-9c982bddeb7c) --**Description**: VPC can't contain domains with a public endpoint. -Note: this doesn't evaluate the VPC subnet routing configuration to determine public reachability. --**Severity**: High --### [Amazon S3 permissions granted to other AWS accounts in bucket policies should be restricted](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/de8ae504-ec39-4ffb-b3ef-6e36fdcbb455) --**Description**: Implementing least privilege access is fundamental to reducing security risk and the impact of errors or malicious intent. If an S3 bucket policy allows access from external accounts, it could result in data exfiltration by an insider threat or an attacker. The 'blacklistedactionpatterns' parameter allows for successful evaluation of the rule for S3 buckets. The parameter grants access to external accounts for action patterns that aren't included in the 'blacklistedactionpatterns' list. --**Severity**: High --### [Avoid the use of the "root" account](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/a47a6c3b-0629-406c-ad09-d91f7d9f78a3) --**Description**: The "root" account has unrestricted access to all resources in the AWS account. It's highly recommended that the use of this account be avoided. -The "root" account is the most privileged AWS account. Minimizing the use of this account and adopting the principle of least privilege for access management will reduce the risk of accidental changes and unintended disclosure of highly privileged credentials. --**Severity**: High --### [AWS KMS keys should not be unintentionally deleted](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/10c59743-84c4-4711-adb7-ba895dc57339) --**Description**: This control checks whether KMS keys are scheduled for deletion. The control fails if a KMS key is scheduled for deletion. -KMS keys can't be recovered once deleted. Data encrypted under a KMS key is also permanently unrecoverable if the KMS key is deleted. If meaningful data has been encrypted under a KMS key scheduled for deletion, consider decrypting the data or re-encrypting the data under a new KMS key unless you're intentionally performing a cryptographic erasure. -When a KMS key is scheduled for deletion, a mandatory waiting period is enforced to allow time to reverse the deletion, if it was scheduled in error. The default waiting period is 30 days, but it can be reduced to as short as seven days when the KMS key is scheduled for deletion. During the waiting period, the scheduled deletion can be canceled and the KMS key won't be deleted. 
-For more information regarding deleting KMS keys, see [Deleting KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/deleting-keys.html) in the AWS Key Management Service Developer Guide. --**Severity**: High --### [AWS WAF Classic global web ACL logging should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ad593449-a095-47b5-91b8-894396a1aa7f) --**Description**: This control checks whether logging is enabled for an AWS WAF global Web ACL. This control fails if logging isn't enabled for the web ACL. -Logging is an important part of maintaining the reliability, availability, and performance of AWS WAF globally. It's a business and compliance requirement in many organizations, and allows you to troubleshoot application behavior. It also provides detailed information about the traffic that is analyzed by the web ACL that is attached to AWS WAF. --**Severity**: Medium --### [CloudFront distributions should have a default root object configured](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/186509dc-f326-415f-b085-4d27f1342849) --**Description**: This control checks whether an Amazon CloudFront distribution is configured to return a specific object that is the default root object. The control fails if the CloudFront distribution doesn't have a default root object configured. -A user might sometimes request the distribution's root URL instead of an object in the distribution. When this happens, specifying a default root object can help you to avoid exposing the contents of your web distribution. --**Severity**: High --### [CloudFront distributions should have origin access identity enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/a0ab1f4e-bafb-4947-a7d1-13a9c35c7d82) --**Description**: This control checks whether an Amazon CloudFront distribution with Amazon S3 Origin type has Origin Access Identity (OAI) configured. The control fails if OAI isn't configured. -CloudFront OAI prevents users from accessing S3 bucket content directly. When users access an S3 bucket directly, they effectively bypass the CloudFront distribution and any permissions that are applied to the underlying S3 bucket content. --**Severity**: Medium --### [CloudTrail log file validation should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/324ec96c-9719-46ce-b6a9-e7f4fed7dd6e) --**Description**: To ensure additional integrity checking of CloudTrail logs, we recommend enabling file validation on all CloudTrail trails. --**Severity**: Low --### [CloudTrail should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/2917bcec-6991-4ea4-9e73-156e6ef831e4) --**Description**: AWS CloudTrail is a web service that records AWS API calls for your account and delivers log files to you. Not all services enable logging by default for all APIs and events. - You should implement any additional audit trails other than CloudTrail and review the documentation for each service in CloudTrail Supported Services and Integrations.
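A minimal, CloudFormation-style sketch of a trail with multi-Region coverage and log file validation is shown below; it is only an assumption of how you might configure the trail (which also supports the validation and multi-Region recommendations elsewhere in this list). The trail and bucket names are placeholders, and the surrounding template boilerplate is omitted:

```json
{
  "Type": "AWS::CloudTrail::Trail",
  "Properties": {
    "TrailName": "example-org-trail",
    "S3BucketName": "example-cloudtrail-bucket",
    "IsLogging": true,
    "IsMultiRegionTrail": true,
    "IncludeGlobalServiceEvents": true,
    "EnableLogFileValidation": true
  }
}
```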
--**Severity**: High --### [CloudTrail trails should be integrated with CloudWatch Logs](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/842be2e5-2cd8-420f-969a-6d6b4096c580) --**Description**: In addition to capturing CloudTrail logs within a specified S3 bucket for long-term analysis, real-time analysis can be performed by configuring CloudTrail to send logs to CloudWatch Logs. - For a trail that is enabled in all regions in an account, CloudTrail sends log files from all those regions to a CloudWatch Logs log group. We recommend that CloudTrail logs be sent to CloudWatch Logs to ensure AWS account activity is being captured, monitored, and appropriately alarmed on. -Sending CloudTrail logs to CloudWatch Logs facilitates real-time and historic activity logging based on user, API, resource, and IP address, and provides the opportunity to establish alarms and notifications for anomalous or sensitive account activity. --**Severity**: Low --### [Database logging should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/678b2afa-7fc7-45e5-ad4e-2c49efb57ac8) --**Description**: This control checks whether the following logs of Amazon RDS are enabled and sent to CloudWatch Logs: --- Oracle: (Alert, Audit, Trace, Listener)-- PostgreSQL: (Postgresql, Upgrade)-- MySQL: (Audit, Error, General, SlowQuery)-- MariaDB: (Audit, Error, General, SlowQuery)-- SQL Server: (Error, Agent)-- Aurora: (Audit, Error, General, SlowQuery)-- Aurora-MySQL: (Audit, Error, General, SlowQuery)-- Aurora-PostgreSQL: (Postgresql, Upgrade).-RDS databases should have relevant logs enabled. Database logging provides detailed records of requests made to RDS. Database logs can assist with security and access audits and can help to diagnose availability issues. --**Severity**: Medium --### [Disable direct internet access for Amazon SageMaker notebook instances](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/0991c64b-ccf5-4408-aee9-2ef03d460020) --**Description**: Direct internet access should be disabled for a SageMaker notebook instance. - This checks whether the 'DirectInternetAccess' field is disabled for the notebook instance. - Your instance should be configured with a VPC and the default setting should be Disable - Access the internet through a VPC. - In order to enable internet access to train or host models from a notebook, make sure that your VPC has a NAT gateway and your security group allows outbound connections. Ensure access to your SageMaker configuration is limited to only authorized users, and restrict users' IAM permissions to modify SageMaker settings and resources. --**Severity**: High --### [Do not setup access keys during initial user setup for all IAM users that have a console password](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/655f9340-184f-4b6e-8214-b835003ab0b1) --**Description**: The AWS console defaults the checkbox for creating access keys to enabled. This results in many access keys being generated unnecessarily. - In addition to unnecessary credentials, it also generates unnecessary management work in auditing and rotating these keys.
- Requiring that additional steps be taken by the user after their profile has been created gives a stronger indication of intent that access keys are [a] necessary for their work and [b], once the access key is established on an account, possibly in use somewhere in the organization. --**Severity**: Medium --### [Ensure a support role has been created to manage incidents with AWS Support](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/6614c30d-c9f3-4acd-8371-c8f362148398) --**Description**: AWS provides a support center that can be used for incident notification and response, as well as technical support and customer services. - Create an IAM Role to allow authorized users to manage incidents with AWS Support. -By implementing least privilege for access control, an IAM Role requires an appropriate IAM Policy to allow Support Center Access in order to manage Incidents with AWS Support. --**Severity**: Low --### [Ensure access keys are rotated every 90 days or less](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/d72f547e-c011-4cdb-9dda-8c4d6dc09bf2) --**Description**: Access keys consist of an access key ID and secret access key, which are used to sign programmatic requests that you make to AWS. - AWS users need their own access keys to make programmatic calls to AWS from the AWS Command Line Interface (AWS CLI), Tools for Windows PowerShell, the AWS SDKs, or direct HTTP calls using the APIs for individual AWS services. - It's recommended that all access keys be regularly rotated. - Rotating access keys reduces the window of opportunity for an access key that is associated with a compromised or terminated account to be used. - Access keys should be rotated to ensure that data can't be accessed with an old key, which might have been lost, cracked, or stolen. --**Severity**: Medium --### [Ensure AWS Config is enabled in all regions](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/3ff06f36-f8fd-4af5-bd02-5195593423fb) --**Description**: AWS Config is a web service that performs configuration management of supported AWS resources within your account and delivers log files to you. -The recorded information includes the configuration item (AWS resource), relationships between configuration items (AWS resources), and any configuration changes between resources. -It's recommended that AWS Config be enabled in all regions. --The AWS configuration item history captured by AWS Config enables security analysis, resource change tracking, and compliance auditing. --**Severity**: Medium --### [Ensure CloudTrail is enabled in all regions](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/b3d8e09b-83a6-417a-ae1e-3f5b54576965) --**Description**: AWS CloudTrail is a web service that records AWS API calls for your account and delivers log files to you. -The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service. CloudTrail provides a history of AWS API calls for an account, including API calls made via the Management Console, SDKs, command line tools, and higher-level AWS services (such as CloudFormation). -The AWS API call history produced by CloudTrail enables security analysis, resource change tracking, and compliance auditing.
Additionally: --- ensuring that a multi-region trail exists will ensure that unexpected activity occurring in otherwise unused regions is detected-- ensuring that a multi-region trail exists will ensure that "Global Service Logging" is enabled for a trail by default to capture recording of events generated on AWS global services-- for a multi-region trail, ensuring that management events are configured for all types of Read/Write operations ensures recording of management operations that are performed on all resources in an AWS account--**Severity**: High --### [Ensure credentials unused for 90 days or greater are disabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/f13dc885-79aa-456b-ba28-3428147ecf55) --**Description**: AWS IAM users can access AWS resources using different types of credentials, such as passwords or access keys. - It's recommended that all credentials that have been unused for 90 days or more be removed or deactivated. - Disabling or removing unnecessary credentials reduces the window of opportunity for credentials associated with a compromised or abandoned account to be used. --**Severity**: Medium --### [Ensure IAM password policy expires passwords within 90 days or less](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/729c20d1-fe7c-4e1b-8c9c-ab5ad56d7f96) --**Description**: IAM password policies can require passwords to be rotated or expired after a given number of days. - It's recommended that the password policy expire passwords after 90 days or less. - Reducing the password lifetime increases account resiliency against brute force login attempts. Additionally, requiring regular password changes helps in the following scenarios: --- Passwords can be stolen or compromised sometimes without your knowledge. This can happen via a system compromise, software vulnerability, or internal threat.-- Certain corporate and government web filters or proxy servers have the ability to intercept and record traffic even if it's encrypted.-- Many people use the same password for many systems such as work, email, and personal.-- Compromised end user workstations might have a keystroke logger.--**Severity**: Low --### [Ensure IAM password policy prevents password reuse](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/22e99393-671c-4979-a08a-cd1533da9ece) --**Description**: IAM password policies can prevent the reuse of a given password by the same user. -It's recommended that the password policy prevent the reuse of passwords. - Preventing password reuse increases account resiliency against brute force login attempts. --**Severity**: Low --### [Ensure IAM password policy requires at least one lowercase letter](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1c420241-9bec-4af8-afb7-038a711b7d22) --**Description**: Password policies are, in part, used to enforce password complexity requirements. IAM password policies can be used to ensure passwords are composed of different character sets. - It's recommended that the password policy require at least one lowercase letter. -Setting a password complexity policy increases account resiliency against brute force login attempts.
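For illustration, the complexity, length, reuse, and expiry settings discussed in this group of password policy recommendations map to the account password policy fields shown below (the shape matches what the `GetAccountPasswordPolicy` API returns); the exact values are assumptions that follow the guidance above and should be adjusted to your own policy:

```json
{
  "PasswordPolicy": {
    "MinimumPasswordLength": 14,
    "RequireUppercaseCharacters": true,
    "RequireLowercaseCharacters": true,
    "RequireNumbers": true,
    "RequireSymbols": true,
    "PasswordReusePrevention": 24,
    "MaxPasswordAge": 90,
    "AllowUsersToChangePassword": true
  }
}
```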
--**Severity**: Medium --### [Ensure IAM password policy requires at least one number](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/84fb0ae8-4785-449c-b9ac-e106a2509540) --**Description**: Password policies are, in part, used to enforce password complexity requirements. IAM password policies can be used to ensure passwords are composed of different character sets. - It's recommended that the password policy require at least one number. - Setting a password complexity policy increases account resiliency against brute force login attempts. --**Severity**: Medium --### [Ensure IAM password policy requires at least one symbol](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1919c309-1c8b-4fab-bd8c-7ff77521db40) --**Description**: Password policies are, in part, used to enforce password complexity requirements. - IAM password policies can be used to ensure passwords are composed of different character sets. - It's recommended that the password policy require at least one symbol. - Setting a password complexity policy increases account resiliency against brute force login attempts. --**Severity**: Medium --### [Ensure IAM password policy requires at least one uppercase letter](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/6e5ebe18-e026-4c26-875c-fcbea8089071) --**Description**: Password policies are, in part, used to enforce password complexity requirements. IAM password policies can be used to ensure passwords are composed of different character sets. - It's recommended that the password policy require at least one uppercase letter. - Setting a password complexity policy increases account resiliency against brute force login attempts. --**Severity**: Medium --### [Ensure IAM password policy requires minimum length of 14 or greater](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e109af9f-128b-4774-a40c-aab8eff3934c) --**Description**: Password policies are, in part, used to enforce password complexity requirements. IAM password policies can be used to ensure passwords are at least a given length. -It's recommended that the password policy require a minimum password length of '14'. - Setting a password complexity policy increases account resiliency against brute force login attempts. --**Severity**: Medium --### [Ensure multifactor authentication (MFA) is enabled for all IAM users that have a console password](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/b73d3c97-01e1-43b4-bf01-a459e5eed3de) --**Description**: Multifactor Authentication (MFA) adds an extra layer of protection on top of a user name and password. - With MFA enabled, when a user signs in to an AWS website, they'll be prompted for their user name and password as well as for an authentication code from their AWS MFA device. - It's recommended that MFA be enabled for all accounts that have a console password. -Enabling MFA provides increased security for console access as it requires the authenticating principal to possess a device that emits a time-sensitive key and have knowledge of a credential. --**Severity**: Medium --### [GuardDuty should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/4b32e0a4-44a7-4f18-ad92-549f7d219061) --**Description**: To provide additional protection against intrusions, GuardDuty should be enabled on your AWS account and region.
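A minimal, CloudFormation-style sketch of enabling a GuardDuty detector in a Region is shown below; the template boilerplate is omitted and the finding publishing frequency value is an assumption:

```json
{
  "Type": "AWS::GuardDuty::Detector",
  "Properties": {
    "Enable": true,
    "FindingPublishingFrequency": "FIFTEEN_MINUTES"
  }
}
```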
- Note: GuardDuty might not be a complete solution for every environment. --**Severity**: Medium --### [Hardware MFA should be enabled for the "root" account](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/eb39e935-38fc-4b0c-8cf2-d6affab0306a) --**Description**: The root account is the most privileged user in an account. MFA adds an extra layer of protection on top of a user name and password. With MFA enabled, when a user signs in to an AWS website, they're prompted for their user name and password and for an authentication code from their AWS MFA device. - For Level 2, it's recommended that you protect the root account with a hardware MFA. A hardware MFA has a smaller attack surface than a virtual MFA. For example, a hardware MFA doesn't suffer the attack surface introduced by the mobile smartphone that a virtual MFA resides on. - Using hardware MFA for many, many accounts might create a logistical device management issue. If this occurs, consider implementing this Level 2 recommendation selectively to the highest security accounts. You can then apply the Level 1 recommendation to the remaining accounts. --**Severity**: Low --### [IAM authentication should be configured for RDS clusters](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/3ac30502-52e5-4fc6-af40-095dddfbc8b9) --**Description**: This control checks whether an RDS DB cluster has IAM database authentication enabled. -IAM database authentication allows for password-free authentication to database instances. The authentication uses an authentication token. Network traffic to and from the database is encrypted using SSL. For more information, see IAM database authentication in the Amazon Aurora User Guide. --**Severity**: Medium --### [IAM authentication should be configured for RDS instances](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/cd307f02-2ca7-44b4-8c1b-b580251d613c) --**Description**: This control checks whether an RDS DB instance has IAM database authentication enabled. -IAM database authentication allows authentication to database instances with an authentication token instead of a password. Network traffic to and from the database is encrypted using SSL. For more information, see IAM database authentication in the Amazon Aurora User Guide. --**Severity**: Medium --### [IAM customer managed policies should not allow decryption actions on all KMS keys](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/d088fb9f-11dc-451e-8f79-393916e42bb2) --**Description**: Checks whether the default version of IAM customer managed policies allow principals to use the AWS KMS decryption actions on all resources. This control uses [Zelkova](https://aws.amazon.com/blogs/security/protect-sensitive-data-in-the-cloud-with-automated-reasoning-zelkova), an automated reasoning engine, to validate and warn you about policies that might grant broad access to your secrets across AWS accounts. This control fails if the "kms:Decrypt" or "kms:ReEncryptFrom" actions are allowed on all KMS keys. The control evaluates both attached and unattached customer managed policies. It doesn't check inline policies or AWS managed policies. -With AWS KMS, you control who can use your KMS keys and gain access to your encrypted data. IAM policies define which actions an identity (user, group, or role) can perform on which resources. 
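For example, instead of allowing "kms:Decrypt" on "*", a customer managed policy can scope decryption to a single key. The sketch below is illustrative only; the account ID, Region, and key ID in the ARN are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ScopedDecrypt",
      "Effect": "Allow",
      "Action": [
        "kms:Decrypt",
        "kms:ReEncryptFrom"
      ],
      "Resource": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
    }
  ]
}
```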
Following security best practices, AWS recommends that you allow least privilege. In other words, you should grant to identities only the "kms:Decrypt" or "kms:ReEncryptFrom" permissions and only for the keys that are required to perform a task. Otherwise, the user might use keys that aren't appropriate for your data. -Instead of granting permissions for all keys, determine the minimum set of keys that users need to access encrypted data. Then design policies that allow users to use only those keys. For example, don't allow "kms:Decrypt" permission on all KMS keys. Instead, allow "kms:Decrypt" only on keys in a particular Region for your account. By adopting the principle of least privilege, you can reduce the risk of unintended disclosure of your data. --**Severity**: Medium --### [IAM customer managed policies that you create should not allow wildcard actions for services](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/5a0476c5-a14b-4195-8c31-633511234b38) --**Description**: This control checks whether the IAM identity-based policies that you create have Allow statements that use the \* wildcard to grant permissions for all actions on any service. The control fails if any policy statement includes 'Effect': 'Allow' with 'Action': 'Service:*'. - For example, the following statement in a policy results in a failed finding. --```json -"Statement": [ -{ - "Sid": "EC2-Wildcard", - "Effect": "Allow", - "Action": "ec2:*", - "Resource": "*" -} -] -``` -- The control also fails if you use 'Effect': 'Allow' with 'NotAction': 'service:*'. In that case, the NotAction element provides access to all of the actions in an AWS service, except for the actions specified in NotAction. -This control only applies to customer managed IAM policies. It doesn't apply to IAM policies that are managed by AWS. - When you assign permissions to AWS services, it's important to scope the allowed IAM actions in your IAM policies. You should restrict IAM actions to only those actions that are needed. This helps you to provision least privilege permissions. Overly permissive policies might lead to privilege escalation if the policies are attached to an IAM principal that might not require the permission. -In some cases, you might want to allow IAM actions that have a similar prefix, such as DescribeFlowLogs and DescribeAvailabilityZones. In these authorized cases, you can add a suffixed wildcard to the common prefix. For example, ec2:Describe*. --This control passes if you use a prefixed IAM action with a suffixed wildcard. For example, the following statement in a policy results in a passed finding. --```json - "Statement": [ -{ - "Sid": "EC2-Wildcard", - "Effect": "Allow", - "Action": "ec2:Describe*", - "Resource": "*" -} -] -``` --When you group related IAM actions in this way, you can also avoid exceeding the IAM policy size limits. --**Severity**: Low --### [IAM policies should be attached only to groups or roles](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/a773f81a-0b2d-4f8e-826a-77fc432416c3) --**Description**: By default, IAM users, groups, and roles have no access to AWS resources. IAM policies are the means by which privileges are granted to users, groups, or roles. - It's recommended that IAM policies be applied directly to groups and roles but not users. -Assigning privileges at the group or role level reduces the complexity of access management as the number of users grows.
- Reducing access management complexity might in turn reduce the opportunity for a principal to inadvertently receive or retain excessive privileges. --**Severity**: Low --### [IAM policies that allow full "*:*" administrative privileges should not be created](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1d08b362-7e24-46b0-bed1-4a6c1d1526a5) --**Description**: IAM policies are the means by which privileges are granted to users, groups, or roles. - It's recommended, and considered standard security advice, to grant least privilege; that is, grant only the permissions required to perform a task. - Determine what users need to do and then craft policies for them that let the users perform only those tasks, instead of allowing full administrative privileges. - It's more secure to start with a minimum set of permissions and grant additional permissions as necessary, rather than starting with permissions that are too lenient and then trying to tighten them later. - Providing full administrative privileges instead of restricting them to the minimum set of permissions that the user requires exposes the resources to potentially unwanted actions. - IAM policies that have a statement with "Effect": "Allow" with "Action": "*" over "Resource": "*" should be removed. --**Severity**: High --### [IAM principals should not have IAM inline policies that allow decryption actions on all KMS keys](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/18be55d0-b681-4693-af8d-b8815518d758) --**Description**: Checks whether the inline policies that are embedded in your IAM identities (role, user, or group) allow the AWS KMS decryption actions on all KMS keys. This control uses [Zelkova](https://aws.amazon.com/blogs/security/protect-sensitive-data-in-the-cloud-with-automated-reasoning-zelkova), an automated reasoning engine, to validate and warn you about policies that might grant broad access to your secrets across AWS accounts. -This control fails if "kms:Decrypt" or "kms:ReEncryptFrom" actions are allowed on all KMS keys in an inline policy. -With AWS KMS, you control who can use your KMS keys and gain access to your encrypted data. IAM policies define which actions an identity (user, group, or role) can perform on which resources. Following security best practices, AWS recommends that you allow least privilege. In other words, you should grant to identities only the permissions they need and only for keys that are required to perform a task. Otherwise, the user might use keys that aren't appropriate for your data. -Instead of granting permission for all keys, determine the minimum set of keys that users need to access encrypted data. Then design policies that allow the users to use only those keys. For example, don't allow "kms:Decrypt" permission on all KMS keys. Instead, allow them only on keys in a particular Region for your account. By adopting the principle of least privilege, you can reduce the risk of unintended disclosure of your data. --**Severity**: Medium --### [Lambda functions should restrict public access](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/64b236a0-f9d7-454a-942a-8c2ba3943cf7) --**Description**: Lambda function resource-based policy should restrict public access. This recommendation doesn't check access by internal principals. - Ensure access to the function is restricted to authorized principals only by using least privilege resource-based policies.
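As a hedged sketch, a resource-based policy statement that grants invoke permission to a single named principal, rather than to everyone ("Principal": "*"), could look like the following; the account ID, role name, function name, and Region are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowInvokeFromSpecificRole",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:role/example-invoker-role"
      },
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:us-east-1:111122223333:function:example-function"
    }
  ]
}
```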
--**Severity**: High --### [MFA should be enabled for all IAM users](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/9c676d6f-60cb-4c7b-a484-17164c598016) --**Description**: All IAM users should have multifactor authentication (MFA) enabled. --**Severity**: Medium --### [MFA should be enabled for the "root" account](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1c9ea4ef-3bb5-4f02-b8b9-55e788e1a21a) --**Description**: The root account is the most privileged user in an account. MFA adds an extra layer of protection on top of a user name and password. With MFA enabled, when a user signs in to an AWS website, they're prompted for their user name and password and for an authentication code from their AWS MFA device. - When you use virtual MFA for root accounts, it's recommended that the device used isn't a personal device. Instead, use a dedicated mobile device (tablet or phone) that you manage to keep charged and secured independent of any individual personal devices. - This lessens the risks of losing access to the MFA due to device loss, device trade-in, or if the individual owning the device is no longer employed at the company. --**Severity**: Low --### [Password policies for IAM users should have strong configurations](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/fd751d04-8378-4cf8-8f1b-594ee340ae08) --**Description**: Checks whether the account password policy for IAM users uses the following minimum configurations. --- RequireUppercaseCharacters- Require at least one uppercase character in password. (Default = true)-- RequireLowercaseCharacters- Require at least one lowercase character in password. (Default = true)-- RequireNumbers- Require at least one number in password. (Default = true)-- MinimumPasswordLength- Password minimum length. (Default = 7 or longer)-- PasswordReusePrevention- Number of passwords before allowing reuse. (Default = 4)-- MaxPasswordAge- Number of days before password expiration. (Default = 90)--**Severity**: Medium --### [Root account access key shouldn't exist](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/412835f5-0339-4180-9c22-ea8735dc6c24) --**Description**: The root account is the most privileged user in an AWS account. AWS Access Keys provide programmatic access to a given AWS account. - It's recommended that all access keys associated with the root account be removed. - Removing access keys associated with the root account limits vectors by which the account can be compromised. - Additionally, removing the root access keys encourages the creation and use of role based accounts that are least privileged. --**Severity**: High --### [S3 Block Public Access setting should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ac66d910-ae29-4cab-967b-c3f0810b7642) --**Description**: Enabling Block Public Access setting for your S3 bucket can help prevent sensitive data leaks and protect your bucket from malicious actions. --**Severity**: Medium --### [S3 Block Public Access setting should be enabled at the bucket level](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/83f16376-e2dd-487d-b5ee-ba67fef4c5c0) --**Description**: This control checks whether S3 buckets have bucket-level public access blocks applied. 
This control fails if any of the following settings are set to false: --- ignorePublicAcls-- blockPublicPolicy-- blockPublicAcls-- restrictPublicBuckets-Block Public Access at the S3 bucket level provides controls to ensure that objects never have public access. Public access is granted to buckets and objects through access control lists (ACLs), bucket policies, or both. -Unless you intend to have your S3 buckets publicly accessible, you should configure the bucket level Amazon S3 Block Public Access feature. --**Severity**: High --### [S3 buckets public read access should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/f65de27c-1b77-4a2d-bc89-8631ff9ee786) --**Description**: Removing public read access to your S3 bucket can help protect your data and prevent a data breach. --**Severity**: High --### [S3 buckets public write access should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/994d14f1-b8d7-4cb3-ad4e-a7ccb08065d5) --**Description**: Allowing public write access to your S3 bucket can leave you vulnerable to malicious actions such as storing data at your expense, encrypting your files for ransom, or using your bucket to operate malware. --**Severity**: High --### [Secrets Manager secrets should have automatic rotation enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/4aa0f6dc-40be-43b2-92f1-3a52ad9d68d1) --**Description**: This control checks whether a secret stored in AWS Secrets Manager is configured with automatic rotation. -Secrets Manager helps you improve the security posture of your organization. Secrets include database credentials, passwords, and third-party API keys. You can use Secrets Manager to store secrets centrally, encrypt secrets automatically, control access to secrets, and rotate secrets safely and automatically. -Secrets Manager can rotate secrets. You can use rotation to replace long-term secrets with short-term ones. Rotating your secrets limits how long an unauthorized user can use a compromised secret. For this reason, you should rotate your secrets frequently. To learn more about rotation, see [Rotating your AWS Secrets Manager secrets](https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html) in the AWS Secrets Manager User Guide. --**Severity**: Medium --### [Stopped EC2 instances should be removed after a specified time period](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1a3340b3-8916-40fe-942d-a937e60f5d4c) --**Description**: This control checks whether any EC2 instances have been stopped for more than the allowed number of days. An EC2 instance fails this check if it's stopped for longer than the maximum allowed time period, which by default is 30 days. - A failed finding indicates that an EC2 instance has not run for a significant period of time. This creates a security risk because the EC2 instance isn't being actively maintained (analyzed, patched, updated). If it's later launched, the lack of proper maintenance could result in unexpected issues in your AWS environment. To safely maintain an EC2 instance over time in a nonrunning state, start it periodically for maintenance and then stop it after maintenance. Ideally this is an automated process. 
--**Severity**: Medium --### [AWS overprovisioned identities should have only the necessary permissions](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/427f7886-bb3c-42f6-a22c-979780b8e5ef) --**Description**: An over-provisioned active identity is an identity that has access to privileges that they haven't used. Over-provisioned active identities, especially for non-human accounts that have defined actions and responsibilities, can increase the blast radius in the event of a user, key, or resource compromise. Remove unneeded permissions and establish review processes to achieve the least privileged permissions. --**Severity**: Medium --### [Permissions of inactive identities in your AWS account should be revoked](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/71016e8c-d079-479d-942b-9c95b463e4a6) --**Description**: Microsoft Defender for Cloud discovered an identity that has not performed any action on any resource within your AWS account in the past 45 days. It is recommended to revoke permissions of inactive identities, in order to reduce the attack surface of your cloud environment. --**Severity**: Medium --## AWS Networking recommendations --### [Amazon EC2 should be configured to use VPC endpoints](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e700ddd4-bb55-4602-b93a-d75895fbf7c6) --**Description**: This control checks whether a service endpoint for Amazon EC2 is created for each VPC. The control fails if a VPC doesn't have a VPC endpoint created for the Amazon EC2 service. - To improve the security posture of your VPC, you can configure Amazon EC2 to use an interface VPC endpoint. Interface endpoints are powered by AWS PrivateLink, a technology that enables you to access Amazon EC2 API operations privately. It restricts all network traffic between your VPC and Amazon EC2 to the Amazon network. Because endpoints are supported within the same Region only, you can't create an endpoint between a VPC and a service in a different Region. This prevents unintended Amazon EC2 API calls to other Regions. -To learn more about creating VPC endpoints for Amazon EC2, see [Amazon EC2 and interface VPC endpoints](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/interface-vpc-endpoints.html) in the Amazon EC2 User Guide for Linux Instances. --**Severity**: Medium --### [Amazon ECS services should not have public IP addresses assigned to them automatically](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/9bb205cd-a931-4f77-a620-0a263479732b) --**Description**: A public IP address is an IP address that is reachable from the internet. - If you launch your Amazon ECS instances with a public IP address, then your Amazon ECS instances are reachable from the internet. - Amazon ECS services shouldn't be publicly accessible, as this might allow unintended access to your container application servers. --**Severity**: High --### [Amazon EMR cluster master nodes should not have public IP addresses](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/fe770214-7b47-48f7-a78c-1279c35d8279) --**Description**: This control checks whether master nodes on Amazon EMR clusters have public IP addresses. -The control fails if the master node has public IP addresses that are associated with any of its instances. 
Public IP addresses are designated in the PublicIp field of the NetworkInterfaces configuration for the instance. - This control only checks Amazon EMR clusters that are in a RUNNING or WAITING state. --**Severity**: High --### [Amazon Redshift clusters should use enhanced VPC routing](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1ee72ceb-2cb7-4686-84e6-0e1ac1c27241) --**Description**: This control checks whether an Amazon Redshift cluster has EnhancedVpcRouting enabled. -Enhanced VPC routing forces all COPY and UNLOAD traffic between the cluster and data repositories to go through your VPC. You can then use VPC features such as security groups and network access control lists to secure network traffic. You can also use VPC Flow Logs to monitor network traffic. --**Severity**: High --### [Application Load Balancer should be configured to redirect all HTTP requests to HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/fce0daac-96e4-47ab-ab35-18ac6b7dcc70) --**Description**: To enforce encryption in transit, you should use redirect actions with Application Load Balancers to redirect client HTTP requests to an HTTPS request on port 443. --**Severity**: Medium --### [Application load balancers should be configured to drop HTTP headers](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ca924610-5a8e-4c5e-9f17-8dff1ab1757b) --**Description**: This control evaluates AWS Application Load Balancers (ALB) to ensure they're configured to drop invalid HTTP headers. The control fails if the value of routing.http.drop_invalid_header_fields.enabled is set to false. -By default, ALBs aren't configured to drop invalid HTTP header values. Removing these header values prevents HTTP desync attacks. --**Severity**: Medium --### [Configure Lambda functions to a VPC](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/10445918-c305-4c6a-9851-250e8ec7b872) --**Description**: This control checks whether a Lambda function is in a VPC. It doesn't evaluate the VPC subnet routing configuration to determine public reachability. - Note that if Lambda@Edge is found in the account, then this control generates failed findings. To prevent these findings, you can disable this control. --**Severity**: Low --### [EC2 instances should not have a public IP address](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/63afb20c-4e8e-42ad-bc6d-dc48d4bebc5f) --**Description**: This control checks whether EC2 instances have a public IP address. The control fails if the "publicIp" field is present in the EC2 instance configuration item. This control applies to IPv4 addresses only. - A public IPv4 address is an IP address that is reachable from the internet. If you launch your instance with a public IP address, then your EC2 instance is reachable from the internet. A private IPv4 address is an IP address that isn't reachable from the internet. You can use private IPv4 addresses for communication between EC2 instances in the same VPC or in your connected private network. -IPv6 addresses are globally unique, and therefore are reachable from the internet. However, by default all subnets have the IPv6 addressing attribute set to false. For more information about IPv6, see [IP addressing in your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-ip-addressing.html) in the Amazon VPC User Guide. 
-If you have a legitimate use case to maintain EC2 instances with public IP addresses, then you can suppress the findings from this control. For more information about front-end architecture options, see the [AWS Architecture Blog](https://aws.amazon.com/blogs/architecture/) or the [This Is My Architecture series](https://aws.amazon.com/blogs/architecture/). --**Severity**: High --### [EC2 instances should not use multiple ENIs](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/fead4128-7325-4b82-beda-3fd42de36920) --**Description**: This control checks whether an EC2 instance uses multiple Elastic Network Interfaces (ENIs) or Elastic Fabric Adapters (EFAs). This control passes if a single network adapter is used. The control includes an optional parameter list to identify the allowed ENIs. -Multiple ENIs can cause dual-homed instances, meaning instances that have multiple subnets. This can add network security complexity and introduce unintended network paths and access. --**Severity**: Low --### [EC2 instances should use IMDSv2](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/5ea3248a-8af5-4df3-8e08-f7d1925ea147) --**Description**: This control checks whether your EC2 instance metadata version is configured with Instance Metadata Service Version 2 (IMDSv2). The control passes if "HttpTokens" is set to "required" for IMDSv2. The control fails if "HttpTokens" is set to "optional". -You use instance metadata to configure or manage the running instance. The IMDS provides access to temporary, frequently rotated credentials. These credentials remove the need to hard code or distribute sensitive credentials to instances manually or programmatically. The IMDS is attached locally to every EC2 instance. It runs on a special 'link local' IP address of 169.254.169.254. This IP address is only accessible by software that runs on the instance. -Version 2 of the IMDS adds new protections for the following types of vulnerabilities. These vulnerabilities could be used to try to access the IMDS. --- Open website application firewalls-- Open reverse proxies-- Server-side request forgery (SSRF) vulnerabilities-- Open Layer 3 firewalls and network address translation (NAT)-Security Hub recommends that you configure your EC2 instances with IMDSv2. --**Severity**: High --### [EC2 subnets should not automatically assign public IP addresses](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ace790eb-39b9-4b4f-b53d-26d0f77d4ab8) --**Description**: This control checks whether the assignment of public IPs in Amazon Virtual Private Cloud (Amazon VPC) subnets have "MapPublicIpOnLaunch" set to "FALSE". The control passes if the flag is set to "FALSE". - All subnets have an attribute that determines whether a network interface created in the subnet automatically receives a public IPv4 address. Instances that are launched into subnets that have this attribute enabled have a public IP address assigned to their primary network interface. --**Severity**: Medium --### [Ensure a log metric filter and alarm exist for AWS Config configuration changes](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/965a7c7f-e6da-4062-83f4-9c1800e51e44) --**Description**: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. 
- It's recommended that a metric filter and alarm be established for detecting changes to AWS Config's configuration. -Monitoring changes to AWS Config configuration helps ensure sustained visibility of configuration items within the AWS account. --**Severity**: Low --### [Ensure a log metric filter and alarm exist for AWS Management Console authentication failures](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/0e09bb35-54a3-48a1-855d-9fd3239deaf7) --**Description**: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. - It's recommended that a metric filter and alarm be established for failed console authentication attempts. - Monitoring failed console logins might decrease lead time to detect an attempt to brute force a credential, which might provide an indicator, such as source IP, that can be used in other event correlation. --**Severity**: Low --### [Ensure a log metric filter and alarm exist for changes to Network Access Control Lists (NACL)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ec356185-75b9-4ff2-a284-9f64fc885e72) --**Description**: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. NACLs are used as a stateless packet filter to control ingress and egress traffic for subnets within a VPC. - It is recommended that a metric filter and alarm be established for changes made to NACLs. -Monitoring changes to NACLs helps ensure that AWS resources and services aren't unintentionally exposed. --**Severity**: Low --### [Ensure a log metric filter and alarm exist for changes to network gateways](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c7156050-6f51-4d3f-a880-9f2363648cfb) --**Description**: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. Network gateways are required to send/receive traffic to a destination outside of a VPC. - It's recommended that a metric filter and alarm be established for changes to network gateways. -Monitoring changes to network gateways helps ensure that all ingress/egress traffic traverses the VPC border via a controlled path. --**Severity**: Low --### [Ensure a log metric filter and alarm exist for CloudTrail configuration changes](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/0dc3b824-092a-4fc6-b8b4-31d5c2403024) --**Description**: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. - It's recommended that a metric filter and alarm be established for detecting changes to CloudTrail's configurations. -- Monitoring changes to CloudTrail's configuration helps ensure sustained visibility into activities performed in the AWS account. --**Severity**: Low --### [Ensure a log metric filter and alarm exist for disabling or scheduled deletion of customer created CMKs](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/d12e97c1-1f3e-4c69-8cc1-6e4cc6a9b167) --**Description**: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms.
- It's recommended that a metric filter and alarm be established for customer created CMKs that have changed state to disabled or scheduled deletion. - Data encrypted with disabled or deleted keys will no longer be accessible. --**Severity**: Low --### [Ensure a log metric filter and alarm exist for IAM policy changes](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/8e5ad1a9-3803-4399-baf2-a7eb9483b954) --**Description**: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. - It's recommended that a metric filter and alarm be established for changes made to Identity and Access Management (IAM) policies. - Monitoring changes to IAM policies helps ensure authentication and authorization controls remain intact. --**Severity**: Low --### [Ensure a log metric filter and alarm exist for Management Console sign-in without MFA](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/001ddfe0-1b98-443f-819d-99f060fd67d5) --**Description**: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. - It's recommended that a metric filter and alarm be established for console logins that aren't protected by multifactor authentication (MFA). -Monitoring for single-factor console logins increases visibility into accounts that aren't protected by MFA. --**Severity**: Low --### [Ensure a log metric filter and alarm exist for route table changes](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/7e70666f-4bec-4ca0-8b59-c6c8b9b3cc1e) --**Description**: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. Routing tables are used to route network traffic between subnets and to network gateways. - It's recommended that a metric filter and alarm be established for changes to route tables. -Monitoring changes to route tables helps ensure that all VPC traffic flows through an expected path. --**Severity**: Low --### [Ensure a log metric filter and alarm exist for S3 bucket policy changes](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/69ed2dc0-6f39-4a33-a747-20a28f85b33c) --**Description**: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. - It is recommended that a metric filter and alarm be established for changes to S3 bucket policies. -Monitoring changes to S3 bucket policies might reduce time to detect and correct permissive policies on sensitive S3 buckets. --**Severity**: Low --### [Ensure a log metric filter and alarm exist for security group changes](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/aedabb63-8bdb-47f9-955c-72b652a75e2a) --**Description**: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. Security Groups are a stateful packet filter that controls ingress and egress traffic within a VPC. - It's recommended that a metric filter and alarm be established for changes to Security Groups.
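A CloudFormation-style sketch of such a metric filter is shown below; the log group name, metric namespace, and metric name are placeholders, and the filter pattern follows the commonly used CIS-style pattern for security group API calls. The same structure, with a different filter pattern, applies to the other metric filter recommendations in this list:

```json
{
  "Type": "AWS::Logs::MetricFilter",
  "Properties": {
    "LogGroupName": "example-cloudtrail-log-group",
    "FilterPattern": "{ ($.eventName = AuthorizeSecurityGroupIngress) || ($.eventName = AuthorizeSecurityGroupEgress) || ($.eventName = RevokeSecurityGroupIngress) || ($.eventName = RevokeSecurityGroupEgress) || ($.eventName = CreateSecurityGroup) || ($.eventName = DeleteSecurityGroup) }",
    "MetricTransformations": [
      {
        "MetricNamespace": "CISBenchmark",
        "MetricName": "SecurityGroupChanges",
        "MetricValue": "1"
      }
    ]
  }
}
```

A CloudWatch alarm on the resulting metric then completes the recommendation.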
-Monitoring changes to security groups helps ensure that resources and services aren't unintentionally exposed. --**Severity**: Low --### [Ensure a log metric filter and alarm exist for unauthorized API calls](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/231951ea-e9db-41cd-a7d0-611105fa4fb9) --**Description**: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. - It's recommended that a metric filter and alarm be established for unauthorized API calls. - Monitoring unauthorized API calls helps reveal application errors and might reduce time to detect malicious activity. --**Severity**: Low --### [Ensure a log metric filter and alarm exist for usage of 'root' account](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/59f84fbd-7946-41b3-88b1-d899dcac92bc) --**Description**: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. - It's recommended that a metric filter and alarm be established for root login attempts. -- Monitoring for root account logins provides visibility into the use of a fully privileged account and an opportunity to reduce the use of it. --**Severity**: Low --### [Ensure a log metric filter and alarm exist for VPC changes](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/4b4bfa9b-fd2a-43f1-961f-654b9d5c9a60) --**Description**: Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. - It's possible to have more than one VPC within an account. In addition, it's also possible to create a peering connection between two VPCs, enabling network traffic to route between them. It's recommended that a metric filter and alarm be established for changes made to VPCs. -Monitoring changes to VPCs helps ensure that VPC resources and services aren't unintentionally exposed. --**Severity**: Low --### [Ensure no security groups allow ingress from 0.0.0.0/0 to port 3389](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/79082bbe-34fc-480a-a7fc-3aad94954609) --**Description**: Security groups provide stateful filtering of ingress/egress network traffic to AWS resources. It's recommended that no security group allows unrestricted ingress access to port 3389. - Removing unfettered connectivity to remote console services, such as RDP, reduces a server's exposure to risk. --**Severity**: High --### [RDS databases and clusters should not use a database engine default port](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/f1736090-65fc-454f-a437-af58fd91ad1e) --**Description**: This control checks whether the RDS cluster or instance uses a port other than the default port of the database engine. -If you use a known port to deploy an RDS cluster or instance, an attacker can guess information about the cluster or instance. - The attacker can use this information in conjunction with other information to connect to an RDS cluster or instance or gain additional information about your application. -When you change the port, you must also update the existing connection strings that were used to connect to the old port.
- You should also check the security group of the DB instance to ensure that it includes an ingress rule that allows connectivity on the new port. --**Severity**: Low --### [RDS instances should be deployed in a VPC](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/9a84b879-8aab-4b82-80f2-22e637a26813) --**Description**: VPCs provide a number of network controls to secure access to RDS resources. - These controls include VPC Endpoints, network ACLs, and security groups. - To take advantage of these controls, we recommend that you move EC2-Classic RDS instances to EC2-VPC. --**Severity**: Low --### [S3 buckets should require requests to use Secure Socket Layer](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1fb7ea50-412e-4dd4-ac79-94d54bd8f21e) --**Description**: We recommend requiring requests to use Secure Socket Layer (SSL) on all Amazon S3 buckets. - S3 buckets should have policies that require all requests ('Action: S3:*') to only accept transmission of data over HTTPS in the S3 resource policy, indicated by the condition key 'aws:SecureTransport'. --**Severity**: Medium --### [Security groups should not allow ingress from 0.0.0.0/0 to port 22](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e1f4bba6-5f43-4dc5-ab15-f2a9f5807fea) --**Description**: To reduce the server's exposure, it's recommended not to allow unrestricted ingress access to port '22'. --**Severity**: High --### [Security groups should not allow unrestricted access to ports with high risk](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/194fd099-90fa-43e1-8d06-6b4f5138e952) --**Description**: This control checks whether security groups allow unrestricted incoming traffic to the specified high-risk ports. This control passes when none of the rules in a security group allow ingress traffic from 0.0.0.0/0 for those ports. -Unrestricted access (0.0.0.0/0) increases opportunities for malicious activity, such as hacking, denial-of-service attacks, and loss of data. -Security groups provide stateful filtering of ingress and egress network traffic to AWS resources. No security group should allow unrestricted ingress access to the following ports: --- 3389 (RDP)-- 20, 21 (FTP)-- 22 (SSH)-- 23 (Telnet)-- 110 (POP3)-- 143 (IMAP)-- 3306 (MySQL)-- 8080 (proxy)-- 1433, 1434 (MSSQL)-- 9200 or 9300 (Elasticsearch)-- 5601 (Kibana)-- 25 (SMTP)-- 445 (CIFS)-- 135 (RPC)-- 4333 (ahsp)-- 5432 (postgresql)-- 5500 (fcp-addr-srvr1)--**Severity**: Medium --### [Security groups should only allow unrestricted incoming traffic for authorized ports](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/8b328664-f3f1-45ab-976d-f6c66647b3b8) --**Description**: This control checks whether the security groups that are in use allow unrestricted incoming traffic. Optionally, the rule checks whether the port numbers are listed in the "authorizedTcpPorts" parameter. --- If the security group rule port number allows unrestricted incoming traffic, but the port number is specified in "authorizedTcpPorts", then the control passes.
The default value for "authorizedTcpPorts" is **80, 443**.-- If the security group rule port number allows unrestricted incoming traffic, but the port number isn't specified in the "authorizedTcpPorts" input parameter, then the control fails.-- If the parameter isn't used, then the control fails for any security group that has an unrestricted inbound rule.-Security groups provide stateful filtering of ingress and egress network traffic to AWS resources. Security group rules should follow the principle of least privilege. Unrestricted access (IP address with a /0 suffix) increases the opportunity for malicious activity such as hacking, denial-of-service attacks, and loss of data. -Unless a port is specifically allowed, the port should deny unrestricted access. --**Severity**: High --### [Unused EC2 EIPs should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/601406b5-110c-41be-ad69-9c5661ba5f7c) --**Description**: Elastic IP addresses that are allocated to a VPC should be attached to Amazon EC2 instances or in-use elastic network interfaces (ENIs). --**Severity**: Low --### [Unused network access control lists should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/5f9a7d87-ec2e-409a-991a-48c29484d6b5) --**Description**: This control checks whether there are any unused network access control lists (ACLs). - The control checks the item configuration of the resource "AWS::EC2::NetworkAcl" and determines the relationships of the network ACL. - If the only relationship is the VPC of the network ACL, then the control fails. -If other relationships are listed, then the control passes. --**Severity**: Low --### [VPC's default security group should restrict all traffic](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/500c4d2e-9baf-4081-b8a8-936ac85771a5) --**Description**: The default security group should restrict all traffic to reduce resource exposure. --**Severity**: Low --## AI recommendations --### [AWS Bedrock should have model invocation logging enabled](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/Recommendation.ReactView/assessedResourceId/%2Fsubscriptions%2Fd1d8779d-38d7-4f06-91db-9cbc8de0176f%2Fresourcegroups%2Fsoc-asc%2Fproviders%2Fmicrosoft.security%2Fsecurityconnectors%2Fawsdspm%2Fsecurityentitydata%2Faws-account-in-region-323104580785-us-west-2%2Fproviders%2Fmicrosoft.security%2Fassessments%2F1a202dce-e13f-43ba-8a97-2f9235c5c834/recommendationDisplayName/AWS%20Bedrock%20should%20have%20model%20invocation%20logging%20enabled) --**Description:** With invocation logging, you can collect the full request data, response data, and metadata associated with all calls performed in your account. This enables you to recreate activity trails for investigation purposes when a security incident occurs. --**Severity:** Low --## Related content --- [Connect your AWS accounts to Microsoft Defender for Cloud](quickstart-onboard-aws.md)-- [What are security policies, initiatives, and recommendations?](security-policy-concept.md)-- [Review your security recommendations](review-security-recommendations.md) |
defender-for-cloud | Recommendations Reference Compute | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference-compute.md | + + Title: Reference table for all compute security recommendations in Microsoft Defender for Cloud +description: This article lists all Microsoft Defender for Cloud compute security recommendations that help you harden and protect your resources. +++ Last updated : 03/13/2024+++ai-usage: ai-assisted +++# Compute security recommendations ++This article lists all the multicloud compute security recommendations you might see in Microsoft Defender for Cloud. ++The recommendations that appear in your environment are based on the resources that you're protecting and on your customized configuration. ++To learn about actions that you can take in response to these recommendations, see [Remediate recommendations in Defender for Cloud](implement-security-recommendations.md). ++++> [!TIP] +> If a recommendation description says *No related policy*, usually it's because that recommendation is dependent on a different recommendation. +> +> For example, the recommendation *Endpoint protection health failures should be remediated* relies on the recommendation that checks whether an endpoint protection solution is installed (*Endpoint protection solution should be installed*). The underlying recommendation *does* have a policy. +> Limiting policies to only foundational recommendations simplifies policy management. +++## Azure compute recommendations ++### [Adaptive application controls for defining safe applications should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/35f45c95-27cf-4e52-891f-8390d1de5828) ++**Description**: Enable application controls to define the list of known-safe applications running on your machines, and alert you when other applications run. This helps harden your machines against malware. To simplify the process of configuring and maintaining your rules, Defender for Cloud uses machine learning to analyze the applications running on each machine and suggest the list of known-safe applications. +(Related policy: [Adaptive application controls for defining safe applications should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f47a6b606-51aa-4496-8bb7-64b11cf66adc)). ++**Severity**: High ++### [Allowlist rules in your adaptive application control policy should be updated](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1234abcd-1b53-4fd4-9835-2c2fa3935313) ++**Description**: Monitor for changes in behavior on groups of machines configured for auditing by Defender for Cloud's adaptive application controls. Defender for Cloud uses machine learning to analyze the running processes on your machines and suggest a list of known-safe applications. These are presented as recommended apps to allow in adaptive application control policies. +(Related policy: [Allowlist rules in your adaptive application control policy should be updated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f123a3936-f020-408a-ba0c-47873faf1534)). 
++**Severity**: High ++### [Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/22441184-2f7b-d4a0-e00b-4c5eaef4afc9) ++**Description**: Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more in [Detailed steps: Create and manage SSH keys for authentication to a Linux VM in Azure](../virtual-machines/linux/create-ssh-keys-detailed.md). +(Related policy: [Audit Linux machines that are not using SSH key for authentication](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f630c64f9-8b6b-4c64-b511-6544ceff6fd6)). ++**Severity**: Medium ++### [Automation account variables should be encrypted](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/b12bc79e-4f12-44db-acda-571820191ddc) ++**Description**: It is important to enable encryption of Automation account variable assets when storing sensitive data. +(Related policy: [Automation account variables should be encrypted](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicydefinitions%2f3657f5a0-770e-44a3-b44e-9431ba1e9735)). ++**Severity**: High ++### [Azure Backup should be enabled for virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/f2f595ec-5dc6-68b4-82ef-b63563e9c610) ++**Description**: Protect the data on your Azure virtual machines with Azure Backup. +Azure Backup is an Azure-native, cost-effective, data protection solution. +It creates recovery points that are stored in geo-redundant recovery vaults. +When you restore from a recovery point, you can restore the whole VM or specific files. +(Related policy: [Azure Backup should be enabled for Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f013e242c-8828-4970-87b3-ab247555486d)). ++**Severity**: Low ++### [(Preview) Azure Stack HCI servers should meet Secured-core requirements](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f56c47221-b8b7-446e-9ab7-c7c9dc07f0ad) ++**Description**: Ensure that all Azure Stack HCI servers meet the Secured-core requirements. (Related policy: [Guest Configuration extension should be installed on machines - Microsoft Azure](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/6c99f570-2ce7-46bc-8175-cde013df43bc)). ++**Severity**: Low ++### [(Preview) Azure Stack HCI servers should have consistently enforced application control policies](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f7384fde3-11b0-4047-acbd-b3cf3cc8ce07) ++**Description**: At a minimum, apply the Microsoft WDAC base policy in enforced mode on all Azure Stack HCI servers. Applied Windows Defender Application Control (WDAC) policies must be consistent across servers in the same cluster. 
(Related policy: [Guest Configuration extension should be installed on machines - Microsoft Azure](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/6c99f570-2ce7-46bc-8175-cde013df43bc)). ++**Severity**: High ++### [(Preview) Azure Stack HCI systems should have encrypted volumes](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fae95f12a-b6fd-42e0-805c-6b94b86c9830) ++**Description**: Use BitLocker to encrypt the OS and data volumes on Azure Stack HCI systems. (Related policy: [Guest Configuration extension should be installed on machines - Microsoft Azure](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/6c99f570-2ce7-46bc-8175-cde013df43bc)). ++**Severity**: High ++### [Container hosts should be configured securely](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/0677209d-e675-2c6f-e91a-54cef2878663) ++**Description**: Remediate vulnerabilities in security configuration settings on machines with Docker installed to protect them from attacks. +(Related policy: [Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fe8cbc669-f12d-49eb-93e7-9273119e9933)). ++**Severity**: High ++### [Diagnostic logs in Azure Stream Analytics should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/f11b27f2-8c49-5bb4-eff5-e1e5384bf95e) ++**Description**: Enable logs and retain them for up to a year. This enables you to recreate activity trails for investigation purposes when a security incident occurs or your network is compromised. +(Related policy: [Diagnostic logs in Azure Stream Analytics should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2ff9be5368-9bf5-4b84-9e0a-7850da98bb46)). ++**Severity**: Low ++### [Diagnostic logs in Batch accounts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/32771b45-220c-1a8b-584e-fdd5a2584a66) ++**Description**: Enable logs and retain them for up to a year. This enables you to recreate activity trails for investigation purposes when a security incident occurs or your network is compromised. +(Related policy: [Diagnostic logs in Batch accounts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f428256e6-1fac-4f48-a757-df34c2b3336d)). ++**Severity**: Low ++### [Diagnostic logs in Event Hubs should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1597605a-0faf-5860-eb74-462ae2e9fc21) ++**Description**: Enable logs and retain them for up to a year. This enables you to recreate activity trails for investigation purposes when a security incident occurs or your network is compromised. +(Related policy: [Diagnostic logs in Event Hubs should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f83a214f7-d01a-484b-91a9-ed54470c9a6a)). 
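The diagnostic log recommendations in this group are all remediated the same way: create a diagnostic setting on the resource that routes its logs to a Log Analytics workspace, storage account, or event hub. Below is a minimal sketch with the `azure-mgmt-monitor` package; the resource and workspace IDs are placeholders, and log category names vary by resource type and API version.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

subscription_id = "<subscription-id>"  # placeholder
resource_id = (  # placeholder: the Event Hubs namespace (or other resource) to configure
    "/subscriptions/<subscription-id>/resourceGroups/<rg>"
    "/providers/Microsoft.EventHub/namespaces/<namespace>"
)
workspace_id = (  # placeholder: destination Log Analytics workspace
    "/subscriptions/<subscription-id>/resourceGroups/<rg>"
    "/providers/Microsoft.OperationalInsights/workspaces/<workspace>"
)

monitor = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

# Route all resource logs and metrics to the workspace so activity trails can be
# reconstructed during an investigation.
monitor.diagnostic_settings.create_or_update(
    resource_uri=resource_id,
    name="send-to-log-analytics",
    parameters={
        "workspace_id": workspace_id,
        "logs": [{"category_group": "allLogs", "enabled": True}],
        "metrics": [{"category": "AllMetrics", "enabled": True}],
    },
)
```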
++**Severity**: Low ++### [Diagnostic logs in Logic Apps should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/91387f44-7e43-4ecc-55f0-46f5adee3dd5) ++**Description**: To ensure you can recreate activity trails for investigation purposes when a security incident occurs or your network is compromised, enable logging. If your diagnostic logs aren't being sent to a Log Analytics workspace, Azure Storage account, or Azure Event Hubs, ensure you've configured diagnostic settings to send platform metrics and platform logs to the relevant destinations. Learn more in Create diagnostic settings to send platform logs and metrics to different destinations. +(Related policy: [Diagnostic logs in Logic Apps should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f34f95f76-5386-4de7-b824-0d8478470c9d)). ++**Severity**: Low ++### [Diagnostic logs in Service Bus should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/f19ab7d9-5ff2-f8fd-ab3b-0bf95dcb6889) ++**Description**: Enable logs and retain them for up to a year. This enables you to recreate activity trails for investigation purposes when a security incident occurs or your network is compromised. +(Related policy: [Diagnostic logs in Service Bus should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2ff8d36e2f-389b-4ee4-898d-21aeb69a0f45)). ++**Severity**: Low ++### [Diagnostic logs in Virtual Machine Scale Sets should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/961eb649-3ea9-f8c2-6595-88e9a3aeedeb) ++**Description**: Enable logs and retain them for up to a year. This enables you to recreate activity trails for investigation purposes when a security incident occurs or your network is compromised. +(Related policy: [Diagnostic logs in Virtual Machine Scale Sets should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f7c1b1214-f927-48bf-8882-84f0af6588b1)). ++**Severity**: High ++### [EDR configuration issues should be resolved on virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dc5357d0-3858-4d17-a1a3-072840bff5be) ++**Description**: To protect virtual machines from the latest threats and vulnerabilities, resolve all identified configuration issues with the installed Endpoint Detection and Response (EDR) solution. Currently, this recommendation only applies to resources with Microsoft Defender for Endpoint enabled. ++This agentless endpoint recommendation is available if you have Defender for Servers Plan 2 or the Defender CSPM plan. [Learn more](endpoint-detection-response.md) about agentless endpoint protection recommendations. ++- These new agentless endpoint recommendations support Azure and multicloud machines. On-premises servers aren't supported. 
+- These new agentless endpoint recommendations replace existing recommendations [Endpoint protection should be installed on your machines (preview)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/4fb67663-9ab9-475d-b026-8c544cced439) and [Endpoint protection health issues should be resolved on your machines (preview)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/37a3689a-818e-4a0e-82ac-b1392b9bb000). +- These older recommendations use the MMA/AMA agent and will be replaced as the agents are [phased out in Defender for Servers](https://techcommunity.microsoft.com/t5/user/ssoregistrationpage?dest_url=https:%2F%2Ftechcommunity.microsoft.com%2Ft5%2Fblogs%2Fblogworkflowpage%2Fblog-id%2FMicrosoftDefenderCloudBlog%2Farticle-id%2F1269). ++**Severity**: Low +++### [EDR solution should be installed on Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/06e3a6db-6c0c-4ad9-943f-31d9d73ecf6c) ++**Description**: Installing an Endpoint Detection and Response (EDR) solution on virtual machines is important for protection against advanced threats. EDRs aid in preventing, detecting, investigating, and responding to these threats. Microsoft Defender for Servers can be used to deploy Microsoft Defender for Endpoint. +- If a resource is classified as "Unhealthy", it indicates the absence of a supported EDR solution. +- If an EDR solution is installed but not discoverable by this recommendation, it can be exempted +- Without an EDR solution, the virtual machines are at risk of advanced threats. ++This agentless endpoint recommendation is available if you have Defender for Servers Plan 2 or the Defender CSPM plan. [Learn more](endpoint-detection-response.md) about agentless endpoint protection recommendations. ++- These new agentless endpoint recommendations support Azure and multicloud machines. On-premises servers aren't supported. +- These new agentless endpoint recommendations replace existing recommendations [Endpoint protection should be installed on your machines (preview)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/4fb67663-9ab9-475d-b026-8c544cced439) and [Endpoint protection health issues should be resolved on your machines (preview)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/37a3689a-818e-4a0e-82ac-b1392b9bb000). +- These older recommendations use the MMA/AMA agent and will be replaced as the agents are [phased out in Defender for Servers](https://techcommunity.microsoft.com/t5/user/ssoregistrationpage?dest_url=https:%2F%2Ftechcommunity.microsoft.com%2Ft5%2Fblogs%2Fblogworkflowpage%2Fblog-id%2FMicrosoftDefenderCloudBlog%2Farticle-id%2F1269). ++**Severity**: High ++### [Endpoint protection health issues on virtual machine scale sets should be resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e71020c2-860c-3235-cd39-04f3f8c936d2) ++**Description**: On virtual machine scale sets, remediate endpoint protection health failures to protect them from threats and vulnerabilities. 
+(Related policy: [Endpoint protection solution should be installed on virtual machine scale sets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f26a828e1-e88f-464e-bbb3-c134a282b9de)). ++**Severity**: Low ++### [Endpoint protection should be installed on virtual machine scale sets](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/21300918-b2e3-0346-785f-c77ff57d243b) ++**Description**: Install an endpoint protection solution on your virtual machine scale sets, to protect them from threats and vulnerabilities. +(Related policy: [Endpoint protection solution should be installed on virtual machine scale sets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f26a828e1-e88f-464e-bbb3-c134a282b9de)). ++**Severity**: High ++### [File integrity monitoring should be enabled on machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/9b7d740f-c271-4bfd-88fb-515680c33440) ++**Description**: Defender for Cloud has identified machines that are missing a file integrity monitoring solution. To monitor changes to critical files, registry keys, and more on your servers, enable file integrity monitoring. +When the file integrity monitoring solution is enabled, create data collection rules to define the files to be monitored. To define rules, or see the files changed on machines with existing rules, go to the [file integrity monitoring management page](https://aka.ms/FimMMA). +(No related policy) ++**Severity**: High ++### [Guest Attestation extension should be installed on supported Linux virtual machine scale sets](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/a9a53f4f-26b6-3d68-33f3-2ec1f2452b5d) ++**Description**: Install Guest Attestation extension on supported Linux virtual machine scale sets to allow Microsoft Defender for Cloud to proactively attest and monitor the boot integrity. Once installed, boot integrity will be attested via Remote Attestation. This assessment only applies to trusted launch enabled Linux virtual machine scale sets. ++- Trusted launch requires the creation of new virtual machines. +- You can't enable trusted launch on existing virtual machines that were initially created without it. ++Learn more about [Trusted launch for Azure virtual machines](../virtual-machines/trusted-launch.md). +(No related policy) ++**Severity**: Low ++### [Guest Attestation extension should be installed on supported Linux virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e94a7421-fc27-7a4d-e9ba-2ba01384cacd) ++**Description**: Install Guest Attestation extension on supported Linux virtual machines to allow Microsoft Defender for Cloud to proactively attest and monitor the boot integrity. Once installed, boot integrity will be attested via Remote Attestation. This assessment only applies to trusted launch enabled Linux virtual machines. ++- Trusted launch requires the creation of new virtual machines. +- You can't enable trusted launch on existing virtual machines that were initially created without it. ++Learn more about [Trusted launch for Azure virtual machines](../virtual-machines/trusted-launch.md).
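As an illustration only, the following sketch adds a guest attestation extension to a single trusted launch Linux VM with the `azure-mgmt-compute` package. The publisher and extension type values shown are assumptions about the Linux Guest Attestation extension and should be verified against the extension documentation; resource names are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "<subscription-id>"   # placeholder
resource_group = "<resource-group>"     # placeholder
vm_name = "<trusted-launch-linux-vm>"   # placeholder

compute = ComputeManagementClient(DefaultAzureCredential(), subscription_id)
vm = compute.virtual_machines.get(resource_group, vm_name)

# Only trusted launch VMs support guest attestation.
if vm.security_profile and vm.security_profile.security_type == "TrustedLaunch":
    poller = compute.virtual_machine_extensions.begin_create_or_update(
        resource_group,
        vm_name,
        "GuestAttestation",
        {
            "location": vm.location,
            "publisher": "Microsoft.Azure.Security.LinuxAttestation",  # assumed publisher
            "type_properties_type": "GuestAttestation",                # assumed extension type
            "type_handler_version": "1.0",
            "auto_upgrade_minor_version": True,
        },
    )
    print("Extension provisioning state:", poller.result().provisioning_state)
else:
    print(f"{vm_name} is not a trusted launch VM; trusted launch must be enabled at creation time.")
```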
+(No related policy) ++**Severity**: Low ++### [Guest Attestation extension should be installed on supported Windows virtual machine scale sets](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/02e8ca50-0e7e-cc34-0b91-215af2904248) ++**Description**: Install Guest Attestation extension on supported virtual machine scale sets to allow Microsoft Defender for Cloud to proactively attest and monitor the boot integrity. Once installed, boot integrity will be attested via Remote Attestation. This assessment only applies to trusted launch enabled virtual machine scale sets. ++- Trusted launch requires the creation of new virtual machines. +- You can't enable trusted launch on existing virtual machines that were initially created without it. ++Learn more about [Trusted launch for Azure virtual machines](../virtual-machines/trusted-launch.md). +(No related policy) ++**Severity**: Low ++### [Guest Attestation extension should be installed on supported Windows virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/874b14bd-b49e-495a-88c6-46acb89b0a33) ++**Description**: Install Guest Attestation extension on supported virtual machines to allow Microsoft Defender for Cloud to proactively attest and monitor the boot integrity. Once installed, boot integrity will be attested via Remote Attestation. This assessment only applies to trusted launch enabled virtual machines. ++- Trusted launch requires the creation of new virtual machines. +- You can't enable trusted launch on existing virtual machines that were initially created without it. ++Learn more about [Trusted launch for Azure virtual machines](../virtual-machines/trusted-launch.md). +(No related policy) ++**Severity**: Low ++### [Guest Configuration extension should be installed on machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/6c99f570-2ce7-46bc-8175-cde013df43bc) ++**Description**: To ensure secure configurations of in-guest settings of your machine, install the Guest Configuration extension. In-guest settings that the extension monitors include the configuration of the operating system, application configuration or presence, and environment settings. Once installed, in-guest policies will be available such as [Windows Exploit guard should be enabled](https://aka.ms/gcpol). +(Related policy: [Virtual machines should have the Guest Configuration extension](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicyDefinitions%2fae89ebca-1c92-4898-ac2c-9f63decb045c)). ++**Severity**: Medium ++### [(Preview) Host and VM networking should be protected on Azure Stack HCI systems](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2faee306e7-80b0-46f3-814c-d3d3083ed034) ++**Description**: Protect data on the Azure Stack HCI host's network and on virtual machine network connections. (Related policy: [Guest Configuration extension should be installed on machines - Microsoft Azure](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/6c99f570-2ce7-46bc-8175-cde013df43bc)). 
++**Severity**: Low ++### [Install endpoint protection solution on virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/83f577bd-a1b6-b7e1-0891-12ca19d1e6df) ++**Description**: Install an endpoint protection solution on your virtual machines, to protect them from threats and vulnerabilities. +(Related policy: [Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2faf6cd1bd-1635-48cb-bde7-5b15693900b9)). ++**Severity**: High ++### [Linux virtual machines should enable Azure Disk Encryption or EncryptionAtHost](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/a40cc620-e72c-fdf4-c554-c6ca2cd705c0) ++**Description**: By default, a virtual machine's OS and data disks are encrypted-at-rest using platform-managed keys; temp disks and data caches aren't encrypted, and data isn't encrypted when flowing between compute and storage resources. Use Azure Disk Encryption or EncryptionAtHost to encrypt all this data. Visit [https://aka.ms/diskencryptioncomparison](https://aka.ms/diskencryptioncomparison) to compare encryption offerings. This policy requires two prerequisites to be deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). +(Related policy: [[Preview]: Linux virtual machines should enable Azure Disk Encryption or EncryptionAtHost](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicyDefinitions%2fca88aadc-6e2b-416c-9de2-5a0f01d1693f)). ++Replaces the older recommendation *Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources*. The recommendation enables you to audit VM encryption compliance. ++**Severity**: High ++### [Linux virtual machines should enforce kernel module signature validation](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e2f798b8-621a-4d46-99d7-1310e09eba26) ++**Description**: To help mitigate against the execution of malicious or unauthorized code in kernel mode, enforce kernel module signature validation on supported Linux virtual machines. Kernel module signature validation ensures that only trusted kernel modules will be allowed to run. This assessment only applies to Linux virtual machines that have the Azure Monitor Agent installed. +(No related policy) ++**Severity**: Low ++### [Linux virtual machines should use only signed and trusted boot components](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ad50b498-f90c-451f-886f-d0a169cc5002) ++**Description**: With Secure Boot enabled, all OS boot components (boot loader, kernel, kernel drivers) must be signed by trusted publishers. Defender for Cloud has identified untrusted OS boot components on one or more of your Linux machines. To protect your machines from potentially malicious components, add them to your allowlist or remove the identified components. 
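If you want to spot-check a specific machine before remediating, Secure Boot state can be read from inside a UEFI Linux guest. A minimal local-check sketch follows; it assumes efivarfs is mounted at its standard location and is not a substitute for the Defender for Cloud assessment.

```python
from pathlib import Path

# The SecureBoot variable, stored under the EFI global variable GUID.
SB_VAR = Path("/sys/firmware/efi/efivars/SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c")

def secure_boot_enabled():
    """Return True/False for the Secure Boot state, or None if it can't be determined."""
    if not SB_VAR.exists():
        return None  # legacy BIOS boot, or efivarfs not mounted
    data = SB_VAR.read_bytes()
    # efivarfs entries carry 4 attribute bytes followed by the data; the last byte is the value.
    return bool(data[-1]) if data else None

state = secure_boot_enabled()
print({True: "Secure Boot is enabled", False: "Secure Boot is disabled", None: "Secure Boot state unknown"}[state])
```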
+(No related policy) ++**Severity**: Low ++### [Linux virtual machines should use Secure Boot](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/0396b18c-41aa-489c-affd-4ee5d1714a59) ++**Description**: To protect against the installation of malware-based rootkits and boot kits, enable Secure Boot on supported Linux virtual machines. Secure Boot ensures that only signed operating systems and drivers will be allowed to run. This assessment only applies to Linux virtual machines that have the Azure Monitor Agent installed. +(No related policy) ++**Severity**: Low ++++### [Log Analytics agent should be installed on Linux-based Azure Arc-enabled machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/720a3e77-0b9a-4fa9-98b6-ddf0fd7e32c1) ++**Description**: Defender for Cloud uses the Log Analytics agent (also known as OMS) to collect security events from your Azure Arc machines. To deploy the agent on all your Azure Arc machines, follow the remediation steps. +(No related policy) ++**Severity**: High ++As use of the AMA and MMA is phased out in Defender for Servers, recommendations that rely on those agents, like this one, will be removed. Instead, Defender for Servers features will use the Microsoft Defender for Endpoint agent, or agentless scanning, with no reliance on the MMA or AMA. ++Estimated deprecation: July 2024 +++### [Log Analytics agent should be installed on virtual machine scale sets](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/45cfe080-ceb1-a91e-9743-71551ed24e94) ++**Description**: Defender for Cloud collects data from your Azure virtual machines (VMs) to monitor for security vulnerabilities and threats. Data is collected using the [Log Analytics agent](../azure-monitor/platform/log-analytics-agent.md), formerly known as the Microsoft Monitoring Agent (MMA), which reads various security-related configurations and event logs from the machine and copies the data to your workspace for analysis. You'll also need to follow that procedure if your VMs are used by an Azure managed service such as Azure Kubernetes Service or Azure Service Fabric. You cannot configure auto-provisioning of the agent for Azure virtual machine scale sets. To deploy the agent on virtual machine scale sets (including those used by Azure managed services such as Azure Kubernetes Service and Azure Service Fabric), follow the procedure in the remediation steps. +(Related policy: [Log Analytics agent should be installed on your virtual machine scale sets for Azure Security Center monitoring](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fa3a6ea0c-e018-4933-9ef0-5aaa1501449b)). ++As use of the AMA and MMA is phased out in Defender for Servers, recommendations that rely on those agents, like this one, will be removed. Instead, Defender for Servers features will use the Microsoft Defender for Endpoint agent, or agentless scanning, with no reliance on the MMA or AMA. ++Estimated deprecation: July 2024 +++**Severity**: High ++### [Log Analytics agent should be installed on virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/d1db3318-01ff-16de-29eb-28b344515626) ++**Description**: Defender for Cloud collects data from your Azure virtual machines (VMs) to monitor for security vulnerabilities and threats. 
Data is collected using the [Log Analytics agent](../azure-monitor/platform/log-analytics-agent.md), formerly known as the Microsoft Monitoring Agent (MMA), which reads various security-related configurations and event logs from the machine and copies the data to your Log Analytics workspace for analysis. This agent is also required if your VMs are used by an Azure managed service such as Azure Kubernetes Service or Azure Service Fabric. We recommend configuring [auto-provisioning](enable-data-collection.md) to automatically deploy the agent. If you choose not to use auto-provisioning, manually deploy the agent to your VMs using the instructions in the remediation steps. +(Related policy: [Log Analytics agent should be installed on your virtual machine for Azure Security Center monitoring](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fa4fe33eb-e377-4efb-ab31-0784311bc499)). ++As use of the AMA and MMA is phased out in Defender for Servers, recommendations that rely on those agents, like this one, will be removed. Instead, Defender for Servers features will use the Microsoft Defender for Endpoint agent, or agentless scanning, with no reliance on the MMA or AMA. ++Estimated deprecation: July 2024 +++**Severity**: High ++### [Log Analytics agent should be installed on Windows-based Azure Arc-enabled machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/27ac71b1-75c5-41c2-adc2-858f5db45b08) ++**Description**: Defender for Cloud uses the Log Analytics agent (also known as MMA) to collect security events from your Azure Arc machines. To deploy the agent on all your Azure Arc machines, follow the remediation steps. +(No related policy) ++**Severity**: High ++As use of the AMA and MMA is phased out in Defender for Servers, recommendations that rely on those agents, like this one, will be removed. Instead, Defender for Servers features will use the Microsoft Defender for Endpoint agent, or agentless scanning, with no reliance on the MMA or AMA. ++Estimated deprecation: July 2024 ++### [Machines should be configured securely](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c476dc48-8110-4139-91af-c8d940896b98) ++**Description**: Remediate vulnerabilities in security configuration on your machines to protect them from attacks. +(Related policy: [Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15)). ++This recommendation helps you to improve server security posture. Defender for Cloud enhances the Center for Internet Security (CIS) benchmarks by providing security baselines that are powered by Microsoft Defender Vulnerability Management. [Learn more](remediate-security-baseline.md). ++**Severity**: Low ++### [Machines should be restarted to apply security configuration updates](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/d79a60ef-d490-484e-91ed-f45ceb0e7cfb) ++**Description**: To apply security configuration updates and protect against vulnerabilities, restart your machines. This assessment only applies to Linux virtual machines that have the Azure Monitor Agent installed. 
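As a quick local check, Debian- and Ubuntu-based images signal a pending reboot through a marker file; other distributions use different mechanisms. A minimal sketch, assuming that convention:

```python
from pathlib import Path

reboot_flag = Path("/var/run/reboot-required")    # written by packages that need a restart
pkg_list = Path("/var/run/reboot-required.pkgs")  # optional list of triggering packages

if reboot_flag.exists():
    packages = pkg_list.read_text().split() if pkg_list.exists() else []
    print(f"Reboot required; triggered by {len(packages)} package(s): {packages}")
else:
    print("No reboot pending")
```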
+(No related policy) ++**Severity**: Low ++### [Machines should have a vulnerability assessment solution](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ffff0522-1e88-47fc-8382-2a80ba848f5d) ++**Description**: Defender for Cloud regularly checks your connected machines to ensure they're running vulnerability assessment tools. Use this recommendation to deploy a vulnerability assessment solution. +(Related policy: [A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f501541f7-f7e7-4cd6-868c-4190fdad3ac9)). ++**Severity**: Medium ++### [Machines should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1195afff-c881-495e-9bc5-1486211ae03f) ++**Description**: Resolve the findings from the vulnerability assessment solutions on your virtual machines. +(Related policy: [A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f501541f7-f7e7-4cd6-868c-4190fdad3ac9)). ++**Severity**: Low ++### [Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/805651bc-6ecd-4c73-9b55-97a19d0582d0) ++**Description**: Defender for Cloud has identified some overly permissive inbound rules for management ports in your Network Security Group. Enable just-in-time access control to protect your VM from internet-based brute-force attacks. Learn more in [Understanding just-in-time (JIT) VM access](just-in-time-access-overview.md). +(Related policy: [Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fb0f33259-77d7-4c9e-aac6-3aabcfae693c)). ++**Severity**: High ++### [Microsoft Defender for Servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/56a6e81f-7413-4f72-9a1b-aaeeaa87c872) ++**Description**: Microsoft Defender for servers provides real-time threat protection for your server workloads and generates hardening recommendations as well as alerts about suspicious activities. +You can use this information to quickly remediate security issues and improve the security of your servers. ++Remediating this recommendation will result in charges for protecting your servers. If you don't have any servers in this subscription, no charges will be incurred. +If you create any servers on this subscription in the future, they will automatically be protected and charges will begin at that time. +Learn more in [Introduction to Microsoft Defender for servers](defender-for-servers-introduction.md). +(Related policy: [Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicyDefinitions%2f4da35fc9-c9e7-4960-aec9-797fe7d9051d)). 
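A minimal sketch for enabling the plan programmatically with the `azure-mgmt-security` package is shown below. Operation and constructor signatures differ between SDK versions, so treat this as illustrative; the pricing resource name used for Defender for Servers is assumed to be "VirtualMachines".

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.security import SecurityCenter

subscription_id = "<subscription-id>"  # placeholder

# Note: some older azure-mgmt-security versions require an extra location argument here.
security = SecurityCenter(DefaultAzureCredential(), subscription_id)

current = security.pricings.get("VirtualMachines")
print("Current Defender for Servers tier:", current.pricing_tier)

# Switch the servers plan to the Standard (Defender) tier; this starts billing.
security.pricings.update("VirtualMachines", {"pricing_tier": "Standard"})
```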
++**Severity**: High ++### [Microsoft Defender for Servers should be enabled on workspaces](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1ce68079-b783-4404-b341-d2851d6f0fa2) ++**Description**: Microsoft Defender for servers brings threat detection and advanced defenses for your Windows and Linux machines. +With this Defender plan enabled on your subscriptions but not on your workspaces, you're paying for the full capability of Microsoft Defender for servers but missing out on some of the benefits. +When you enable Microsoft Defender for servers on a workspace, all machines reporting to that workspace will be billed for Microsoft Defender for servers - even if they're in subscriptions without Defender plans enabled. Unless you also enable Microsoft Defender for servers on the subscription, those machines won't be able to take advantage of just-in-time VM access, adaptive application controls, and network detections for Azure resources. +Learn more in [Introduction to Microsoft Defender for servers](defender-for-servers-introduction.md). +(No related policy) ++**Severity**: Medium ++### [Secure Boot should be enabled on supported Windows virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/69ad830b-d98c-b1cf-2158-9d69d38c7093) ++**Description**: Enable Secure Boot on supported Windows virtual machines to mitigate against malicious and unauthorized changes to the boot chain. Once enabled, only trusted bootloaders, kernel, and kernel drivers will be allowed to run. This assessment only applies to trusted launch enabled Windows virtual machines. +++- Trusted launch requires the creation of new virtual machines. +- You can't enable trusted launch on existing virtual machines that were initially created without it. ++Learn more about [Trusted launch for Azure virtual machines](../virtual-machines/trusted-launch.md). +(No related policy) ++**Severity**: Low ++### [Service Fabric clusters should have the ClusterProtectionLevel property set to EncryptAndSign](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/7f04fc0c-4a3d-5c7e-ce19-666cb871b510) ++**Description**: Service Fabric provides three levels of protection (None, Sign, and EncryptAndSign) for node-to-node communication using a primary cluster certificate. Set the protection level to ensure that all node-to-node messages are encrypted and digitally signed. +(Related policy: [Service Fabric clusters should have the ClusterProtectionLevel property set to EncryptAndSign](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f617c02be-7f02-4efd-8836-3180d47b6c68)). ++**Severity**: High ++### [Service Fabric clusters should only use Azure Active Directory for client authentication](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/03afeb6f-7634-adb3-0a01-803b0b9cb611) ++**Description**: Perform Client authentication only via Azure Active Directory in Service Fabric +(Related policy: [Service Fabric clusters should only use Azure Active Directory for client authentication](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fb54ed75b-3e1a-44ac-a333-05ba39b99ff0)). 
++**Severity**: High +++### [System updates on virtual machine scale sets should be installed](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/bd20bd91-aaf1-7f14-b6e4-866de2f43146) ++**Description**: Install missing system security and critical updates to secure your Windows and Linux virtual machine scale sets. +(Related policy: [System updates on virtual machine scale sets should be installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fc3f317a7-a95c-4547-b7e7-11017ebdf2fe)). ++As use of the Azure Monitor Agent (AMA) and the Log Analytics agent (also known as the Microsoft Monitoring Agent (MMA)) is phased out in Defender for Servers, recommendations that rely on those agents, like this one, will be removed. Instead, Defender for Servers features will use the Microsoft Defender for Endpoint agent, or agentless scanning, with no reliance on the MMA or AMA. ++Estimated deprecation: July 2024. These recommendations are [replaced by new ones](release-notes-archive.md#two-recommendations-related-to-missing-operating-system-os-updates-were-released-to-ga). +++**Severity**: High ++### [System updates should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/4ab6e3c5-74dd-8b35-9ab9-f61b30875b27) ++**Description**: Install missing system security and critical updates to secure your Windows and Linux virtual machines and computers +(Related policy: [System updates should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicyDefinitions%2f86b3d65f-7626-441e-b690-81a8b71cff60)). ++As use of the Azure Monitor Agent (AMA) and the Log Analytics agent (also known as the Microsoft Monitoring Agent (MMA)) is phased out in Defender for Servers, recommendations that rely on those agents, like this one, will be removed. Instead, Defender for Servers features will use the Microsoft Defender for Endpoint agent, or agentless scanning, with no reliance on the MMA or AMA. ++Estimated deprecation: July 2024. These recommendations are [replaced by new ones](release-notes-archive.md#two-recommendations-related-to-missing-operating-system-os-updates-were-released-to-ga). ++**Severity**: High ++### [System updates should be installed on your machines (powered by Update Center)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e1145ab1-eb4f-43d8-911b-36ddf771d13f) ++**Description**: Your machines are missing system, security, and critical updates. Software updates often include critical patches to security holes. Such holes are frequently exploited in malware attacks so it's vital to keep your software updated. To install all outstanding patches and secure your machines, follow the remediation steps. +(No related policy) ++**Severity**: High ++### [Virtual machines and virtual machine scale sets should have encryption at host enabled](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/efbbd784-656d-473a-9863-ea7693bfcd2a) ++**Description**: Use encryption at host to get end-to-end encryption for your virtual machine and virtual machine scale set data. Encryption at host enables encryption at rest for your temporary disk and OS/data disk caches. 
Temporary and ephemeral OS disks are encrypted with platform-managed keys when encryption at host is enabled. OS/data disk caches are encrypted at rest with either a customer-managed or a platform-managed key, depending on the encryption type selected on the disk. Learn more at [Use the Azure portal to enable end-to-end encryption using encryption at host](../virtual-machines/disks-enable-host-based-encryption-portal.md). (Related policy: [Virtual machines and virtual machine scale sets should have encryption at host enabled](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2ffc4d8e41-e223-45ea-9bf5-eada37891d87)). ++**Severity**: Medium ++### [Virtual machines should be migrated to new Azure Resource Manager resources](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/12018f4f-3d10-999b-e4c4-86ec25be08a1) ++**Description**: Virtual machines (classic) are deprecated, and these VMs should be migrated to Azure Resource Manager. +Because Azure Resource Manager now has full IaaS capabilities and other advancements, we deprecated the management of IaaS virtual machines (VMs) through Azure Service Manager (ASM) on February 28, 2020. This functionality will be fully retired on March 1, 2023. ++To view all affected classic VMs, make sure to select all your Azure subscriptions under the 'directories + subscriptions' tab. ++Available resources and information about this tool & migration: +[Overview of Virtual machines (classic) deprecation, step by step process for migration & available Microsoft resources.](../virtual-machines/classic-vm-deprecation.md?toc=/azure/virtual-machines/windows/toc.json&bc=/azure/virtual-machines/windows/breadcrumb/toc.json) +[Details about Migrate to Azure Resource Manager migration tool.](../virtual-machines/migration-classic-resource-manager-deep-dive.md?toc=/azure/virtual-machines/windows/toc.json&bc=/azure/virtual-machines/windows/breadcrumb/toc.json) +[Migrate to Azure Resource Manager migration tool using PowerShell.](../virtual-machines/windows/migration-classic-resource-manager-ps.md) +(Related policy: [Virtual machines should be migrated to new Azure Resource Manager resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f1d84d5fb-01f6-4d12-ba4f-4a26081d403d)). ++**Severity**: High +++### [Virtual machines guest attestation status should be healthy](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/b7604066-ed76-45f9-a5c1-c97e4812dc55) ++**Description**: Guest attestation is performed by sending a trusted log (TCGLog) to an attestation server. The server uses these logs to determine whether boot components are trustworthy. This assessment is intended to detect compromises of the boot chain, which might be the result of a ```bootkit``` or ```rootkit``` infection. +This assessment only applies to Trusted Launch enabled virtual machines that have the Guest Attestation extension installed. +(No related policy) ++**Severity**: Medium ++### [Virtual machines' Guest Configuration extension should be deployed with system-assigned managed identity](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/69133b6b-695a-43eb-a763-221e19556755) ++**Description**: The Guest Configuration extension requires a system assigned managed identity.
Azure virtual machines in the scope of this policy will be non-compliant when they have the Guest Configuration extension installed but do not have a system assigned managed identity. [Learn more](https://aka.ms/gcpol) +(Related policy: [Guest Configuration extension should be deployed to Azure virtual machines with system assigned managed identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fd26f7642-7545-4e18-9b75-8c9bbdee3a9a)). ++**Severity**: Medium +++### [Virtual machine scale sets should be configured securely](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/8941d121-f740-35f6-952c-6561d2b38d36) ++**Description**: On virtual machine scale sets, remediate vulnerabilities to protect them from attacks. +(Related policy: [Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4)). ++**Severity**: High +++### [Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/d57a4221-a804-52ca-3dea-768284f06bb7) ++**Description**: By default, a virtual machine's OS and data disks are encrypted-at-rest using platform-managed keys; +temp disks and data caches aren't encrypted, and data isn't encrypted when flowing between compute and storage resources. +For a comparison of different disk encryption technologies in Azure, see <https://aka.ms/diskencryptioncomparison>. +Use Azure Disk Encryption to encrypt all this data. +Disregard this recommendation if: ++You're using the encryption-at-host feature, or server-side encryption on Managed Disks meets your security requirements. Learn more in [server-side encryption of Azure Disk Storage](https://aka.ms/disksse). ++(Related policy: [Disk encryption should be applied on virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f0961003e-5a0a-4549-abde-af6a37f2724d)) ++**Severity**: High ++### [vTPM should be enabled on supported virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/861bbc73-0a55-8d1d-efc6-e92d9e1176e0) ++**Description**: Enable virtual TPM device on supported virtual machines to facilitate Measured Boot and other OS security features that require a TPM. Once enabled, vTPM can be used to attest boot integrity. This assessment only applies to trusted launch enabled virtual machines. +++- Trusted launch requires the creation of new virtual machines. +- You can't enable trusted launch on existing virtual machines that were initially created without it. ++Learn more about [Trusted launch for Azure virtual machines](../virtual-machines/trusted-launch.md). +(No related policy) ++**Severity**: Low ++### [Vulnerabilities in security configuration on your Linux machines should be remediated (powered by Guest Configuration)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1f655fb7-63ca-4980-91a3-56dbc2b715c6) ++**Description**: Remediate vulnerabilities in security configuration on your Linux machines to protect them from attacks. 
+(Related policy: [Linux machines should meet requirements for the Azure security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2ffc9b3da7-8347-4380-8e70-0a0361d8dedd)). ++**Severity**: Low ++### [Vulnerabilities in security configuration on your Windows machines should be remediated (powered by Guest Configuration)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/8c3d9ad0-3639-4686-9cd2-2b2ab2609bda) ++**Description**: Remediate vulnerabilities in security configuration on your Windows machines to protect them from attacks. +(No related policy) ++**Severity**: Low ++### [Windows Defender Exploit Guard should be enabled on machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/22489c48-27d1-4e40-9420-4303ad9cffef) ++**Description**: Windows Defender Exploit Guard uses the Azure Policy Guest Configuration agent. Exploit Guard has four components that are designed to lock down devices against a wide variety of attack vectors and block behaviors commonly used in malware attacks while enabling enterprises to balance their security risk and productivity requirements (Windows only). +(Related policy: [Audit Windows machines on which Windows Defender Exploit Guard is not enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicyDefinitions%2fbed48b13-6647-468e-aa2f-1af1d3f4dd40)). ++**Severity**: Medium ++### [Windows virtual machines should enable Azure Disk Encryption or EncryptionAtHost](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/0cb5f317-a94b-6b80-7212-13a9cc8826af) ++**Description**: By default, a virtual machine's OS and data disks are encrypted-at-rest using platform-managed keys; temp disks and data caches aren't encrypted, and data isn't encrypted when flowing between compute and storage resources. Use Azure Disk Encryption or EncryptionAtHost to encrypt all this data. Visit [https://aka.ms/diskencryptioncomparison](https://aka.ms/diskencryptioncomparison) to compare encryption offerings. This policy requires two prerequisites to be deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). +(Related policy: [[Preview]: Windows virtual machines should enable Azure Disk Encryption or EncryptionAtHost](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f3dc5edcd-002d-444c-b216-e123bbfa37c0)). ++Replaces the older recommendation Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources. The recommendation enables you to audit VM encryption compliance. ++**Severity**: High ++### [Windows web servers should be configured to use secure communication protocols](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/87448ec1-55f6-3746-3f79-0f35beee76b4) ++**Description**: To protect the privacy of information communicated over the Internet, your web servers should use the latest version of the industry-standard cryptographic protocol, Transport Layer Security (TLS). TLS secures communications over a network by using security certificates to encrypt a connection between machines. 
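As a quick, hedged way to see what a web server actually negotiates, the following sketch (assuming Python 3 and a reachable hostname; `www.example.com` is a placeholder) opens a TLS connection and reports the protocol version. Servers that still answer with TLS 1.0 or 1.1 are candidates for remediation.

```python
import socket
import ssl

def negotiated_tls_version(host: str, port: int = 443) -> str:
    """Connect to host:port and return the negotiated TLS protocol version."""
    context = ssl.create_default_context()  # certificate and hostname validation enabled
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # for example, 'TLSv1.2' or 'TLSv1.3'

if __name__ == "__main__":
    print(negotiated_tls_version("www.example.com"))  # placeholder hostname
```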
+(Related policy: [Audit Windows web servers that are not using secure communication protocols](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f5752e6d6-1206-46d8-8ab1-ecc2f71a8112)). ++**Severity**: High ++++## AWS Compute recommendations ++### [Amazon EC2 instances managed by Systems Manager should have a patch compliance status of COMPLIANT after a patch installation](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/5b3c2887-d7b7-4887-b074-4e6057027709) ++**Description**: This control checks whether the Amazon EC2 Systems Manager patch compliance status is COMPLIANT or NON_COMPLIANT after the patch installation on the instance. +It only checks instances managed by AWS Systems Manager Patch Manager. +It doesn't check whether the patch was applied within the 30-day limit prescribed by PCI DSS requirement '6.2'. +It also doesn't validate whether the patches applied were classified as security patches. +You should create patching groups with the appropriate baseline settings and ensure in-scope systems are managed by those patch groups in Systems Manager. For more information about patch groups, see the [AWS Systems Manager User Guide](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-patch-group-tagging.html). ++**Severity**: Medium ++### [Amazon EFS should be configured to encrypt file data at rest using AWS KMS](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/4e482075-311f-401e-adc7-f8a8affc5635) ++**Description**: This control checks whether Amazon Elastic File System is configured to encrypt the file data using AWS KMS. The check fails in the following cases: +* "[Encrypted](https://docs.aws.amazon.com/efs/latest/ug/API_DescribeFileSystems.html)" is set to "false" in the DescribeFileSystems response. +* The "[KmsKeyId](https://docs.aws.amazon.com/efs/latest/ug/API_DescribeFileSystems.html)" key in the [DescribeFileSystems](https://docs.aws.amazon.com/efs/latest/ug/API_DescribeFileSystems.html) response doesn't match the KmsKeyId parameter for [efs-encrypted-check](https://docs.aws.amazon.com/config/latest/developerguide/efs-encrypted-check.html). + Note that this control doesn't use the "KmsKeyId" parameter for [efs-encrypted-check](https://docs.aws.amazon.com/config/latest/developerguide/efs-encrypted-check.html). It only checks the value of "Encrypted". For an added layer of security for your sensitive data in Amazon EFS, you should create encrypted file systems. + Amazon EFS supports encryption for file systems at rest. You can enable encryption of data at rest when you create an Amazon EFS file system. +To learn more about Amazon EFS encryption, see [Data encryption in Amazon EFS](https://docs.aws.amazon.com/efs/latest/ug/encryption.html) in the Amazon Elastic File System User Guide. ++**Severity**: Medium ++### [Amazon EFS volumes should be in backup plans](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e864e460-158b-4a4a-beb9-16ebc25c1240) ++**Description**: This control checks whether Amazon Elastic File System (Amazon EFS) file systems are added to the backup plans in AWS Backup. The control fails if Amazon EFS file systems aren't included in the backup plans. + Including EFS file systems in the backup plans helps you to protect your data from deletion and data loss.
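A minimal sketch of this kind of check, assuming the AWS SDK for Python (boto3) and credentials with read access to EFS and AWS Backup, lists file systems that AWS Backup doesn't report as protected. It's illustrative only and handles just the first page of `describe_file_systems` results.

```python
import boto3

def efs_not_in_backup_plans(region: str = "us-east-1") -> list[str]:
    """Return ARNs of EFS file systems that AWS Backup doesn't list as protected."""
    efs = boto3.client("efs", region_name=region)
    backup = boto3.client("backup", region_name=region)

    # First page only; a production check would paginate with the Marker token.
    file_system_arns = {
        fs["FileSystemArn"] for fs in efs.describe_file_systems()["FileSystems"]
    }

    protected_arns, token = set(), None
    while True:
        page = backup.list_protected_resources(**({"NextToken": token} if token else {}))
        protected_arns.update(
            r["ResourceArn"] for r in page.get("Results", []) if r.get("ResourceType") == "EFS"
        )
        token = page.get("NextToken")
        if not token:
            break

    return sorted(file_system_arns - protected_arns)

if __name__ == "__main__":
    for arn in efs_not_in_backup_plans():
        print("Not in any backup plan:", arn)
```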
++**Severity**: Medium ++### [Application Load Balancer deletion protection should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/5c508bf1-26f9-4696-bb61-8341d395e3de) ++**Description**: This control checks whether an Application Load Balancer has deletion protection enabled. The control fails if deletion protection isn't configured. +Enable deletion protection to protect your Application Load Balancer from deletion. ++**Severity**: Medium ++### [Auto Scaling groups associated with a load balancer should use health checks](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/837d6a45-503f-4c95-bf42-323763960b62) ++**Description**: Auto Scaling groups that are associated with a load balancer are using Elastic Load Balancing health checks. + PCI DSS doesn't require load balancing or highly available configurations. This is recommended by AWS best practices. ++**Severity**: Low ++### [AWS accounts should have Azure Arc auto provisioning enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/882a80f0-943f-473e-b6d7-40c7a625540e) ++**Description**: For full visibility of the security content from Microsoft Defender for servers, EC2 instances should be connected to Azure Arc. To ensure that all eligible EC2 instances automatically receive Azure Arc, enable autoprovisioning from Defender for Cloud at the AWS account level. Learn more about [Azure Arc](../azure-arc/servers/overview.md), and [Microsoft Defender for Servers](plan-defender-for-servers.md). ++**Severity**: High ++### [CloudFront distributions should have origin failover configured](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/4779e962-2ea3-4126-aa76-379ea271887c) ++**Description**: This control checks whether an Amazon CloudFront distribution is configured with an origin group that has two or more origins. +CloudFront origin failover can increase availability. Origin failover automatically redirects traffic to a secondary origin if the primary origin is unavailable or if it returns specific HTTP response status codes. ++**Severity**: Medium ++### [CodeBuild GitHub or Bitbucket source repository URLs should use OAuth](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/9694d4ef-f21a-40b7-b535-618ac5c5d21e) ++**Description**: This control checks whether the GitHub or Bitbucket source repository URL contains either personal access tokens or a user name and password. +Authentication credentials should never be stored or transmitted in clear text or appear in the repository URL. Instead of personal access tokens or user name and password, you should use OAuth to grant authorization for accessing GitHub or Bitbucket repositories. + Using personal access tokens or a user name and password could expose your credentials to unintended data exposure and unauthorized access. ++**Severity**: High ++### [CodeBuild project environment variables should not contain credentials](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/a88b4b72-b461-4b5e-b024-91da1cbe500f) ++**Description**: This control checks whether the project contains the environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`. 
+Authentication credentials `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` should never be stored in clear text, as this could lead to unintended data exposure and unauthorized access. ++**Severity**: High ++### [DynamoDB Accelerator (DAX) clusters should be encrypted at rest](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/58e67d3d-8b17-4c1c-9bc4-550b10f0328a) ++**Description**: This control checks whether a DAX cluster is encrypted at rest. + Encrypting data at rest reduces the risk of data stored on disk being accessed by a user not authenticated to AWS. The encryption adds another set of access controls to limit the ability of unauthorized users to access to the data. + For example, API permissions are required to decrypt the data before it can be read. ++**Severity**: Medium ++### [DynamoDB tables should automatically scale capacity with demand](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/47476790-2527-4bdb-b839-3b48ed18dccf) ++**Description**: This control checks whether an Amazon DynamoDB table can scale its read and write capacity as needed. This control passes if the table uses either on-demand capacity mode or provisioned mode with auto scaling configured. + Scaling capacity with demand avoids throttling exceptions, which helps to maintain availability of your applications. ++**Severity**: Medium ++### [EC2 instances should be connected to Azure Arc](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/231dee23-84db-44d2-bd9d-c32fbcfb42a3) ++**Description**: Connect your EC2 instances to Azure Arc in order to have full visibility to Microsoft Defender for Servers security content. Learn more about [Azure Arc](../azure-arc/servers/overview.md), and about [Microsoft Defender for Servers](plan-defender-for-servers.md) on hybrid-cloud environment. ++**Severity**: High ++### [EC2 instances should be managed by AWS Systems Manager](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/4be5393d-cc33-4ef7-acae-80295bc3ae35) ++**Description**: Status of the Amazon EC2 Systems Manager patch compliance is 'COMPLIANT' or 'NON_COMPLIANT' after the patch installation on the instance. + Only instances managed by AWS Systems Manager Patch Manager are checked. Patches that were applied within the 30-day limit prescribed by PCI DSS requirement '6' aren't checked. ++**Severity**: Medium ++### [EDR configuration issues should be resolved on EC2s](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/695abd03-82bd-4d7f-a94c-140e8a17666c) ++**Description**: To protect virtual machines from the latest threats and vulnerabilities, resolve all identified configuration issues with the installed Endpoint Detection and Response (EDR) solution. Currently, this recommendation only applies to resources with Microsoft Defender for Endpoint enabled. ++This agentless endpoint recommendation is available if you have Defender for Servers Plan 2 or the Defender CSPM plan. [Learn more](endpoint-detection-response.md) about agentless endpoint protection recommendations. +- These new agentless endpoint recommendations support Azure and multicloud machines. On-premises servers aren't supported. 
+- These new agentless endpoint recommendations replace existing recommendations [Endpoint protection should be installed on your machines (preview)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/4fb67663-9ab9-475d-b026-8c544cced439) and [Endpoint protection health issues should be resolved on your machines (preview)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/37a3689a-818e-4a0e-82ac-b1392b9bb000). +- These older recommendations use the MMA/AMA agent and will be replaced as the agents are [phased out in Defender for Servers](https://techcommunity.microsoft.com/t5/user/ssoregistrationpage?dest_url=https:%2F%2Ftechcommunity.microsoft.com%2Ft5%2Fblogs%2Fblogworkflowpage%2Fblog-id%2FMicrosoftDefenderCloudBlog%2Farticle-id%2F1269). ++**Severity**: High ++### [EDR solution should be installed on EC2s](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/77d09952-2bc2-4495-8795-cc8391452f85) ++**Description**: To protect EC2s, install an Endpoint Detection and Response (EDR) solution. EDRs help prevent, detect, investigate, and respond to advanced threats. Use Microsoft Defender for Servers to deploy Microsoft Defender for Endpoint. If resource is classified as "Unhealthy", it doesn't have a supported EDR solution installed. If you have an EDR solution installed which isn't discoverable by this recommendation, you can exempt it. ++This agentless endpoint recommendation is available if you have Defender for Servers Plan 2 or the Defender CSPM plan. [Learn more](endpoint-detection-response.md) about agentless endpoint protection recommendations. ++- These new agentless endpoint recommendations support Azure and multicloud machines. On-premises servers aren't supported. +- These new agentless endpoint recommendations replace existing recommendations [Endpoint protection should be installed on your machines (preview)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/4fb67663-9ab9-475d-b026-8c544cced439) and [Endpoint protection health issues should be resolved on your machines (preview)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/37a3689a-818e-4a0e-82ac-b1392b9bb000). +- These older recommendations use the MMA/AMA agent and will be replaced as the agents are [phased out in Defender for Servers](https://techcommunity.microsoft.com/t5/user/ssoregistrationpage?dest_url=https:%2F%2Ftechcommunity.microsoft.com%2Ft5%2Fblogs%2Fblogworkflowpage%2Fblog-id%2FMicrosoftDefenderCloudBlog%2Farticle-id%2F1269). ++**Severity**: High ++### [Instances managed by Systems Manager should have an association compliance status of COMPLIANT](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/67a90ae0-b3d1-44f0-9dcf-a03234ebeb65) ++**Description**: This control checks whether the status of the AWS Systems Manager association compliance is COMPLIANT or NON_COMPLIANT after the association is run on an instance. The control passes if the association compliance status is COMPLIANT. +A State Manager association is a configuration that is assigned to your managed instances. The configuration defines the state that you want to maintain on your instances. For example, an association can specify that antivirus software must be installed and running on your instances, or that certain ports must be closed. 
+After you create one or more State Manager associations, compliance status information is immediately available to you in the console or in response to AWS CLI commands or corresponding Systems Manager API operations. For associations, "Configuration" Compliance shows statuses of Compliant or Non-compliant and the severity level assigned to the association, such as *Critical* or *Medium*. To learn more about State Manager association compliance, see [About State Manager association compliance](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-compliance-about.html#sysman-compliance-about-association) in the AWS Systems Manager User Guide. +You must configure your in-scope EC2 instances for Systems Manager association. You must also configure the patch baseline for the security rating of the vendor of patches, and set the autoapproval date to meet PCI DSS *3.2.1* requirement *6.2*. For more guidance, see [Create an association](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-state-assoc.html) in the AWS Systems Manager User Guide. For more information on working with patching in Systems Manager, see [AWS Systems Manager Patch Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-patch.html) in the AWS Systems Manager User Guide. ++**Severity**: Low ++### [Lambda functions should have a dead-letter queue configured](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dcf10b98-798f-4734-9afd-800916bf1e65) ++**Description**: This control checks whether a Lambda function is configured with a dead-letter queue. The control fails if the Lambda function isn't configured with a dead-letter queue. +As an alternative to an on-failure destination, you can configure your function with a dead-letter queue to save discarded events for further processing. + A dead-letter queue acts the same as an on-failure destination. It's used when an event fails all processing attempts or expires without being processed. +A dead-letter queue allows you to look back at errors or failed requests to your Lambda function to debug or identify unusual behavior. +From a security perspective, it's important to understand why your function failed and to ensure that your function doesn't drop data or compromise data security as a result. + For example, if your function can't communicate with an underlying resource, that could be a symptom of a denial of service (DoS) attack elsewhere in the network. ++**Severity**: Medium ++### [Lambda functions should use supported runtimes](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e656e5b7-130c-4fb4-be90-9bdd4f82fdfb) ++**Description**: This control checks that the Lambda function settings for runtimes match the expected values set for the supported runtimes for each language. This control checks for the following runtimes: + **nodejs14.x**, **nodejs12.x**, **nodejs10.x**, **python3.8**, **python3.7**, **python3.6**, **ruby2.7**, **ruby2.5**, **java11**, **java8**, **java8.al2**, **go1.x**, **dotnetcore3.1**, **dotnetcore2.1** +[Lambda runtimes](https://docs.aws.amazon.com/lambda/latest/dg/lambda-runtimes.html) are built around a combination of operating system, programming language, and software libraries that are subject to maintenance and security updates. When a runtime component is no longer supported for security updates, Lambda deprecates the runtime.
Even though you can't create functions that use the deprecated runtime, the function is still available to process invocation events. Make sure that your Lambda functions are current and don't use out-of-date runtime environments. +To learn more about the supported runtimes that this control checks for the supported languages, see [AWS Lambda runtimes](https://docs.aws.amazon.com/lambda/latest/dg/lambda-runtimes.html) in the AWS Lambda Developer Guide. ++**Severity**: Medium ++### [Management ports of EC2 instances should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/9b26b102-ccde-4697-aa30-f0621f865f99) ++**Description**: Microsoft Defender for Cloud identified some overly permissive inbound rules for management ports in your network. Enable just-in-time access control to protect your Instances from internet-based brute-force attacks. [Learn more](just-in-time-access-usage.yml). ++**Severity**: High ++### [Unused EC2 security groups should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/f065cc7b-f63b-4865-b8ff-4a1292e1a5cb) ++**Description**: Security groups should be attached to Amazon EC2 instances or to an ENI. + Healthy finding can indicate there are unused Amazon EC2 security groups. ++**Severity**: Low ++## GCP Compute recommendations ++### [Compute Engine VMs should use the Container-Optimized OS](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/3e33004b-f0b8-488d-85ed-61336c7ad4ca) ++**Description**: This recommendation evaluates the config property of a node pool for the key-value pair, 'imageType': 'COS.' ++**Severity**: Low ++### [EDR configuration issues should be resolved on GCP virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/f36a15fb-61a6-428c-b719-6319538ecfbc) ++**Description**: To protect virtual machines from the latest threats and vulnerabilities, resolve all identified configuration issues with the installed Endpoint Detection and Response (EDR) solution. Currently, this recommendation only applies to resources with Microsoft Defender for Endpoint enabled. ++This agentless endpoint recommendation is available if you have Defender for Servers Plan 2 or the Defender CSPM plan. [Learn more](endpoint-detection-response.md) about agentless endpoint protection recommendations. ++- These new agentless endpoint recommendations support Azure and multicloud machines. On-premises servers aren't supported. +- These new agentless endpoint recommendations replace existing recommendations [Endpoint protection should be installed on your machines (preview)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/4fb67663-9ab9-475d-b026-8c544cced439) and [Endpoint protection health issues should be resolved on your machines (preview)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/37a3689a-818e-4a0e-82ac-b1392b9bb000). +- These older recommendations use the MMA/AMA agent and will be replaced as the agents are [phased out in Defender for Servers](https://techcommunity.microsoft.com/t5/user/ssoregistrationpage?dest_url=https:%2F%2Ftechcommunity.microsoft.com%2Ft5%2Fblogs%2Fblogworkflowpage%2Fblog-id%2FMicrosoftDefenderCloudBlog%2Farticle-id%2F1269). 
++**Severity**: High ++### [EDR solution should be installed on GCP Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/68e595c1-a031-4354-b37c-4bdf679732f1) ++**Description**: To protect virtual machines, install an Endpoint Detection and Response (EDR) solution. EDRs help prevent, detect, investigate, and respond to advanced threats. Use Microsoft Defender for Servers to deploy Microsoft Defender for Endpoint. If resource is classified as "Unhealthy", it doesn't have a supported EDR solution installed. If you have an EDR solution installed which isn't discoverable by this recommendation, you can exempt it. ++This agentless endpoint recommendation is available if you have Defender for Servers Plan 2 or the Defender CSPM plan. [Learn more](endpoint-detection-response.md) about agentless endpoint protection recommendations. ++- These new agentless endpoint recommendations support Azure and multicloud machines. On-premises servers aren't supported. +- These new agentless endpoint recommendations replace existing recommendations [Endpoint protection should be installed on your machines (preview)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/4fb67663-9ab9-475d-b026-8c544cced439) and [Endpoint protection health issues should be resolved on your machines (preview)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/37a3689a-818e-4a0e-82ac-b1392b9bb000). +- These older recommendations use the MMA/AMA agent and will be replaced as the agents are [phased out in Defender for Servers](https://techcommunity.microsoft.com/t5/user/ssoregistrationpage?dest_url=https:%2F%2Ftechcommunity.microsoft.com%2Ft5%2Fblogs%2Fblogworkflowpage%2Fblog-id%2FMicrosoftDefenderCloudBlog%2Farticle-id%2F1269). ++**Severity**: High +++### [Ensure 'Block Project-wide SSH keys' is enabled for VM instances](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/00f8a6a6-cf69-4c11-822e-3ebf4910e545) ++**Description**: It's recommended to use Instance specific SSH key(s) instead of using common/shared project-wide SSH key(s) to access Instances. +Project-wide SSH keys are stored in Compute/Project-meta-data. Project wide SSH keys can be used to log in into all the instances within project. Using project-wide SSH keys eases the SSH key management but if compromised, poses the security risk that can affect all the instances within project. + It's recommended to use Instance specific SSH keys that can limit the attack surface if the SSH keys are compromised. ++**Severity**: Medium ++### [Ensure Compute instances are launched with Shielded VM enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1a4b3b3a-7de9-4aa4-a29b-580d59b43f79) ++**Description**: To defend against advanced threats and ensure that the boot loader and firmware on your VMs are signed and untampered, it's recommended that Compute instances are launched with Shielded VM enabled. +Shielded VMs are VMs on Google Cloud Platform hardened by a set of security controls that help defend against ```rootkits``` and ```bootkits```. +Shielded VM offers verifiable integrity of your Compute Engine VM instances, so you can be confident your instances haven't been compromised by boot- or kernel-level malware or rootkits. 
+Shielded VM's verifiable integrity is achieved through the use of Secure Boot, virtual trusted platform module (vTPM)-enabled Measured Boot, and integrity monitoring. +Shielded VM instances run firmware that is signed and verified using Google's Certificate Authority, ensuring that the instance's firmware is unmodified and establishing the root of trust for Secure Boot. +Integrity monitoring helps you understand and make decisions about the state of your VM instances, and the Shielded VM vTPM enables Measured Boot by performing the measurements needed to create a known good boot baseline, called the integrity policy baseline. +The integrity policy baseline is used for comparison with measurements from subsequent VM boots to determine if anything has changed. +Secure Boot helps ensure that the system only runs authentic software by verifying the digital signature of all boot components, and halting the boot process if signature verification fails. ++**Severity**: High ++### [Ensure 'Enable connecting to serial ports' is not enabled for VM Instance](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/7e060336-2c9e-4289-a2a6-8d301bad47bb) ++**Description**: A virtual machine instance has four virtual serial ports. Interacting with a serial port is similar to using a terminal window, in that input and output is entirely in text mode and there's no graphical interface or mouse support. +The instance's operating system, BIOS, and other system-level entities often write output to the serial ports, and can accept input such as commands or answers to prompts. +Typically, these system-level entities use the first serial port (port 1), which is often referred to as the serial console. +The interactive serial console doesn't support IP-based access restrictions such as IP allowlists. If you enable the interactive serial console on an instance, clients can attempt to connect to that instance from any IP address, and anybody who knows the correct SSH key, username, project ID, zone, and instance name can connect to it. +Therefore, interactive serial console support should be disabled. ++**Severity**: Medium ++### [Ensure 'log_duration' database flag for Cloud SQL PostgreSQL instance is set to 'on'](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/272820a7-06ce-44b3-8318-4ec1f82237dc) ++**Description**: Enabling the log_duration setting causes the duration of each completed statement to be logged. + This doesn't log the text of the query and thus behaves differently from the log_min_duration_statement flag. + This parameter can't be changed after session start. + Monitoring the time taken to execute queries can be crucial in identifying any resource-hogging queries and assessing the performance of the server. + Further steps such as load balancing and use of optimized queries can be taken to ensure the performance and stability of the server. + This recommendation is applicable to PostgreSQL database instances.
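The logging flags covered by this and the following recommendations can be audited from the instance settings that the Cloud SQL Admin API returns (`settings.databaseFlags`). The sketch below is illustrative only; the example instance body and the expected values are assumptions to adapt to your own logging policy.

```python
# Expected values are illustrative; align them with your organization's policy.
EXPECTED_FLAGS = {
    "log_duration": "on",
    "log_checkpoints": "on",
    "log_lock_waits": "on",
    "log_executor_stats": "off",
    "log_parser_stats": "off",
    "log_planner_stats": "off",
    "log_statement_stats": "off",
}

def non_compliant_flags(instance: dict) -> dict:
    """Map each expected flag to its current value when it doesn't match."""
    current = {
        flag["name"]: flag["value"]
        for flag in instance.get("settings", {}).get("databaseFlags", [])
    }
    return {
        name: current.get(name)
        for name, expected in EXPECTED_FLAGS.items()
        if current.get(name) != expected
    }

# Hypothetical instance body, shaped like an instances.get response.
example_instance = {
    "name": "pg-instance-1",
    "settings": {"databaseFlags": [{"name": "log_duration", "value": "off"}]},
}
print(non_compliant_flags(example_instance))
```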
++**Severity**: Low ++### [Ensure 'log_executor_stats' database flag for Cloud SQL PostgreSQL instance is set to 'off'](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/19711549-76eb-4f1f-b43b-b1048e66c1f0) ++**Description**: The PostgreSQL executor is responsible for executing the plan handed over by the PostgreSQL planner. + The executor processes the plan recursively to extract the required set of rows. + The "log_executor_stats" flag controls the inclusion of PostgreSQL executor performance statistics in the PostgreSQL logs for each query. + The "log_executor_stats" flag enables a crude profiling method for logging PostgreSQL executor performance statistics, which, even though it can be useful for troubleshooting, might significantly increase the number of logs and add performance overhead. + This recommendation is applicable to PostgreSQL database instances. ++**Severity**: Low ++### [Ensure 'log_min_error_statement' database flag for Cloud SQL PostgreSQL instance is set to 'Error' or stricter](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/50a1058e-925b-4998-9d93-5eaa8f7021a3) ++**Description**: The "log_min_error_statement" flag defines the minimum message severity level that is considered as an error statement. + Messages for error statements are logged with the SQL statement. + Valid values include "DEBUG5," "DEBUG4," "DEBUG3," "DEBUG2," "DEBUG1," "INFO," "NOTICE," "WARNING," "ERROR," "LOG," "FATAL," and "PANIC." + Each severity level includes the subsequent levels mentioned above. + Ensure a value of ERROR or stricter is set. + Auditing helps in troubleshooting operational problems and also permits forensic analysis. + If "log_min_error_statement" isn't set to the correct value, messages might not be classified as error messages appropriately. + Considering general log messages as error messages would make it difficult to find actual errors, while considering only stricter severity levels as error messages might skip actual errors to log their SQL statements. + The "log_min_error_statement" flag should be set to "ERROR" or stricter. + This recommendation is applicable to PostgreSQL database instances. ++**Severity**: Low ++### [Ensure 'log_parser_stats' database flag for Cloud SQL PostgreSQL instance is set to 'off'](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/a6efc275-b1c1-4003-8e85-2f30b2eb56e6) ++**Description**: The PostgreSQL planner/optimizer is responsible for parsing and verifying the syntax of each query received by the server. + If the syntax is correct, a "parse tree" is built up; otherwise, an error is generated. + The "log_parser_stats" flag controls the inclusion of parser performance statistics in the PostgreSQL logs for each query. + The "log_parser_stats" flag enables a crude profiling method for logging parser performance statistics, which, even though it can be useful for troubleshooting, might significantly increase the number of logs and add performance overhead. + This recommendation is applicable to PostgreSQL database instances. ++**Severity**: Low ++### [Ensure 'log_planner_stats' database flag for Cloud SQL PostgreSQL instance is set to 'off'](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/7d87879a-d498-4e61-b552-b34463f87f83) ++**Description**: The same SQL query can be executed in multiple ways and still produce the same results.
+ The PostgreSQL planner/optimizer is responsible for creating an optimal execution plan for each query. + The "log_planner_stats" flag controls the inclusion of PostgreSQL planner performance statistics in the PostgreSQL logs for each query. + The "log_planner_stats" flag enables a crude profiling method for logging PostgreSQL planner performance statistics, which, even though it can be useful for troubleshooting, might significantly increase the number of logs and add performance overhead. + This recommendation is applicable to PostgreSQL database instances. ++**Severity**: Low ++### [Ensure 'log_statement_stats' database flag for Cloud SQL PostgreSQL instance is set to 'off'](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c36e73b7-ee30-4684-a1ad-2b878d2b10bf) ++**Description**: The "log_statement_stats" flag controls the inclusion of end-to-end performance statistics of a SQL query in the PostgreSQL logs for each query. + This can't be enabled with other module statistics (*log_parser_stats*, *log_planner_stats*, *log_executor_stats*). + The "log_statement_stats" flag enables a crude profiling method for logging end-to-end performance statistics of a SQL query. + This can be useful for troubleshooting but might increase the number of logs significantly and have performance overhead. + This recommendation is applicable to PostgreSQL database instances. ++**Severity**: Low ++### [Ensure that Compute instances do not have public IP addresses](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/8bdd13ad-a9d2-4910-8b06-9c4cddb55abb) ++**Description**: Compute instances shouldn't be configured to have external (public) IP addresses. +To reduce your attack surface, instances should instead be configured behind load balancers, to minimize the instance's exposure to the internet. +Instances created by GKE should be excluded because some of them have external IP addresses and can't be changed by editing the instance settings. +These VMs have names that start with ```gke-``` and are labeled ```goog-gke-node```. ++**Severity**: High ++### [Ensure that instances are not configured to use the default service account](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/a107c44c-75e4-4607-b1b0-cd5cfcf249e0) ++**Description**: It's recommended to configure your instance to not use the default Compute Engine service account. +The default Compute Engine service account has the Editor role on the project, which allows read and write access to most Google Cloud services. +To defend against privilege escalations if your VM is compromised, and to prevent an attacker from gaining access to all of your projects, it's recommended to not use the default Compute Engine service account. +Instead, you should create a new service account and assign only the permissions needed by your instance. +The default Compute Engine service account is named `[PROJECT_NUMBER]-compute@developer.gserviceaccount.com`. +VMs created by GKE should be excluded. These VMs have names that start with ```gke-``` and are labeled ```goog-gke-node```.
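As a sketch of how this condition can be evaluated against the instance resource returned by the Compute Engine API (`instances.get`), the check below flags instances whose attached service account is the project's default Compute Engine service account, excluding GKE-created nodes as described above. The example instance body is hypothetical.

```python
DEFAULT_SA_SUFFIX = "-compute@developer.gserviceaccount.com"

def uses_default_service_account(instance: dict) -> bool:
    """True if the instance runs as the default Compute Engine service account."""
    if instance.get("name", "").startswith("gke-"):
        return False  # GKE-created nodes are excluded by this recommendation
    return any(
        account.get("email", "").endswith(DEFAULT_SA_SUFFIX)
        for account in instance.get("serviceAccounts", [])
    )

# Hypothetical instance body, shaped like an instances.get response.
example_instance = {
    "name": "web-1",
    "serviceAccounts": [{"email": "123456789012-compute@developer.gserviceaccount.com"}],
}
print(uses_default_service_account(example_instance))  # True
```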
++**Severity**: High ++### [Ensure that instances are not configured to use the default service account with full access to all Cloud APIs](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/a8c1fcf1-ca66-4fc1-b5e6-51d7f4f76782) ++**Description**: To support the principle of least privilege and prevent potential privilege escalation, it's recommended that instances aren't assigned the default service account "Compute Engine default service account" with the scope "Allow full access to all Cloud APIs." +Along with the ability to optionally create, manage, and use user-managed custom service accounts, Google Compute Engine provides the default service account "Compute Engine default service account" for an instance to access necessary cloud services. ++The "Project Editor" role is assigned to the "Compute Engine default service account," so this service account has almost all capabilities over all cloud services except billing. +However, when the "Compute Engine default service account" is assigned to an instance, it can operate in one of three scopes. ++- Allow default access: Allows only the minimum access required to run an instance (least privilege). +- Allow full access to all Cloud APIs: Allows full access to all the cloud APIs/services (too much access). +- Set access for each API: Allows the instance administrator to choose only those APIs that are needed to perform the specific business functionality expected of the instance. ++When an instance is configured with the "Compute Engine default service account" and the scope "Allow full access to all Cloud APIs," then, depending on the IAM roles assigned to the users accessing the instance, a user might be able to perform cloud operations or API calls that they aren't supposed to perform, leading to a successful privilege escalation. ++VMs created by GKE should be excluded. These VMs have names that start with ```gke-``` and are labeled ```goog-gke-node```. ++**Severity**: Medium ++### [Ensure that IP forwarding is not enabled on Instances](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/0ba588a6-4539-4e67-bc62-d7b2b51300fb) ++**Description**: A Compute Engine instance can't forward a packet unless the source IP address of the packet matches the IP address of the instance. Similarly, GCP won't deliver a packet whose destination IP address is different than the IP address of the instance receiving the packet. + However, both capabilities are required if you want to use instances to help route packets. +Forwarding of data packets should be disabled to prevent data loss or information disclosure. +To enable this source and destination IP check, disable the canIpForward field, which allows an instance to send and receive packets with nonmatching destination or source IPs. ++**Severity**: Medium ++### [Ensure that the 'log_checkpoints' database flag for Cloud SQL PostgreSQL instance is set to 'on'](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/a2404629-0132-4ab3-839e-8389dbe9fe98) ++**Description**: Ensure that the log_checkpoints database flag for the Cloud SQL PostgreSQL instance is set to on.
+Enabling log_checkpoints causes checkpoints and restart points to be logged in the server log. Some statistics are included in the log messages, including the number of buffers written and the time spent writing them. + This parameter can only be set in the postgresql.conf file or on the server command line. This recommendation is applicable to PostgreSQL database instances. ++**Severity**: Low ++### [Ensure that the 'log_lock_waits' database flag for Cloud SQL PostgreSQL instance is set to 'on'](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/8191f530-fde7-4177-827a-43ce0f69ffe7) ++**Description**: Enabling the "log_lock_waits" flag for a PostgreSQL instance creates a log for any session waits that take longer than the allotted "deadlock_timeout" time to acquire a lock. + The deadlock timeout defines the time to wait on a lock before checking for any conditions. Frequent run overs on deadlock timeout can be an indication of an underlying issue. + Logging such waits on locks by enabling the log_lock_waits flag can be used to identify poor performance due to locking delays or if a specially crafted SQL is attempting to starve resources through holding locks for excessive amounts of time. + This recommendation is applicable to PostgreSQL database instances. ++**Severity**: Low ++### [Ensure that the 'log_min_duration_statement' database flag for Cloud SQL PostgreSQL instance is set to '-1'](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1c9e237b-419f-4e73-b43a-94b5863dd73e) ++**Description**: The "log_min_duration_statement" flag defines the minimum amount of execution time of a statement in milliseconds where the total duration of the statement is logged. Ensure that "log_min_duration_statement" is disabled, that is, a value of -1 is set. + Logging SQL statements might include sensitive information that shouldn't be recorded in logs. This recommendation is applicable to PostgreSQL database instances. ++**Severity**: Low ++### [Ensure that the 'log_min_messages' database flag for Cloud SQL PostgreSQL instance is set appropriately](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/492fed4e-1871-4c12-948d-074ee0f07559) ++**Description**: The "log_min_error_statement" flag defines the minimum message severity level that is considered as an error statement. + Messages for error statements are logged with the SQL statement. + Valid values include "DEBUG5," "DEBUG4," "DEBUG3," "DEBUG2," "DEBUG1," "INFO," "NOTICE," "WARNING," "ERROR," "LOG," "FATAL," and "PANIC." + Each severity level includes the subsequent levels mentioned above. + To effectively turn off logging failing statements, set this parameter to PANIC. + ERROR is considered the best practice setting. Changes should only be made in accordance with the organization's logging policy. +Auditing helps in troubleshooting operational problems and also permits forensic analysis. + If "log_min_error_statement" isn't set to the correct value, messages might not be classified as error messages appropriately. + Considering general log messages as error messages would make it difficult to find actual errors, while considering only stricter severity levels as error messages might skip actual errors to log their SQL statements. + The "log_min_error_statement" flag should be set in accordance with the organization's logging policy. + This recommendation is applicable to PostgreSQL database instances. 
++**Severity**: Low ++### [Ensure that the 'log_temp_files' database flag for Cloud SQL PostgreSQL instance is set to '0'](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/29622fc0-14dc-4d65-a5a8-e9a39ffc4b62) ++**Description**: PostgreSQL can create a temporary file for actions such as sorting, hashing, and temporary query results when these operations exceed "work_mem." + The "log_temp_files" flag controls the logging of temporary file names and sizes when the files are deleted. + Configuring "log_temp_files" to 0 causes all temporary file information to be logged, while positive values log only files whose size is greater than or equal to the specified number of kilobytes. + A value of "-1" disables temporary file information logging. + If temporary files aren't all logged, it might be more difficult to identify potential performance issues that might be due to either poor application coding or deliberate resource starvation attempts. ++**Severity**: Low ++### [Ensure VM disks for critical VMs are encrypted with Customer-Supplied Encryption Key](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/6ca40f30-2508-4c90-85b6-36564b909364) ++**Description**: Customer-Supplied Encryption Keys (CSEK) are a feature in Google Cloud Storage and Google Compute Engine. + By default, Google Compute Engine encrypts all data at rest, and Compute Engine handles and manages this encryption for you without any additional actions on your part. + However, if you want to control and manage this encryption yourself, you can provide your own encryption keys. +If you provide your own encryption keys, Compute Engine uses your key to protect the Google-generated keys used to encrypt and decrypt your data. +Only users who can provide the correct key can use resources protected by a customer-supplied encryption key. +Google doesn't store your keys on its servers and can't access your protected data unless you provide the key. +This also means that if you forget or lose your key, there's no way for Google to recover the key or to recover any data encrypted with the lost key. +At least business-critical VMs should have VM disks encrypted with CSEK. ++**Severity**: Medium ++### [GCP projects should have Azure Arc auto provisioning enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1716d754-8d50-4b90-87b6-0404cad9b4e3) ++**Description**: For full visibility of the security content from Microsoft Defender for servers, GCP VM instances should be connected to Azure Arc. To ensure that all eligible VM instances automatically receive Azure Arc, enable autoprovisioning from Defender for Cloud at the GCP project level. Learn more about [Azure Arc](../azure-arc/servers/overview.md), and [Microsoft Defender for Servers](plan-defender-for-servers.md).
++**Severity**: High ++### [GCP VM instances should be connected to Azure Arc](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/9bbe2f0f-d6c6-48e8-b4d0-cf25d2c50206) ++**Description**: Connect your GCP Virtual Machines to Azure Arc in order to have full visibility to Microsoft Defender for Servers security content. Learn more about [Azure Arc](../azure-arc/index.yml), and about [Microsoft Defender for Servers](plan-defender-for-servers.md) on hybrid-cloud environment. ++**Severity**: High ++### [GCP VM instances should have OS config agent installed](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/20622d8c-2a4f-4a03-9896-a5f2f7ede717) ++**Description**: To receive the full Defender for Servers capabilities using Azure Arc autoprovisioning, GCP VMs should have OS config agent enabled. ++**Severity**: High ++### [GKE cluster's auto repair feature should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/6aeb69dc-0d01-4228-88e9-7e610891d5dd) ++**Description**: This recommendation evaluates the management property of a node pool for the key-value pair, 'key': 'autoRepair,' 'value': true. ++**Severity**: Medium ++### [GKE cluster's auto upgrade feature should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1680e053-2e9b-4e77-a1c7-793ae286155e) ++**Description**: This recommendation evaluates the management property of a node pool for the key-value pair, 'key': 'autoUpgrade,' 'value': true. ++**Severity**: High ++### [Monitoring on GKE clusters should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/6a7b7361-5100-4a8c-b23e-f712d7dad39b) ++**Description**: This recommendation evaluates whether the monitoringService property of a cluster contains the location Cloud Monitoring should use to write metrics. ++**Severity**: Medium +++## Related content ++- [learn about security recommendations](security-policy-concept.md) +- [Review security recommendations](review-security-recommendations.md) |
defender-for-cloud | Recommendations Reference Container | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference-container.md | + + Title: Reference table for all container security recommendations in Microsoft Defender for Cloud +description: This article lists all Microsoft Defender for Cloud container security recommendations that help you harden and protect your resources. +++ Last updated : 03/13/2024+++ai-usage: ai-assisted +++# Container security recommendations ++This article lists all the container security recommendations you might see in Microsoft Defender for Cloud. ++The recommendations that appear in your environment are based on the resources that you're protecting and on your customized configuration. +++> [!TIP] +> If a recommendation description says *No related policy*, usually it's because that recommendation is dependent on a different recommendation. +> +> For example, the recommendation *Endpoint protection health failures should be remediated* relies on the recommendation that checks whether an endpoint protection solution is installed (*Endpoint protection solution should be installed*). The underlying recommendation *does* have a policy. +> Limiting policies to only foundational recommendations simplifies policy management. +++++## Azure container recommendations ++++### [Azure Arc-enabled Kubernetes clusters should have the Azure Policy extension installed](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/0642d770-b189-42ef-a2ce-9dcc3ec6c169) ++**Description**: Azure Policy extension for Kubernetes extends [Gatekeeper](https://github.com/open-policy-agent/gatekeeper) v3, an admission controller webhook for [Open Policy Agent](https://www.openpolicyagent.org/) (OPA), to apply at-scale enforcements and safeguards on your clusters in a centralized, consistent manner. +(No related policy) ++**Severity**: High ++**Type**: Control plane ++### [Azure Arc-enabled Kubernetes clusters should have the Defender extension installed](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/3ef9848c-c2c8-4ff3-8b9c-4c8eb8ddfce6) ++**Description**: Defender's extension for Azure Arc provides threat protection for your Arc-enabled Kubernetes clusters. The extension collects data from all control plane (master) nodes in the cluster and sends it to the [Microsoft Defender for Kubernetes backend](defender-for-containers-enable.md?pivots=defender-for-container-arc&tabs=aks-deploy-portal) in the cloud for further analysis. +(No related policy) ++**Severity**: High ++**Type**: Control plane ++### [Azure Kubernetes Service clusters should have Defender profile enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/56a83a6e-c417-42ec-b567-1e6fcb3d09a9) ++**Description**: Microsoft Defender for Containers provides cloud-native Kubernetes security capabilities including environment hardening, workload protection, and run-time protection. + When you enable the SecurityProfile.AzureDefender profile on your Azure Kubernetes Service cluster, an agent is deployed to your cluster to collect security event data. +Learn more in [Introduction to Microsoft Defender for Containers](defender-for-containers-introduction.md). 
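One hedged way to spot clusters without the profile is to inspect the managed cluster resource returned by the AKS REST API; recent API versions expose the setting under `securityProfile.defender`. The property path and the sample body below are assumptions for illustration, not the exact logic Defender for Cloud uses.

```python
def defender_profile_enabled(managed_cluster: dict) -> bool:
    """True when Defender security monitoring is enabled on an AKS cluster resource."""
    defender = (
        managed_cluster.get("properties", {})
        .get("securityProfile", {})
        .get("defender", {})
    )
    return bool(defender.get("securityMonitoring", {}).get("enabled"))

# Hypothetical resource body for illustration.
example_cluster = {
    "properties": {"securityProfile": {"defender": {"securityMonitoring": {"enabled": False}}}}
}
print(defender_profile_enabled(example_cluster))  # False
```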
+(No related policy) ++**Severity**: High ++**Type**: Control plane ++### [Azure Kubernetes Service clusters should have the Azure Policy add-on for Kubernetes installed](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/08e628db-e2ed-4793-bc91-d13e684401c3) ++**Description**: Azure Policy add-on for Kubernetes extends [Gatekeeper](https://github.com/open-policy-agent/gatekeeper) v3, an admission controller webhook for [Open Policy Agent](https://www.openpolicyagent.org/) (OPA), to apply at-scale enforcements and safeguards on your clusters in a centralized, consistent manner. +Defender for Cloud requires the Add-on to audit and enforce security capabilities and compliance inside your clusters. [Learn more](../governance/policy/concepts/policy-for-kubernetes.md). +Requires Kubernetes v1.14.0 or later. +(Related policy: [Azure Policy Add-on for Kubernetes service (AKS) should be installed and enabled on your clusters](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f0a15ec92-a229-4763-bb14-0ea34a568f8d)). ++**Severity**: High ++**Type**: Control plane ++### [Azure registry container images should have vulnerabilities resolved (powered by Microsoft Defender Vulnerability Management)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c0b7cfc6-3172-465a-b378-53c7ff2cc0d5) ++**Description**: Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. Resolving vulnerabilities can greatly improve your security posture, ensuring images are safe to use prior to deployment. +(Related policy: [Vulnerabilities in Azure Container Registry images should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f5f0f936f-2f01-4bf5-b6be-d423792fa562)). ++**Severity**: High ++**Type**: Vulnerability Assessment +++### [Azure registry container images should have vulnerabilities resolved (powered by Qualys)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dbd0cb49-b563-45e7-9724-889e799fa648) ++**Description**: Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. +(Related policy: [Vulnerabilities in Azure Container Registry images should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f5f0f936f-2f01-4bf5-b6be-d423792fa562)). ++**Assessment key**: dbd0cb49-b563-45e7-9724-889e799fa648 ++**Type**: Vulnerability Assessment +++### [Azure running container images should have vulnerabilities resolved (powered by Microsoft Defender Vulnerability Management)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5) ++**Description**: Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. This recommendation provides visibility to vulnerable images currently running in your Kubernetes clusters. 
Remediating vulnerabilities in container images that are currently running is key to improving your security posture, significantly reducing the attack surface for your containerized workloads. ++**Severity**: High ++**Type**: Vulnerability Assessment +++### [Azure running container images should have vulnerabilities resolved - (powered by Qualys)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/41503391-efa5-47ee-9282-4eff6131462c) ++**Description**: Container image vulnerability assessment scans container images running on your Kubernetes clusters for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. +(No related policy) ++**Assessment key**: 41503391-efa5-47ee-9282-4eff6131462c ++**Type**: Vulnerability Assessment +++### [Container CPU and memory limits should be enforced](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/405c9ae6-49f9-46c4-8873-a86690f27818) ++**Description**: Enforcing CPU and memory limits prevents resource exhaustion attacks (a form of denial of service attack). ++We recommend setting limits for containers to ensure the runtime prevents the container from using more than the configured resource limit. ++(Related policy: [Ensure container CPU and memory resource limits do not exceed the specified limits in Kubernetes cluster](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fe345eecc-fa47-480f-9e88-67dcc122b164)). ++**Severity**: Medium ++**Type**: Kubernetes Data plane ++### [Container images should be deployed from trusted registries only](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/8d244d29-fa00-4332-b935-c3a51d525417) ++**Description**: +Images running on your Kubernetes cluster should come from known and monitored container image registries. Trusted registries reduce your cluster's exposure risk by limiting the potential for the introduction of unknown vulnerabilities, security issues, and malicious images. ++(Related policy: [Ensure only allowed container images in Kubernetes cluster](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2ffebd0533-8e55-448f-b837-bd0e06f16469)). ++**Severity**: High ++**Type**: Kubernetes Data plane ++### [[Preview] Container images in Azure registry should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/33422d8f-ab1e-42be-bc9a-38685bb567b9) ++**Description**: Defender for Cloud scans your registry images for known vulnerabilities (CVEs) and provides detailed findings for each scanned image. Scanning and remediating vulnerabilities for container images in the registry helps maintain a secure and reliable software supply chain, reduces the risk of security incidents, and ensures compliance with industry standards. 
+ +Recommendation [Azure registry container images should have vulnerabilities resolved (powered by Microsoft Defender Vulnerability Management)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c0b7cfc6-3172-465a-b378-53c7ff2cc0d5) will be removed when the new recommendation is generally available. ++The new recommendation is in preview and not used for secure score calculation. ++**Severity**: High ++**Type**: Vulnerability Assessment ++### [(Enable if required) Container registries should be encrypted with a customer-managed key (CMK)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/af560c4d-9c05-e073-b9f1-f7a94958ff25) ++**Description**: Recommendations to use customer-managed keys for encryption of data at rest are not assessed by default, but are available to enable for applicable scenarios. Data is encrypted automatically using platform-managed keys, so the use of customer-managed keys should only be applied when obligated by compliance or restrictive policy requirements. +To enable this recommendation, navigate to your Security Policy for the applicable scope, and update the *Effect* parameter for the corresponding policy to audit or enforce the use of customer-managed keys. Learn more in [Manage security policies](tutorial-security-policy.md). +Use customer-managed keys to manage the encryption at rest of the contents of your registries. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys (CMK) are commonly required to meet regulatory compliance standards. CMKs enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more about CMK encryption at <https://aka.ms/acr/CMK>. +(Related policy: [Container registries should be encrypted with a customer-managed key (CMK)](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f5b9159ae-1701-4a6f-9a7a-aa9c8ddd0580)). ++**Severity**: Low ++**Type**: Control plane ++### [Container registries should not allow unrestricted network access](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/9b828565-a0ed-61c2-6bf3-1afc99a9b2ca) ++**Description**: Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific public IP addresses or address ranges. If your registry doesn't have an IP/firewall rule or a configured virtual network, it will appear in the list of unhealthy resources. Learn more about Container Registry network rules here: <https://aka.ms/acr/portal/public-network> and here <https://aka.ms/acr/vnet>. +(Related policy: [Container registries should not allow unrestricted network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fd0793b48-0edc-4296-a390-4c75d1bdfd71)).
++**Severity**: Medium ++**Type**: Control plane +++### [Container registries should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/13e7d036-6903-821c-6018-962938929bf0) ++**Description**: Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your container registries instead of the entire service, you'll also be protected against data leakage risks. Learn more at: <https://aka.ms/acr/private-link>. +(Related policy: [Container registries should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fe8eef0a8-67cf-4eb4-9386-14b0e78733d4)). ++**Severity**: Medium ++**Type**: Control plane +### [[Preview] Containers running in Azure should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e9acaf48-d2cf-45a3-a6e7-3caa2ef769e0) ++**Description**: Defender for Cloud creates an inventory of all container workloads currently running in your Kubernetes clusters, and provides vulnerability reports for those workloads by matching the images and the vulnerability reports created for the registry images. Scanning and remediating vulnerabilities of container workloads is critical to ensure a robust and secure software supply chain, reduce the risk of security incidents, and ensures compliance with industry standards. ++Recommendation [Azure running container images should have vulnerabilities resolved (powered by Microsoft Defender Vulnerability Management)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5) will be removed when the new recommendation is generally available. ++The new recommendation is in preview and not used for secure score calculation. ++**Severity**: High ++**Type**: Vulnerability Assessment +++### [Containers sharing sensitive host namespaces should be avoided](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/802c0637-5a8c-4c98-abd7-7c96d89d6010) ++**Description**: To protect against privilege escalation outside the container, avoid pod access to sensitive host namespaces (host process ID and host IPC) in a Kubernetes cluster. +(Related policy: [Kubernetes cluster containers should not share host process ID or host IPC namespace](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f47a1ee2f-2a2a-4576-bf2a-e0e36709c2b8)). ++**Severity**: Medium ++**Type**: Kubernetes data plane ++### [Containers should only use allowed AppArmor profiles](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/86f91051-9d6a-47c3-a07f-bd14cb214b45) ++**Description**: Containers running on Kubernetes clusters should be limited to allowed AppArmor profiles only. +AppArmor (Application Armor) is a Linux security module that protects an operating system and its applications from security threats. To use it, a system administrator associates an AppArmor security profile with each program. 
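
As a rough sketch of how a profile is associated with a container in a pod spec, the manifest below keeps a container under the runtime's default AppArmor profile. The pod name, container name, and image are placeholders, and newer Kubernetes versions also expose an `appArmorProfile` field under `securityContext` as an alternative to the annotation shown here.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: apparmor-demo                      # hypothetical pod name
  annotations:
    # Constrain the container named "app" to the container runtime's default
    # AppArmor profile; use localhost/<profile-name> for a custom profile
    # that has been loaded on the node.
    container.apparmor.security.beta.kubernetes.io/app: runtime/default
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0  # placeholder image
```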
+(Related policy: [Kubernetes cluster containers should only use allowed AppArmor profiles](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f511f5417-5d12-434d-ab2e-816901e72a5e)). ++**Severity**: High ++**Type**: Kubernetes data plane +++### [Container with privilege escalation should be avoided](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/43dc2a2e-ce69-4d42-923e-ab7d136f2cfe) ++**Description**: Containers shouldn't run with privilege escalation to root in your Kubernetes cluster. +The AllowPrivilegeEscalation attribute controls whether a process can gain more privileges than its parent process. +(Related policy: [Kubernetes clusters should not allow container privilege escalation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f1c6e92c9-99f0-4e55-9cf2-0c234dc48f99)). ++**Severity**: Medium ++**Type**: Kubernetes data plane ++++### [Diagnostic logs in Kubernetes services should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/bb318338-de6a-42ff-8428-8274c897d564) ++**Description**: Enable diagnostic logs in your Kubernetes services and retain them up to a year. This enables you to recreate activity trails for investigation purposes when a security incident occurs. +(No related policy) ++**Severity**: Low ++**Type**: Control plane ++### [Immutable (read-only) root filesystem should be enforced for containers](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/27d6f0e9-b4d5-468b-ae7e-03d5473fd864) ++**Description**: Containers should run with a read only root file system in your Kubernetes cluster. Immutable filesystem protects containers from changes at run-time with malicious binaries being added to PATH. +(Related policy: [Kubernetes cluster containers should run with a read only root file system](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fdf49d893-a74c-421d-bc95-c663042e5b80)). ++**Severity**: Medium ++**Type**: Kubernetes data plane ++### [Kubernetes API server should be configured with restricted access](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1a2b5b4c-f80d-46e7-ac81-b51a9fb363de) ++**Description**: To ensure that only applications from allowed networks, machines, or subnets can access your cluster, restrict access to your Kubernetes API server. You can restrict access by defining authorized IP ranges, or by setting up your API servers as private clusters as explained in [Create a private Azure Kubernetes Service cluster](../aks/private-clusters.md). +(Related policy: [Authorized IP ranges should be defined on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f0e246bcf-5f6f-4f87-bc6f-775d4712c7ea)). ++**Severity**: High ++**Type**: Control plane ++### [Kubernetes clusters should be accessible only over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c6d87087-9ebe-b31f-b452-0bf3bbbaccd2) ++**Description**: Use of HTTPS ensures authentication and protects data in transit from network layer eavesdropping attacks. 
This capability is currently generally available for Kubernetes Service (AKS), and in preview for AKS Engine and Azure Arc-enabled Kubernetes. For more info, visit <https://aka.ms/kubepolicydoc>. +(Related policy: [Enforce HTTPS ingress in Kubernetes cluster](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f1a5b4dca-0b6f-4cf5-907c-56316bc1bf3d)). ++**Severity**: High ++**Type**: Kubernetes Data plane ++### [Kubernetes clusters should disable automounting API credentials](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/32060ac3-f17f-4848-db8e-e7cf2c9a53eb) ++**Description**: Disable automounting API credentials to prevent a potentially compromised Pod resource from running API commands against Kubernetes clusters. For more information, see <https://aka.ms/kubepolicydoc>. +(Related policy: [Kubernetes clusters should disable automounting API credentials](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f423dd1ba-798e-40e4-9c4d-b6902674b423)). ++**Severity**: High ++**Type**: Kubernetes Data plane ++### [Kubernetes clusters should not grant CAPSYSADMIN security capabilities](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/aba14f78-27c5-af84-848e-9105d18dfd92) ++**Description**: To reduce the attack surface of your containers, restrict CAP_SYS_ADMIN Linux capabilities. For more information, see <https://aka.ms/kubepolicydoc>. +(No related policy) ++**Severity**: High ++**Type**: Kubernetes data plane ++### [Kubernetes clusters should not use the default namespace](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ff87e0b4-17df-d338-5b19-80e71e0dcc9d) ++**Description**: Prevent usage of the default namespace in Kubernetes clusters to protect against unauthorized access for ConfigMap, Pod, Secret, Service, and ServiceAccount resource types. For more information, see <https://aka.ms/kubepolicydoc>. +(Related policy: [Kubernetes clusters should not use the default namespace](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f9f061a12-e40d-4183-a00e-171812443373)). ++**Severity**: Low ++**Type**: Kubernetes data plane ++### [Least privileged Linux capabilities should be enforced for containers](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/11c95609-3553-430d-b788-fd41cde8b2db) ++**Description**: To reduce the attack surface of your containers, restrict Linux capabilities and grant specific privileges to containers without granting all the privileges of the root user. We recommend dropping all capabilities, then adding back only those that are required. +(Related policy: [Kubernetes cluster containers should only use allowed capabilities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fc26596ff-4d70-4e6a-9a30-c2506bd2f80c)).
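
For the drop-all-then-add-back pattern described in this recommendation, a minimal sketch of a container `securityContext` might look like the following. The pod name, image, and the `NET_BIND_SERVICE` capability are illustrative assumptions, not values required by the recommendation or its related policy.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: least-privilege-demo               # hypothetical pod name
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0  # placeholder image
      securityContext:
        capabilities:
          drop: ["ALL"]                    # start from no Linux capabilities
          add: ["NET_BIND_SERVICE"]        # add back only what the workload needs
```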
++**Severity**: Medium ++**Type**: Kubernetes data plane +++### [Microsoft Defender for Containers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e599a9fe-30e3-47c6-a173-8b4b6d9d3255) ++**Description**: Microsoft Defender for Containers provides hardening, vulnerability assessment and run-time protections for your Azure, hybrid, and multicloud Kubernetes environments. +You can use this information to quickly remediate security issues and improve the security of your containers. ++Remediating this recommendation will result in charges for protecting your Kubernetes clusters. If you don't have any Kubernetes clusters in this subscription, no charges will be incurred. +If you create any Kubernetes clusters on this subscription in the future, they'll automatically be protected and charges will begin at that time. +Learn more in [Introduction to Microsoft Defender for Containers](container-security.md). +(No related policy) ++**Severity**: High ++**Type**: Control plane ++### [Privileged containers should be avoided](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/5d90913f-a1c5-4429-ad54-2c6c17fb3c73) ++**Description**: To prevent unrestricted host access, avoid privileged containers whenever possible. ++Privileged containers have all of the root capabilities of a host machine. They can be used as entry points for attacks and to spread malicious code or malware to compromised applications, hosts, and networks. +(Related policy: [Do not allow privileged containers in Kubernetes cluster](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f95edb821-ddaf-4404-9732-666045e056b4)). ++**Severity**: Medium ++**Type**: Kubernetes data plane ++### [Role-Based Access Control should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/b0fdc63a-38e7-4bab-a7c4-2c2665abbaa9) ++**Description**: To provide granular filtering on the actions that users can perform, use [Role-Based Access Control (RBAC)](../aks/concepts-identity.md#azure-role-based-access-control) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. +(Related policy: [Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fac4a19c2-fa67-49b4-8ae5-0b2e78c49457)). ++**Severity**: High ++**Type**: Control plane ++### [Running containers as root user should be avoided](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/9b795646-9130-41a4-90b7-df9eae2437c8) ++**Description**: Containers shouldn't run as root users in your Kubernetes cluster. Running a process as the root user inside a container runs it as root on the host. If there's a compromise, an attacker has root in the container, and any misconfigurations become easier to exploit. +(Related policy: [Kubernetes cluster pods and containers should only run with approved user and group IDs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2ff06ddb64-5fa3-4b77-b166-acb36f7f6042)).
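
A minimal sketch of a pod that declares a non-root identity follows. The pod name, image, and UID/GID values are placeholders, not values mandated by the related policy, which checks for approved user and group IDs.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nonroot-demo                       # hypothetical pod name
spec:
  securityContext:
    runAsNonRoot: true                     # reject containers that would run as UID 0
    runAsUser: 1000                        # example non-root UID
    runAsGroup: 3000                       # example non-root GID
  containers:
    - name: app
      image: registry.example.com/app:1.0  # placeholder image
```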
++**Severity**: High ++**Type**: Kubernetes Data plane ++### [Services should listen on allowed ports only](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/add45209-73f6-4fa5-a5a5-74a451b07fbe) ++**Description**: To reduce the attack surface of your Kubernetes cluster, restrict access to the cluster by limiting service access to the configured ports. +(Related policy: [Ensure services listen only on allowed ports in Kubernetes cluster](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f233a2a17-77ca-4fb1-9b6b-69223d272a44)). ++**Severity**: Medium ++**Type**: Kubernetes data plane ++### [Usage of host networking and ports should be restricted](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ebc68898-5c0f-4353-a426-4a5f1e737b12) ++**Description**: Restrict pod access to the host network and the allowable host port range in a Kubernetes cluster. Pods created with the hostNetwork attribute enabled will share the node's network space. To prevent a compromised container from sniffing network traffic, we recommend not putting your pods on the host network. If you need to expose a container port on the node's network, and using a Kubernetes Service node port does not meet your needs, another possibility is to specify a hostPort for the container in the pod spec. +(Related policy: [Kubernetes cluster pods should only use approved host network and port range](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f82985f06-dc18-4a48-bc1c-b9f4f0098cfe)). ++**Severity**: Medium ++**Type**: Kubernetes data plane ++### [Usage of pod HostPath volume mounts should be restricted to a known list to restrict node access from compromised containers](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/f0debc84-981c-4a0d-924d-aa4bd7d55fef) ++**Description**: We recommend limiting pod HostPath volume mounts in your Kubernetes cluster to the configured allowed host paths. If a container is compromised, this limits the node access available from within it. +(Related policy: [Kubernetes cluster pod hostPath volumes should only use allowed host paths](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f098fc59e-46c7-4d99-9b16-64990e543d75)). ++**Severity**: Medium ++**Type**: Kubernetes Data plane ++++## AWS container recommendations ++### [[Preview] Container images in AWS registry should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/2a139383-ec7e-462a-90ac-b1b60e87d576) ++**Description**: Defender for Cloud scans your registry images for known vulnerabilities (CVEs) and provides detailed findings for each scanned image. Scanning and remediating vulnerabilities for container images in the registry helps maintain a secure and reliable software supply chain, reduces the risk of security incidents, and ensures compliance with industry standards.
+ +Recommendation [AWS registry container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/AwsContainerRegistryRecommendationDetailsBlade/assessmentKey/c27441ae-775c-45be-8ffa-655de37362ce) will be removed when the new recommendation is generally available. ++The new recommendation is in preview and not used for secure score calculation. ++**Severity**: High ++**Type**: Vulnerability Assessment ++### [[Preview] Containers running in AWS should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/d5d1e526-363a-4223-b860-f4b6e710859f) ++**Description**: Defender for Cloud creates an inventory of all container workloads currently running in your Kubernetes clusters and provides vulnerability reports for those workloads by matching the images and the vulnerability reports created for the registry images. Scanning and remediating vulnerabilities of container workloads is critical to ensure a robust and secure software supply chain, reduce the risk of security incidents, and ensure compliance with industry standards. ++Recommendation [AWS running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/AwsContainersRuntimeRecommendationDetailsBlade/assessmentKey/682b2595-d045-4cff-b5aa-46624eb2dd8f) will be removed when the new recommendation is generally available. ++The new recommendation is in preview and not used for secure score calculation. ++**Severity**: High ++**Type**: Vulnerability Assessment ++### [EKS clusters should grant the required AWS permissions to Microsoft Defender for Cloud](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/7d3a977e-46f1-419a-9046-4bd44db80aac) ++**Description**: Microsoft Defender for Containers provides protections for your EKS clusters. + To monitor your cluster for security vulnerabilities and threats, Defender for Containers needs permissions for your AWS account. These permissions are used to enable Kubernetes control plane logging on your cluster and establish a reliable pipeline between your cluster and Defender for Cloud's backend in the cloud. + Learn more about [Microsoft Defender for Cloud's security features for containerized environments](defender-for-containers-introduction.md). ++**Severity**: High ++### [EKS clusters should have Microsoft Defender's extension for Azure Arc installed](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/38307993-84fb-4636-8ce7-3a64466bb5cc) ++**Description**: Microsoft Defender's [cluster extension](../azure-arc/kubernetes/extensions.md) provides security capabilities for your EKS clusters. The extension collects data from a cluster and its nodes to identify security vulnerabilities and threats. + The extension works with [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md). +Learn more about [Microsoft Defender for Cloud's security features for containerized environments](defender-for-containers-introduction.md?tabs=defender-for-container-arch-aks).
++**Severity**: High ++### [Microsoft Defender for Containers should be enabled on AWS connectors](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/11d0f4af-6924-4a2e-8b66-781a4553c828) ++**Description**: Microsoft Defender for Containers provides real-time threat protection for containerized environments and generates alerts about suspicious activities. +Use this information to harden the security of Kubernetes clusters and remediate security issues. ++When you enable Microsoft Defender for Containers and deploy Azure Arc to your EKS clusters, the protections - and charges - will begin. If you don't deploy Azure Arc on a cluster, Defender for Containers won't protect it, and no charges are incurred for this Microsoft Defender plan for that cluster. ++**Severity**: High ++### Data plane recommendations ++All the [Kubernetes data plane security recommendations](kubernetes-workload-protections.md#view-and-configure-the-bundle-of-recommendations) are supported for AWS after you [enable Azure Policy for Kubernetes](kubernetes-workload-protections.md#enable-kubernetes-data-plane-hardening). +++## GCP container recommendations ++### [Advanced configuration of Defender for Containers should be enabled on GCP connectors](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/b7683ca3-3a11-49b6-b9d4-a112713edfa3) ++**Description**: Microsoft Defender for Containers provides cloud-native Kubernetes security capabilities including environment hardening, workload protection, and run-time protection. To ensure the solution is provisioned properly and the full set of capabilities is available, enable all advanced configuration settings. ++**Severity**: High +++### [[Preview] Container images in GCP registry should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/24e37609-dcf5-4a3b-b2b0-b7d76f2e4e04) ++**Description**: Defender for Cloud scans your registry images for known vulnerabilities (CVEs) and provides detailed findings for each scanned image. Scanning and remediating vulnerabilities for container images in the registry helps maintain a secure and reliable software supply chain, reduces the risk of security incidents, and ensures compliance with industry standards. ++Recommendation [GCP registry container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/GcpContainerRegistryRecommendationDetailsBlade/assessmentKey/5cc3a2c1-8397-456f-8792-fe9d0d4c9145) will be removed when the new recommendation is generally available. ++The new recommendation is in preview and not used for secure score calculation. ++**Severity**: High ++**Type**: Vulnerability Assessment ++### [[Preview] Containers running in GCP should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c7c1d31d-a604-4b86-96df-63448618e165) ++**Description**: Defender for Cloud creates an inventory of all container workloads currently running in your Kubernetes clusters and provides vulnerability reports for those workloads by matching the images and the vulnerability reports created for the registry images.
Scanning and remediating vulnerabilities of container workloads is critical to ensure a robust and secure software supply chain, reduce the risk of security incidents, and ensure compliance with industry standards. ++Recommendation [GCP running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/GcpContainersRuntimeRecommendationDetailsBlade/assessmentKey/e538731a-80c8-4317-a119-13075e002516) will be removed when the new recommendation is generally available. ++The new recommendation is in preview and not used for secure score calculation. ++**Severity**: High ++**Type**: Vulnerability Assessment ++### [GKE clusters should have Microsoft Defender's extension for Azure Arc installed](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/0faf27b6-f1d5-4f50-b22a-5d129cba0113) ++**Description**: Microsoft Defender's [cluster extension](../azure-arc/kubernetes/extensions.md) provides security capabilities for your GKE clusters. The extension collects data from a cluster and its nodes to identify security vulnerabilities and threats. + The extension works with [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md). +Learn more about [Microsoft Defender for Cloud's security features for containerized environments](defender-for-containers-introduction.md?tabs=defender-for-container-arch-aks). ++**Severity**: High ++### [GKE clusters should have the Azure Policy extension installed](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/6273e20b-8814-4fda-a297-42a70b16fcbf) ++**Description**: Azure Policy extension for Kubernetes extends [Gatekeeper](https://github.com/open-policy-agent/gatekeeper) v3, an admission controller webhook for [Open Policy Agent](https://www.openpolicyagent.org/) (OPA), to apply at-scale enforcements and safeguards on your clusters in a centralized, consistent manner. + The extension works with [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md). ++**Severity**: High ++### [Microsoft Defender for Containers should be enabled on GCP connectors](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/d42ac63d-0592-43b2-8bfa-ff9199da595e) ++**Description**: Microsoft Defender for Containers provides cloud-native Kubernetes security capabilities including environment hardening, workload protection, and run-time protection. Enable the Containers plan on your GCP connector to harden the security of Kubernetes clusters and remediate security issues. Learn more about Microsoft Defender for Containers. ++**Severity**: High ++### [GKE cluster's auto repair feature should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/6aeb69dc-0d01-4228-88e9-7e610891d5dd) ++**Description**: This recommendation evaluates the management property of a node pool for the key-value pair, ```key: autoRepair, value: true```. ++**Severity**: Medium ++### [GKE cluster's auto upgrade feature should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1680e053-2e9b-4e77-a1c7-793ae286155e) ++**Description**: This recommendation evaluates the management property of a node pool for the key-value pair, ```key: autoUpgrade, value: true```.
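
As a rough sketch of the shape these two recommendations evaluate, a GKE node pool description (for example, output along the lines of `gcloud container node-pools describe`) exposes both flags under a `management` block; the pool name below is a placeholder.

```yaml
# Excerpt of a GKE node pool definition; only the fields relevant to the
# auto repair and auto upgrade recommendations are shown.
name: default-pool
management:
  autoRepair: true     # checked by the auto repair recommendation
  autoUpgrade: true    # checked by the auto upgrade recommendation
```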
++**Severity**: High ++### [Monitoring on GKE clusters should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/6a7b7361-5100-4a8c-b23e-f712d7dad39b) ++**Description**: This recommendation evaluates whether the monitoringService property of a cluster contains the location Cloud Monitoring should use to write metrics. ++**Severity**: Medium ++### [Logging for GKE clusters should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/fa160a2c-e976-41cb-acff-1e1e3f1ed032) ++**Description**: This recommendation evaluates whether the loggingService property of a cluster contains the location Cloud Logging should use to write logs. ++**Severity**: High ++### [GKE web dashboard should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/d8fa5c03-a8e8-467b-992c-ad8b2db0f55e) ++**Description**: This recommendation evaluates the kubernetesDashboard field of the addonsConfig property for the key-value pair, 'disabled': false. ++**Severity**: High ++### [Legacy Authorization should be disabled on GKE clusters](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/bd1096e1-73cf-41ab-8f2a-257b78aed9dc) ++**Description**: This recommendation evaluates the legacyAbac property of a cluster for the key-value pair, 'enabled': true. ++**Severity**: High ++### [Control Plane Authorized Networks should be enabled on GKE clusters](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/24df9ba4-8c98-42f2-9f64-50b095eca06f) ++**Description**: This recommendation evaluates the masterAuthorizedNetworksConfig property of a cluster for the key-value pair, 'enabled': false. ++**Severity**: High ++### [GKE clusters should have alias IP ranges enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/49016ecd-d4d6-4f48-a64f-42af93e15120) ++**Description**: This recommendation evaluates whether the useIPAliases field of the ipAllocationPolicy in a cluster is set to false. ++**Severity**: Low ++### [GKE clusters should have Private clusters enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/d3e70cff-e4db-47b1-b646-0ac5ed8ada36) ++**Description**: This recommendation evaluates whether the enablePrivateNodes field of the privateClusterConfig property is set to false. ++**Severity**: High ++### [Network policy should be enabled on GKE clusters](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/fd06513a-1e03-4d40-9159-243f76dcdcb7) ++**Description**: This recommendation evaluates the networkPolicy field of the addonsConfig property for the key-value pair, 'disabled': true. ++**Severity**: Medium ++### Data plane recommendations ++All the [Kubernetes data plane security recommendations](kubernetes-workload-protections.md#view-and-configure-the-bundle-of-recommendations) are supported for GCP after you [enable Azure Policy for Kubernetes](kubernetes-workload-protections.md#enable-kubernetes-data-plane-hardening). ++## Related content ++- [Learn about security recommendations](security-policy-concept.md) +- [Review security recommendations](review-security-recommendations.md) |
defender-for-cloud | Recommendations Reference Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference-data.md | + + Title: Reference table for all data security recommendations in Microsoft Defender for Cloud +description: This article lists all Microsoft Defender for Cloud data security recommendations that help you harden and protect your resources. +++ Last updated : 03/13/2024+++ai-usage: ai-assisted +++# Data security recommendations ++This article lists all the data security recommendations you might see in Microsoft Defender for Cloud. ++The recommendations that appear in your environment are based on the resources that you're protecting and on your customized configuration. ++To learn about actions that you can take in response to these recommendations, see [Remediate recommendations in Defender for Cloud](implement-security-recommendations.md). +++> [!TIP] +> If a recommendation description says *No related policy*, usually it's because that recommendation is dependent on a different recommendation. +> +> For example, the recommendation *Endpoint protection health failures should be remediated* relies on the recommendation that checks whether an endpoint protection solution is installed (*Endpoint protection solution should be installed*). The underlying recommendation *does* have a policy. +> Limiting policies to only foundational recommendations simplifies policy management. +++++## Azure data recommendations ++### [Azure Cosmos DB should disable public network access](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/334a182c-7c2c-41bc-ae1e-55327891ab50) ++**Description**: Disabling public network access improves security by ensuring that your Cosmos DB account isn't exposed on the public internet. Creating private endpoints can limit exposure of your Cosmos DB account. [Learn more](../cosmos-db/how-to-configure-private-endpoints.md#blocking-public-network-access-during-account-creation). +(Related policy: [Azure Cosmos DB should disable public network access](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f797b37f7-06b8-444c-b1ad-fc62867f335a)). ++**Severity**: Medium ++### [(Enable if required) Azure Cosmos DB accounts should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/814df446-7128-eff0-9177-fa52ac035b74) ++**Description**: Recommendations to use customer-managed keys for encryption of data at rest are not assessed by default, but are available to enable for applicable scenarios. Data is encrypted automatically using platform-managed keys, so the use of customer-managed keys should only be applied when obligated by compliance or restrictive policy requirements. +To enable this recommendation, navigate to your Security Policy for the applicable scope, and update the *Effect* parameter for the corresponding policy to audit or enforce the use of customer-managed keys. Learn more in [Manage security policies](tutorial-security-policy.md). +Use customer-managed keys to manage the encryption at rest of your Azure Cosmos DB. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys (CMK) are commonly required to meet regulatory compliance standards. 
CMKs enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more about CMK encryption at <https://aka.ms/cosmosdb-cmk>. +(Related policy: [Azure Cosmos DB accounts should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f1f905d99-2ab7-462c-a6b0-f709acca6c8f)). ++**Severity**: Low ++++++### [(Enable if required) Azure Machine Learning workspaces should be encrypted with a customer-managed key (CMK)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/bbd14f11-6228-4588-82a4-517b8d77b23f) ++**Description**: Recommendations to use customer-managed keys for encryption of data at rest are not assessed by default, but are available to enable for applicable scenarios. Data is encrypted automatically using platform-managed keys, so the use of customer-managed keys should only be applied when obligated by compliance or restrictive policy requirements. +To enable this recommendation, navigate to your Security Policy for the applicable scope, and update the *Effect* parameter for the corresponding policy to audit or enforce the use of customer-managed keys. Learn more in [Manage security policies](tutorial-security-policy.md). +Manage encryption at rest of your Azure Machine Learning workspace data with customer-managed keys (CMK). By default, customer data is encrypted with service-managed keys, but CMKs are commonly required to meet regulatory compliance standards. CMKs enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more about CMK encryption at <https://aka.ms/azureml-workspaces-cmk>. +(Related policy: [Azure Machine Learning workspaces should be encrypted with a customer-managed key (CMK)](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fba769a63-b8cc-4b2d-abf6-ac33c7204be8)). ++**Severity**: Low +++### [Azure SQL Database should be running TLS version 1.2 or newer](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/8e9a37b9-2828-4c8f-a24e-7b0ab0e89c78) ++**Description**: Setting TLS version to 1.2 or newer improves security by ensuring your Azure SQL Database can only be accessed from clients using TLS 1.2 or newer. Using versions of TLS less than 1.2 is not recommended since they have well documented security vulnerabilities. +(Related policy: [Azure SQL Database should be running TLS version 1.2 or newer](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f32e6bbec-16b6-44c2-be37-c5b672d103cf)). ++**Severity**: Medium ++### [Azure SQL Managed Instances should disable public network access](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/a2624c52-2937-400c-af9d-3bf2d97382bf) ++**Description**: Disabling public network access (public endpoint) on Azure SQL Managed Instances improves security by ensuring that they can only be accessed from inside their virtual networks or via Private Endpoints. 
Learn more about [public network access](https://aka.ms/mi-public-endpoint). +(Related policy: [Azure SQL Managed Instances should disable public network access](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f9dfea752-dd46-4766-aed1-c355fa93fb91)). ++**Severity**: Medium ++### [Cosmos DB accounts should use private link](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/80dc29d6-9887-4071-a66c-e763376c2de3) ++**Description**: Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. When private endpoints are mapped to your Cosmos DB account, data leakage risks are reduced. Learn more about [private links](../cosmos-db/how-to-configure-private-endpoints.md). +(Related policy: [Cosmos DB accounts should use private link](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f58440f8a-10c5-4151-bdce-dfbaad4a20b7)). ++**Severity**: Medium +++### [(Enable if required) Cognitive Services accounts should enable data encryption with a customer-managed key (CMK)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/18bf29b3-a844-e170-2826-4e95d0ba4dc9) ++**Description**: Recommendations to use customer-managed keys for encryption of data at rest are not assessed by default, but are available to enable for applicable scenarios. Data is encrypted automatically using platform-managed keys, so the use of customer-managed keys should only be applied when obligated by compliance or restrictive policy requirements. +To enable this recommendation, navigate to your Security Policy for the applicable scope, and update the *Effect* parameter for the corresponding policy to audit or enforce the use of customer-managed keys. Learn more in [Manage security policies](tutorial-security-policy.md). +Customer-managed keys (CMK) are commonly required to meet regulatory compliance standards. CMKs enable the data stored in Cognitive Services to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more about CMK encryption at <https://aka.ms/cosmosdb-cmk>. +(Related policy: [Cognitive Services accounts should enable data encryption with a customer-managed key (CMK)](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f67121cc7-ff39-4ab8-b7e3-95b84dab487d)). ++**Severity**: Low ++### [(Enable if required) MySQL servers should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/6b51b7f7-cbed-75bf-8a02-43384bf47562) ++**Description**: Recommendations to use customer-managed keys for encryption of data at rest are not assessed by default, but are available to enable for applicable scenarios. Data is encrypted automatically using platform-managed keys, so the use of customer-managed keys should only be applied when obligated by compliance or restrictive policy requirements.
+To enable this recommendation, navigate to your Security Policy for the applicable scope, and update the *Effect* parameter for the corresponding policy to audit or enforce the use of customer-managed keys. Learn more in [Manage security policies](tutorial-security-policy.md). +Use customer-managed keys to manage the encryption at rest of your MySQL servers. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys (CMK) are commonly required to meet regulatory compliance standards. CMKs enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. +(Related policy: [Bring your own key data protection should be enabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f83cef61d-dbd1-4b20-a4fc-5fbc7da10833)). ++**Severity**: Low ++### [(Enable if required) PostgreSQL servers should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/19d45f8f-245c-852e-dbf9-d4aab4758b1f) ++**Description**: Recommendations to use customer-managed keys for encryption of data at rest are not assessed by default, but are available to enable for applicable scenarios. Data is encrypted automatically using platform-managed keys, so the use of customer-managed keys should only be applied when obligated by compliance or restrictive policy requirements. +To enable this recommendation, navigate to your Security Policy for the applicable scope, and update the *Effect* parameter for the corresponding policy to audit or enforce the use of customer-managed keys. Learn more in [Manage security policies](tutorial-security-policy.md). +Use customer-managed keys to manage the encryption at rest of your PostgreSQL servers. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys (CMK) are commonly required to meet regulatory compliance standards. CMKs enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. +(Related policy: [Bring your own key data protection should be enabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f18adea5e-f416-4d0f-8aa8-d24321e3e274)). ++**Severity**: Low ++### [(Enable if required) SQL managed instances should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/06ac6ef4-1e66-1334-5418-6e79ab444ce0) ++**Description**: Recommendations to use customer-managed keys for encryption of data at rest are not assessed by default, but are available to enable for applicable scenarios. Data is encrypted automatically using platform-managed keys, so the use of customer-managed keys should only be applied when obligated by compliance or restrictive policy requirements. +To enable this recommendation, navigate to your Security Policy for the applicable scope, and update the *Effect* parameter for the corresponding policy to audit or enforce the use of customer-managed keys. Learn more in [Manage security policies](tutorial-security-policy.md). 
+Implementing Transparent Data Encryption (TDE) with your own key provides you with increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. This recommendation applies to organizations with a related compliance requirement. +(Related policy: [SQL managed instances should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f048248b0-55cd-46da-b1ff-39efd52db260)). ++**Severity**: Low ++### [(Enable if required) SQL servers should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1a93e945-3675-aef6-075d-c661498e1046) ++**Description**: Recommendations to use customer-managed keys for encryption of data at rest are not assessed by default, but are available to enable for applicable scenarios. Data is encrypted automatically using platform-managed keys, so the use of customer-managed keys should only be applied when obligated by compliance or restrictive policy requirements. +To enable this recommendation, navigate to your Security Policy for the applicable scope, and update the *Effect* parameter for the corresponding policy to audit or enforce the use of customer-managed keys. Learn more in [Manage security policies](tutorial-security-policy.md). +Implementing Transparent Data Encryption (TDE) with your own key provides increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. This recommendation applies to organizations with a related compliance requirement. +(Related policy: [SQL servers should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f0d134df8-db83-46fb-ad72-fe0c9428c8dd)). ++**Severity**: Low ++### [(Enable if required) Storage accounts should use customer-managed key (CMK) for encryption](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ca98bba7-719e-48ee-e193-0b76766cdb07) ++**Description**: Recommendations to use customer-managed keys for encryption of data at rest are not assessed by default, but are available to enable for applicable scenarios. Data is encrypted automatically using platform-managed keys, so the use of customer-managed keys should only be applied when obligated by compliance or restrictive policy requirements. +To enable this recommendation, navigate to your Security Policy for the applicable scope, and update the *Effect* parameter for the corresponding policy to audit or enforce the use of customer-managed keys. Learn more in [Manage security policies](tutorial-security-policy.md). +Secure your storage account with greater flexibility using customer-managed keys (CMKs). When you specify a CMK, that key is used to protect and control access to the key that encrypts your data. Using CMKs provides additional capabilities to control rotation of the key encryption key or cryptographically erase data. 
+(Related policy: [Storage accounts should use customer-managed key (CMK) for encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f6fac406b-40ca-413b-bf8e-0bf964659c25)). ++**Severity**: Low ++### [All advanced threat protection types should be enabled in SQL managed instance advanced data security settings](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ebe970fe-9c27-4dd7-a165-1e943d565e10) ++**Description**: It is recommended to enable all advanced threat protection types on your SQL managed instances. Enabling all types protects against SQL injection, database vulnerabilities, and any other anomalous activities. +(No related policy) ++**Severity**: Medium ++### [All advanced threat protection types should be enabled in SQL server advanced data security settings](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/f7010359-8d21-4598-a9f2-c3e81a17141e) ++**Description**: It is recommended to enable all advanced threat protection types on your SQL servers. Enabling all types protects against SQL injection, database vulnerabilities, and any other anomalous activities. +(No related policy) ++**Severity**: Medium ++### [API Management services should use a virtual network](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/74e7dcff-317f-9635-41d2-ead5019acc99) ++**Description**: Azure Virtual Network deployment provides enhanced security, isolation, and allows you to place your API Management service in a non-internet routable network that you control access to. These networks can then be connected to your on-premises networks using various VPN technologies, which enable access to your backend services within the network and/or on-premises. The developer portal and API gateway can be configured to be accessible either from the Internet or only within the virtual network. +(Related policy: [API Management services should use a virtual network](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fef619a2c-cc4d-4d03-b2ba-8c94a834d85b)). ++**Severity**: Medium ++### [App Configuration should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/8318c3a1-fcac-2e1d-9582-50912e5578e5) ++**Description**: Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your app configuration instances instead of the entire service, you'll also be protected against data leakage risks. Learn more at: <https://aka.ms/appconfig/private-endpoint>. +(Related policy: [App Configuration should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fca610c1d-041c-4332-9d88-7ed3094967c7)). ++**Severity**: Medium ++### [Audit retention for SQL servers should be set to at least 90 days](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/620671b8-6661-273a-38ac-4574967750ec) ++**Description**: Audit SQL servers configured with an auditing retention period of less than 90 days. 
+(Related policy: [SQL servers should be configured with 90 days auditing retention or higher.](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f89099bee-89e0-4b26-a5f4-165451757743)) ++**Severity**: Low ++### [Auditing on SQL server should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/94208a8b-16e8-4e5b-abbd-4e81c9d02bee) ++**Description**: Enable auditing on your SQL Server to track database activities across all databases on the server and save them in an audit log. +(Related policy: [Auditing on SQL server should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fa6fb4358-5bf4-4ad7-ba82-2cd2f41ce5e9)). ++**Severity**: Low ++### [Auto provisioning of the Log Analytics agent should be enabled on subscriptions](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/af849052-4299-0692-acc0-bffcbe9e440c) ++**Description**: To monitor for security vulnerabilities and threats, Microsoft Defender for Cloud collects data from your Azure virtual machines. Data is collected by the Log Analytics agent, formerly known as the Microsoft Monitoring Agent (MMA), which reads various security-related configurations and event logs from the machine and copies the data to your Log Analytics workspace for analysis. We recommend enabling auto provisioning to automatically deploy the agent to all supported Azure VMs and any new ones that are created. +(Related policy: [Auto provisioning of the Log Analytics agent should be enabled on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f475aae12-b88a-4572-8b36-9b712b2b3a17)). ++**Severity**: Low ++### [Azure Cache for Redis should reside within a virtual network](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/be264018-593c-1162-bd5e-b74a39396652) ++**Description**: Azure Virtual Network (VNet) deployment provides enhanced security and isolation for your Azure Cache for Redis, as well as subnets, access control policies, and other features to further restrict access. When an Azure Cache for Redis instance is configured with a VNet, it is not publicly addressable and can only be accessed from virtual machines and applications within the VNet. +(Related policy: [Azure Cache for Redis should reside within a virtual network](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f7d092e0a-7acd-40d2-a975-dca21cae48c4)). ++**Severity**: Medium ++### [Azure Database for MySQL should have an Azure Active Directory administrator provisioned](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/8af8a87b-7aa6-4c83-b22b-36801896177b/) ++**Description**: Provision an Azure AD administrator for your Azure Database for MySQL to enable Azure AD authentication. 
Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services. +(Related policy: [An Azure Active Directory administrator should be provisioned for MySQL servers](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f146412e9-005c-472b-9e48-c87b72ac229e)). ++**Severity**: Medium ++### [Azure Database for PostgreSQL should have an Azure Active Directory administrator provisioned](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/b20d1b00-11a8-4ce7-b477-4ea6e147c345) ++**Description**: Provision an Azure AD administrator for your Azure Database for PostgreSQL to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services. +(Related policy: [An Azure Active Directory administrator should be provisioned for PostgreSQL servers](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fb4dec045-250a-48c2-b5cc-e0c4eec8b5b4)). ++**Severity**: Medium ++### [Azure Cosmos DB accounts should have firewall rules](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/276b1952-c364-852b-11e5-657f0fa34dc6) ++**Description**: Firewall rules should be defined on your Azure Cosmos DB accounts to prevent traffic from unauthorized sources. Accounts that have at least one IP rule defined with the virtual network filter enabled are deemed compliant. Accounts disabling public access are also deemed compliant. +(Related policy: [Azure Cosmos DB accounts should have firewall rules](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f862e97cf-49fc-4a5c-9de4-40d4e2e7c8eb)). ++**Severity**: Medium ++### [Azure Event Grid domains should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/bef092f5-bea7-3df3-1ee8-4376dd9c111e) ++**Description**: Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Event Grid domains instead of the entire service, you'll also be protected against data leakage risks. Learn more at: <https://aka.ms/privateendpoints>. +(Related policy: [Azure Event Grid domains should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f9830b652-8523-49cc-b1b3-e17dce1127ca)). ++**Severity**: Medium ++### [Azure Event Grid topics should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/bdac9c7b-b9b8-f572-0450-f161c430861c) ++**Description**: Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network.
By mapping private endpoints to your topics instead of the entire service, you'll also be protected against data leakage risks. Learn more at: <https://aka.ms/privateendpoints>. +(Related policy: [Azure Event Grid topics should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f4b90e17e-8448-49db-875e-bd83fb6f804f)). ++**Severity**: Medium ++### [Azure Machine Learning workspaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/692343df-7e70-b082-7b0e-67f97146cea3) ++**Description**: Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure Machine Learning workspaces instead of the entire service, you'll also be protected against data leakage risks. Learn more at: <https://aka.ms/azureml-workspaces-privatelink>. +(Related policy: [Azure Machine Learning workspaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f40cec1dd-a100-4920-b15b-3024fe8901ab)). ++**Severity**: Medium ++### [Azure SignalR Service should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/b6f84d18-0137-3176-6aa1-f4d9ac95155c) ++**Description**: Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your SignalR resources instead of the entire service, you'll also be protected against data leakage risks. Learn more at: <https://aka.ms/asrs/privatelink>. +(Related policy: [Azure SignalR Service should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f53503636-bcc9-4748-9663-5348217f160f)). ++**Severity**: Medium ++### [Azure Spring Cloud should use network injection](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/4c768356-5ad2-e3cc-c799-252b27d3865a) ++**Description**: Azure Spring Cloud instances should use virtual network injection for the following purposes: 1. Isolate Azure Spring Cloud from the internet. 2. Enable Azure Spring Cloud to interact with systems in on-premises data centers or Azure services in other virtual networks. 3. Empower customers to control inbound and outbound network communications for Azure Spring Cloud. +(Related policy: [Azure Spring Cloud should use network injection](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2faf35e2a4-ef96-44e7-a9ae-853dd97032c4)). ++**Severity**: Medium
++### [Azure Synapse Workspace authentication mode should be Azure Active Directory Only](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/3320d1ac-0ebe-41ab-b96c-96fb91214c5c) ++**Description**: Azure Active Directory-only authentication improves security by ensuring that Synapse Workspaces exclusively require Azure AD identities for authentication. [Learn more](https://aka.ms/Synapse). +(Related policy: [Synapse Workspaces should use only Azure Active Directory identities for authentication](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f2158ddbe-fefa-408e-b43f-d4faef8ff3b8)). ++**Severity**: Medium ++### [Code repositories should have code scanning findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c68a8c2a-6ed4-454b-9e37-4b7654f2165f) ++**Description**: Defender for DevOps has found vulnerabilities in code repositories. To improve the security posture of the repositories, it is highly recommended to remediate these vulnerabilities. +(No related policy) ++**Severity**: Medium ++### [Code repositories should have Dependabot scanning findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/822425e3-827f-4f35-bc33-33749257f851) ++**Description**: Defender for DevOps has found vulnerabilities in code repositories. To improve the security posture of the repositories, it is highly recommended to remediate these vulnerabilities. +(No related policy) ++**Severity**: Medium ++### [Code repositories should have infrastructure as code scanning findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/2ebc815f-7bc7-4573-994d-e1cc46fb4a35) ++**Description**: Defender for DevOps has found infrastructure as code security configuration issues in repositories. The issues shown below have been detected in template files. To improve the security posture of the related cloud resources, it is highly recommended to remediate these issues. +(No related policy) ++**Severity**: Medium ++### [Code repositories should have secret scanning findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/4e07c7d0-e06c-47d7-a4a9-8c7b748d1b27) ++**Description**: Defender for DevOps has found a secret in code repositories. This should be remediated immediately to prevent a security breach. Secrets found in repositories can be leaked or discovered by adversaries, leading to compromise of an application or service. For Azure DevOps, the Microsoft Security DevOps CredScan tool only scans builds on which it has been configured to run. Therefore, results might not reflect the complete status of secrets in your repositories.
+(No related policy) ++**Severity**: High ++### [Cognitive Services accounts should enable data encryption](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/cdcf4f71-60d3-540b-91e3-aa19792da364) ++**Description**: This policy audits any Cognitive Services accounts that are not using data encryption. For each account with storage, you should enable data encryption with either a customer-managed or a Microsoft-managed key. +(Related policy: [Cognitive Services accounts should enable data encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f2bdd0062-9d75-436e-89df-487dd8e4b3c7)). ++**Severity**: Low ++### [Cognitive Services accounts should have local authentication methods disabled](recommendations-reference-data.md) +++**Description**: Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: <https://aka.ms/cs/auth>. (Related policy: Cognitive Services accounts should have local authentication methods disabled). ++**Severity**: Low +++### [Cognitive Services accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/f738efb8-005f-680d-3d43-b3db762d6243) ++**Description**: Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. +(Related policy: [Cognitive Services accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f037eea7a-bd0a-46c5-9a66-03aea78705d3)). ++**Severity**: Medium ++### [Cognitive Services accounts should use customer owned storage or enable data encryption](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/aa395469-1687-78a7-bf76-f4614ef72977) ++**Description**: This policy audits any Cognitive Services account that is using neither customer-owned storage nor data encryption. For each Cognitive Services account with storage, either use customer-owned storage or enable data encryption. Aligns with Microsoft Cloud Security Benchmark. +(Related policy: [Cognitive Services accounts should use customer owned storage or enable data encryption.](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f11566b39-f7f7-4b82-ab06-68d8700eb0a4)) ++**Severity**: Low ++### [Cognitive Services should use private link](recommendations-reference-data.md#cognitive-services-should-use-private-link) ++**Description**: Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The Azure Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Cognitive Services, you'll reduce the potential for data leakage. Learn more about [private links](../private-link/private-link-overview.md). (Related policy: Cognitive Services should use private link).
++**Severity**: Medium +++### [Diagnostic logs in Azure Data Lake Store should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ad5bbaeb-7632-5edf-f1c2-752075831ce8) ++**Description**: Enable logs and retain them for up to a year. This enables you to recreate activity trails for investigation purposes when a security incident occurs or your network is compromised. +(Related policy: [Diagnostic logs in Azure Data Lake Store should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f057ef27e-665e-4328-8ea3-04b3122bd9fb)). ++**Severity**: Low ++### [Diagnostic logs in Data Lake Analytics should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c6dad669-efd7-cd72-61c5-289935607791) ++**Description**: Enable logs and retain them for up to a year. This enables you to recreate activity trails for investigation purposes when a security incident occurs or your network is compromised. +(Related policy: [Diagnostic logs in Data Lake Analytics should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fc95c74d9-38fe-4f0d-af86-0c7d626a315c)). ++**Severity**: Low ++### [Email notification for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/3869fbd7-5d90-84e4-37bd-d9a7f4ce9a24) ++**Description**: To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, enable email notifications for high severity alerts in Defender for Cloud. +(Related policy: [Email notification for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f6e2593d9-add6-4083-9c9b-4b7d2188c899)). ++**Severity**: Low ++### [Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/9f97e78d-88ee-a48d-abe2-5ef12954e7ea) ++**Description**: To ensure your subscription owners are notified when there is a potential security breach in their subscription, set email notifications to subscription owners for high severity alerts in Defender for Cloud. +(Related policy: [Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f0b15565f-aa9e-48ba-8619-45960f2c314d)). ++**Severity**: Medium ++### [Enforce SSL connection should be enabled for MySQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1f6d29f6-4edb-ea39-042b-de8f123ddd39) ++**Description**: Azure Database for MySQL supports connecting your Azure Database for MySQL server to client applications using Secure Sockets Layer (SSL). +Enforcing SSL connections between your database server and your client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and your application. +This configuration enforces that SSL is always enabled for accessing your database server. 
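For an Azure Database for MySQL single server, SSL enforcement is a property of the server resource itself (flexible servers expose an equivalent `require_secure_transport` server parameter instead). The following Python sketch is illustrative only, with placeholder names and an assumed `api-version`.

```python
# Illustrative sketch only: turn on SSL enforcement for an Azure Database for MySQL single server.
# Names and the api-version are placeholders/assumptions.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"
resource_group = "<resource-group>"
server = "<mysql-server-name>"
api_version = "2017-12-01"  # assumed Microsoft.DBforMySQL (single server) api-version

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/resourceGroups/{resource_group}/providers/Microsoft.DBforMySQL"
    f"/servers/{server}?api-version={api_version}"
)
body = {"properties": {"sslEnforcement": "Enabled"}}
requests.patch(url, json=body, headers={"Authorization": f"Bearer {token}"}).raise_for_status()
```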
+(Related policy: [Enforce SSL connection should be enabled for MySQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fe802a67a-daf5-4436-9ea6-f6d821dd0c5d)). ++**Severity**: Medium ++### [Enforce SSL connection should be enabled for PostgreSQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1fde2073-a488-17e9-9534-5a3b23379b4b) ++**Description**: Azure Database for PostgreSQL supports connecting your Azure Database for PostgreSQL server to client applications using Secure Sockets Layer (SSL). +Enforcing SSL connections between your database server and your client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and your application. +This configuration enforces that SSL is always enabled for accessing your database server. +(Related policy: [Enforce SSL connection should be enabled for PostgreSQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fd158790f-bfb0-486c-8631-2dc6b4e8e6af)). ++**Severity**: Medium ++### [Function apps should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/afd071f0-ebaa-422b-bb2f-8a772a31db75) ++**Description**: Runtime vulnerability scanning for functions scans your function apps for security vulnerabilities and exposes detailed findings. Resolving the vulnerabilities can greatly improve your serverless applications' security posture and protect them from attacks. +(No related policy) ++**Severity**: High ++### [Geo-redundant backup should be enabled for Azure Database for MariaDB](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/2ce368b5-7882-89fd-6645-885b071a2409) ++**Description**: Azure Database for MariaDB allows you to choose the redundancy option for your database server. +It can be set to geo-redundant backup storage, in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery options in case of a region failure. +Configuring geo-redundant storage for backup is only allowed when creating a server. +(Related policy: [Geo-redundant backup should be enabled for Azure Database for MariaDB](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f0ec47710-77ff-4a3d-9181-6aa50af424d0)). ++**Severity**: Low ++### [Geo-redundant backup should be enabled for Azure Database for MySQL](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/8ad68a2f-c6b1-97b5-41b5-174359a33688) ++**Description**: Azure Database for MySQL allows you to choose the redundancy option for your database server. +It can be set to geo-redundant backup storage, in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery options in case of a region failure. +Configuring geo-redundant storage for backup is only allowed when creating a server.
+(Related policy: [Geo-redundant backup should be enabled for Azure Database for MySQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f82339799-d096-41ae-8538-b108becf0970)). ++**Severity**: Low ++### [Geo-redundant backup should be enabled for Azure Database for PostgreSQL](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/95592ab0-ddc8-660d-67f3-6df1fadfe7ec) ++**Description**: Azure Database for PostgreSQL allows you to choose the redundancy option for your database server. +It can be set to geo-redundant backup storage, in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery options in case of a region failure. +Configuring geo-redundant storage for backup is only allowed when creating a server. +(Related policy: [Geo-redundant backup should be enabled for Azure Database for PostgreSQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f48af4db5-9b8b-401c-8e74-076be876a430)). ++**Severity**: Low ++### [GitHub repositories should have Code scanning enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/6672df26-ff2e-4282-83c3-e2f20571bd11) ++**Description**: GitHub uses code scanning to analyze code in order to find security vulnerabilities and errors in code. Code scanning can be used to find, triage, and prioritize fixes for existing problems in your code. Code scanning can also prevent developers from introducing new problems. Scans can be scheduled for specific days and times, or scans can be triggered when a specific event occurs in the repository, such as a push. If code scanning finds a potential vulnerability or error in code, GitHub displays an alert in the repository. A vulnerability is a problem in a project's code that could be exploited to damage the confidentiality, integrity, or availability of the project. +(No related policy) ++**Severity**: Medium ++### [GitHub repositories should have Dependabot scanning enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/92643c1f-1a95-4b68-bbd2-5117f92d6e35) ++**Description**: GitHub sends Dependabot alerts when it detects vulnerabilities in code dependencies that affect repositories. A vulnerability is a problem in a project's code that could be exploited to damage the confidentiality, integrity, or availability of the project or other projects that use its code. Vulnerabilities vary in type, severity, and method of attack. When code depends on a package that has a security vulnerability, this vulnerable dependency can cause a range of problems. +(No related policy) ++**Severity**: Medium ++### [GitHub repositories should have Secret scanning enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1a600c61-6443-4ab4-bd28-7a6b6fb4691d) ++**Description**: GitHub scans repositories for known types of secrets, to prevent fraudulent use of secrets that were accidentally committed to repositories. Secret scanning will scan the entire Git history on all branches present in the GitHub repository for any secrets. Examples of secrets are tokens and private keys that a service provider can issue for authentication.
If a secret is checked into a repository, anyone who has read access to the repository can use the secret to access the external service with those privileges. Secrets should be stored in a dedicated, secure location outside the repository for the project. +(No related policy) ++**Severity**: High ++### [Microsoft Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/58d72d9d-0310-4792-9a3b-6dd111093cdb) ++**Description**: Microsoft Defender for SQL is a unified package that provides advanced SQL security capabilities. +It includes functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate a threat to your database, and discovering and classifying sensitive data. ++Protections from this plan are charged as shown on the [Defender plans](https://aka.ms/pricing-security-center) page. If you don't have any Azure SQL Database servers in this subscription, you won't be charged. If you later create Azure SQL Database servers on this subscription, they'll automatically be protected and charges will begin. Learn about the [pricing details per region](https://aka.ms/pricing-security-center). ++Learn more in [Introduction to Microsoft Defender for SQL](defender-for-sql-introduction.md). +(Related policy: [Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicyDefinitions%2f7fe3b40f-802b-4cdd-8bd4-fd799c948cc2)). ++**Severity**: High ++### [Microsoft Defender for DNS should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/aae10e53-8403-3576-5d97-3b00f97332b2) ++**Description**: Microsoft Defender for DNS provides an additional layer of protection for your cloud resources by continuously monitoring all DNS queries from your Azure resources. Defender for DNS alerts you about suspicious activity at the DNS layer. Learn more in [Introduction to Microsoft Defender for DNS](defender-for-dns-introduction.md). Enabling this Defender plan results in charges. Learn about the pricing details per region on Defender for Cloud's pricing page: [Defender for Cloud Pricing](https://azure.microsoft.com/services/defender-for-cloud/#pricing). +(No related policy) ++**Severity**: High ++### [Microsoft Defender for open-source relational databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/b6a28450-dd5d-4ba4-8806-245e20ef6632) ++**Description**: Microsoft Defender for open-source relational databases detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases. Learn more in [Introduction to Microsoft Defender for open-source relational databases](defender-for-databases-introduction.md). ++Enabling this plan will result in charges for protecting your open-source relational databases. If you don't have any open-source relational databases in this subscription, no charges will be incurred. If you create any open-source relational databases on this subscription in the future, they will automatically be protected and charges will begin at that time. 
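The plan is enabled as a subscription-level pricing setting. The Python sketch below is illustrative only; the subscription ID and the `api-version` are placeholders and assumptions.

```python
# Illustrative sketch only: enable the Defender plan for open-source relational databases
# on a subscription. The api-version is an assumption.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"
api_version = "2023-01-01"  # assumed Microsoft.Security pricings api-version

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/providers/Microsoft.Security/pricings/OpenSourceRelationalDatabases"
    f"?api-version={api_version}"
)
body = {"properties": {"pricingTier": "Standard"}}  # "Free" would turn the plan back off
requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"}).raise_for_status()
```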
+(No related policy) ++**Severity**: High ++### [Microsoft Defender for Resource Manager should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/f0fb2a7e-16d5-849f-be57-86db712e9bd0) ++**Description**: Microsoft Defender for Resource Manager automatically monitors the resource management operations in your organization. Defender for Cloud detects threats and alerts you about suspicious activity. Learn more in [Introduction to Microsoft Defender for Resource Manager](defender-for-resource-manager-introduction.md). Enabling this Defender plan results in charges. Learn about the pricing details per region on Defender for Cloud's pricing page: [Defender for Cloud Pricing](https://azure.microsoft.com/services/defender-for-cloud/#pricing). +(No related policy) ++**Severity**: High ++### [Microsoft Defender for SQL on machines should be enabled on workspaces](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e9c320f1-03a0-4d2b-9a37-84b3bdc2e281) ++**Description**: Microsoft Defender for servers brings threat detection and advanced defenses for your Windows and Linux machines. +With this Defender plan enabled on your subscriptions but not on your workspaces, you're paying for the full capability of Microsoft Defender for servers but missing out on some of the benefits. +When you enable Microsoft Defender for servers on a workspace, all machines reporting to that workspace will be billed for Microsoft Defender for servers - even if they're in subscriptions without Defender plans enabled. Unless you also enable Microsoft Defender for servers on the subscription, those machines won't be able to take advantage of just-in-time VM access, adaptive application controls, and network detections for Azure resources. +Learn more in [Introduction to Microsoft Defender for servers](defender-for-servers-introduction.md). +(No related policy) ++**Severity**: Medium ++### [Microsoft Defender for SQL servers on machines should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/6ac66a74-761f-4a59-928a-d373eea3f028) ++**Description**: Microsoft Defender for SQL is a unified package that provides advanced SQL security capabilities. +It includes functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate a threat to your database, and discovering and classifying sensitive data. ++Remediating this recommendation will result in charges for protecting your SQL servers on machines. If you don't have any SQL servers on machines in this subscription, no charges will be incurred. +If you create any SQL servers on machines on this subscription in the future, they will automatically be protected and charges will begin at that time. +[Learn more about Microsoft Defender for SQL servers on machines.](/azure/azure-sql/database/advanced-data-security) +(Related policy: [Azure Defender for SQL servers on machines should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicyDefinitions%2f6581d072-105e-4418-827f-bd446d56421b)). 
++**Severity**: High ++### [Microsoft Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/400a6682-992c-4726-9549-629fbc3b988f) ++**Description**: Microsoft Defender for SQL is a unified package that provides advanced SQL security capabilities. It surfaces and mitigates potential database vulnerabilities, and detects anomalous activities that could indicate a threat to your database. Microsoft Defender for SQL is billed as shown on [pricing details per region](https://aka.ms/pricing-security-center). +(Related policy: [Advanced data security should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicydefinitions%2fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9)). ++**Severity**: High ++### [Microsoft Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ff6dbca8-d93c-49fc-92af-dc25da7faccd) ++**Description**: Microsoft Defender for SQL is a unified package that provides advanced SQL security capabilities. It surfaces and mitigates potential database vulnerabilities, and detects anomalous activities that could indicate a threat to your database. Microsoft Defender for SQL is billed as shown on [pricing details per region](https://aka.ms/pricing-security-center). +(Related policy: [Advanced data security should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9)). ++**Severity**: High ++### [Microsoft Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1be22853-8ed1-4005-9907-ddad64cb1417) ++**Description**: Microsoft Defender for Storage detects unusual and potentially harmful attempts to access or exploit storage accounts. ++Protections from this plan are charged as shown on the **Defender plans** page. If you don't have any Azure Storage accounts in this subscription, you won't be charged. If you later create Azure Storage accounts on this subscription, they'll automatically be protected and charges will begin. Learn about the [pricing details per region](https://aka.ms/pricing-security-center). +Learn more in [Introduction to Microsoft Defender for Storage](defender-for-storage-introduction.md). +(Related policy: [Azure Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicyDefinitions%2f308fbb08-4ab8-4e67-9b29-592e93fb94fa)). ++**Severity**: High ++### [Network Watcher should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/f1f2f7dc-7bd5-18bf-c403-cbbdb7ec3d68) ++**Description**: Network Watcher is a regional service that enables you to monitor and diagnose conditions at a network scenario level in, to, and from Azure. Scenario-level monitoring enables you to diagnose problems at an end-to-end network level view. Network diagnostic and visualization tools available with Network Watcher help you understand, diagnose, and gain insights into your network in Azure.
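Network Watcher is enabled per region by creating a regional `networkWatchers` resource, conventionally in a resource group named `NetworkWatcherRG`. The Python sketch below is illustrative only; the region, resource names, and `api-version` are placeholders and assumptions, and the resource group is assumed to exist already.

```python
# Illustrative sketch only: enable Network Watcher for one region by creating the regional
# networkWatchers resource. Names and the api-version are placeholders/assumptions.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"
region = "westeurope"
api_version = "2023-05-01"  # assumed Microsoft.Network api-version

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/resourceGroups/NetworkWatcherRG/providers/Microsoft.Network"
    f"/networkWatchers/NetworkWatcher_{region}?api-version={api_version}"
)
body = {"location": region}
requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"}).raise_for_status()
```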
+(Related policy: [Network Watcher should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fb6e2945c-0b7b-40f5-9233-7a5323b5cdc6)). ++**Severity**: Low ++### [Private endpoint connections on Azure SQL Database should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/75396512-3323-9be4-059d-32ecb113c3de) ++**Description**: Private endpoint connections enforce secure communication by enabling private connectivity to Azure SQL Database. +(Related policy: [Private endpoint connections on Azure SQL Database should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f7698e800-9299-47a6-b3b6-5a0fee576eed)). ++**Severity**: Medium ++### [Private endpoint should be enabled for MariaDB servers](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ca9b93fe-6f1f-676c-2f31-d20f88fdbe56) ++**Description**: Private endpoint connections enforce secure communication by enabling private connectivity to Azure Database for MariaDB. +Configure a private endpoint connection to enable access to traffic coming only from known networks and prevent access from all other IP addresses, including within Azure. +(Related policy: [Private endpoint should be enabled for MariaDB servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f0a1302fb-a631-4106-9753-f3d494733990)). ++**Severity**: Medium ++### [Private endpoint should be enabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/cec4922b-1eb3-cb74-660b-ffad9b9ac642) ++**Description**: Private endpoint connections enforce secure communication by enabling private connectivity to Azure Database for MySQL. +Configure a private endpoint connection to enable access to traffic coming only from known networks and prevent access from all other IP addresses, including within Azure. +(Related policy: [Private endpoint should be enabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f7595c971-233d-4bcf-bd18-596129188c49)). ++**Severity**: Medium ++### [Private endpoint should be enabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c5b83aed-f53d-5201-8ffb-1f9938de410a) ++**Description**: Private endpoint connections enforce secure communication by enabling private connectivity to Azure Database for PostgreSQL. +Configure a private endpoint connection to enable access to traffic coming only from known networks and prevent access from all other IP addresses, including within Azure. +(Related policy: [Private endpoint should be enabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f0564d078-92f5-4f97-8398-b9f58a51f70b)). 
++**Severity**: Medium ++### [Public network access on Azure SQL Database should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/22e93e92-4a31-b4cd-d640-3ef908430aa6) ++**Description**: Disabling the public network access property improves security by ensuring your Azure SQL Database can only be accessed from a private endpoint. This configuration denies all logins that match IP or virtual network based firewall rules. +(Related policy: [Public network access on Azure SQL Database should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f1b8ca024-1d5c-4dec-8995-b1a932b41780)). ++**Severity**: Medium ++### [Public network access should be disabled for Cognitive Services accounts](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/684a5b6d-a270-61ce-306e-5cea400dc3a7) ++**Description**: This policy audits any Cognitive Services account in your environment with public network access enabled. Public network access should be disabled so that only connections from private endpoints are allowed. +(Related policy: [Public network access should be disabled for Cognitive Services accounts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f0725b4dd-7e76-479c-a735-68e7ee23d5ca)). ++**Severity**: Medium ++### [Public network access should be disabled for MariaDB servers](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ab153e43-2fb5-0670-2117-70340851ea9b) ++**Description**: Disable the public network access property to improve security and ensure your Azure Database for MariaDB can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. +(Related policy: [Public network access should be disabled for MariaDB servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2ffdccbe47-f3e3-4213-ad5d-ea459b2fa077)). ++**Severity**: Medium ++### [Public network access should be disabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/d5d090f1-7d5c-9b38-7344-0ede8343276d) ++**Description**: Disable the public network access property to improve security and ensure your Azure Database for MySQL can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. +(Related policy: [Public network access should be disabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fd9844e8a-1437-4aeb-a32c-0c992f056095)). 
++**Severity**: Medium ++### [Public network access should be disabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/b34f9fe7-80cd-6fb3-2c5b-951993746ca8) ++**Description**: Disable the public network access property to improve security and ensure your Azure Database for PostgreSQL can only be accessed from a private endpoint. This configuration disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. +(Related policy: [Public network access should be disabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fb52376f7-9612-48a1-81cd-1ffe4b61032c)). ++**Severity**: Medium ++### [Redis Cache should allow access only via SSL](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/35b25be2-d08a-e340-45ed-f08a95d804fc) ++**Description**: Enable only connections via SSL to Redis Cache. Use of secure connections ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking. +(Related policy: [Only secure connections to your Azure Cache for Redis should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f22bee202-a82f-4305-9a2a-6d7f44d4dedb)). ++**Severity**: High ++### [SQL databases should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/82e20e14-edc5-4373-bfc4-f13121257c37) ++**Description**: SQL Vulnerability assessment scans your database for security vulnerabilities, and exposes any deviations from best practices such as misconfigurations, excessive permissions, and unprotected sensitive data. Resolving the vulnerabilities found can greatly improve your database security posture. [Learn more](https://aka.ms/SQL-Vulnerability-Assessment/) +(Related policy: [Vulnerabilities on your SQL databases should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicydefinitions%2ffeedbf84-6b99-488c-acc2-71c829aa5ffc)). ++**Severity**: High ++### [SQL managed instances should have vulnerability assessment configured](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c42fc28d-1703-45fc-aaa5-39797f570513) ++**Description**: Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. +(Related policy: [Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f1b7aa243-30e4-4c9e-bca8-d0d3022b634a)). ++**Severity**: High ++### [SQL servers on machines should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/f97aa83c-9b63-4f9a-99f6-b22c4398f936) ++**Description**: SQL Vulnerability assessment scans your database for security vulnerabilities, and exposes any deviations from best practices such as misconfigurations, excessive permissions, and unprotected sensitive data. 
Resolving the vulnerabilities found can greatly improve your database security posture. [Learn more](https://aka.ms/explore-vulnerability-assessment-reports/) +(Related policy: [Vulnerabilities on your SQL servers on machine should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fmicrosoft.authorization%2fpolicydefinitions%2f6ba6d016-e7c3-4842-b8f2-4992ebc0d72d)). ++**Severity**: High ++### [SQL servers should have an Azure Active Directory administrator provisioned](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/f0553104-cfdb-65e6-759c-002812e38500) ++**Description**: Provision an Azure AD administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services. +(Related policy: [An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f1f314764-cb73-4fc9-b863-8eca98ac36e9)). ++**Severity**: High ++### [SQL servers should have vulnerability assessment configured](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1db4f204-cb5a-4c9c-9254-7556403ce51c) ++**Description**: Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. +(Related policy: [Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9)). ++**Severity**: High ++### [Storage account should use a private link connection](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/cdc78c07-02b0-4af0-1cb2-cb7c672a8b0a) ++**Description**: Private links enforce secure communication, by providing private connectivity to the storage account +(Related policy: [Storage account should use a private link connection](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f6edd7eda-6dd8-40f7-810d-67160c639cd9)). ++**Severity**: Medium ++### [Storage accounts should be migrated to new Azure Resource Manager resources](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/47bb383c-8e25-95f0-c2aa-437add1d87d3) ++**Description**: To benefit from new capabilities in Azure Resource Manager, you can migrate existing deployments from the Classic deployment model. Resource Manager enables security enhancements such as: stronger access control (RBAC), better auditing, ARM-based deployment and governance, access to managed identities, access to key vault for secrets, Azure AD-based authentication, and support for tags and resource groups for easier security management. [Learn more](../virtual-machines/windows/migration-classic-resource-manager-overview.md) +(Related policy: [Storage accounts should be migrated to new Azure Resource Manager resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f37e0d2fe-28a5-43d6-a273-67d37d1f5606)). 
++**Severity**: Low ++### [Storage accounts should prevent shared key access](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/3b363842-30f5-4056-980d-3a40fa5de8b3) ++**Description**: Audit the requirement that Azure Active Directory (Azure AD) is used to authorize requests for your storage account. By default, requests can be authorized with either Azure Active Directory credentials, or by using the account access key for Shared Key authorization. Of these two types of authorization, Azure AD provides superior security and ease of use over Shared Key, and is recommended by Microsoft. +(Related policy: [Storage accounts should prevent shared key access](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f8c6a50c6-9ffd-4ae7-986f-5fa6111f9a54)) ++**Severity**: Medium ++### [Storage accounts should restrict network access using virtual network rules](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ad4f3ff1-30eb-5042-16ed-27198f640b8d) ++**Description**: Protect your storage accounts from potential threats using virtual network rules as a preferred method instead of IP-based filtering. Disabling IP-based filtering prevents public IPs from accessing your storage accounts. +(Related policy: [Storage accounts should restrict network access using virtual network rules](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f2a1a9cdf-e04d-429a-8416-3bfb72a1b26f)). ++**Severity**: Medium ++### [Subscriptions should have a contact email address for security issues](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/77758c9d-8a56-5f54-6ff7-69a762ca6004) ++**Description**: To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, set a security contact to receive email notifications from Defender for Cloud. +(Related policy: [Subscriptions should have a contact email address for security issues](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f4f4f78b8-e367-4b10-a341-d9a4ad5cf1c7)) ++**Severity**: Low ++### [Transparent Data Encryption on SQL databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/651967bf-044e-4bde-8376-3e08e0600105) ++**Description**: Enable transparent data encryption to protect data-at-rest and meet compliance requirements. +(Related policy: [Transparent Data Encryption on SQL databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f17k78e20-9358-41c9-923c-fb736d382a12)). ++**Severity**: Low ++### [VM Image Builder templates should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/f6b0e473-eb23-c3be-fe61-2ae3e8309530) ++**Description**: Audit VM Image Builder templates that do not have a virtual network configured. When a virtual network is not configured, a public IP is created and used instead, which might directly expose resources to the internet and increase the potential attack surface.
+(Related policy: [VM Image Builder templates should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f2154edb9-244f-4741-9970-660785bccdaa)). ++**Severity**: Medium ++### [Web Application Firewall (WAF) should be enabled for Application Gateway](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/efe75f01-6fff-5d9d-08e6-092b98d3fb3f) ++**Description**: Deploy Azure Web Application Firewall (WAF) in front of public-facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injection, cross-site scripting, and local and remote file execution. You can also restrict access to your web applications by countries/regions, IP address ranges, and other HTTP(S) parameters via custom rules. +(Related policy: [Web Application Firewall (WAF) should be enabled for Application Gateway](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f564feb30-bf6a-4854-b4bb-0d2d2d1e6c66)). ++**Severity**: Low ++### [Web Application Firewall (WAF) should be enabled for Azure Front Door Service service](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/0c02a769-03f1-c4d7-85a5-db5dca505c49) ++**Description**: Deploy Azure Web Application Firewall (WAF) in front of public-facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injection, cross-site scripting, and local and remote file execution. You can also restrict access to your web applications by countries/regions, IP address ranges, and other HTTP(S) parameters via custom rules. +(Related policy: [Web Application Firewall (WAF) should be enabled for Azure Front Door Service service](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f055aa869-bc98-4af8-bafc-23f1ab6ffe2c)) ++**Severity**: Low +++## AWS data recommendations ++### [Amazon Aurora clusters should have backtracking enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/d0ef47dc-95aa-4765-a075-72c07df8acff) ++**Description**: This control checks whether Amazon Aurora clusters have backtracking enabled. +Backups help you to recover more quickly from a security incident. They also strengthen the resilience of your systems. Aurora backtracking reduces the time to recover a database to a point in time. It doesn't require a database restore to do so. +For more information about backtracking in Aurora, see [Backtracking an Aurora DB cluster](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Managing.Backtrack.html) in the Amazon Aurora User Guide. ++**Severity**: Medium ++### [Amazon EBS snapshots shouldn't be publicly restorable](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/02e8de17-1a01-45cb-b906-6d07a78f4b3c) ++**Description**: Amazon EBS snapshots shouldn't be publicly restorable by everyone unless explicitly allowed, to avoid accidental exposure of data.
Additionally, permission to change Amazon EBS configurations should be restricted to authorized AWS accounts only. ++**Severity**: High ++### [Amazon ECS task definitions should have secure networking modes and user definitions](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/0dc124a8-2a69-47c5-a4e1-678d725a33ab) ++**Description**: This control checks whether an active Amazon ECS task definition that has host networking mode also has privileged or user container definitions. + The control fails for task definitions that have host network mode and container definitions where privileged=false or is empty and user=root or is empty. +If a task definition has elevated privileges, it is because the customer specifically opted in to that configuration. + This control checks for unexpected privilege escalation when a task definition has host networking enabled but the customer didn't opt in to elevated privileges. ++**Severity**: High ++### [Amazon Elasticsearch Service domains should encrypt data sent between nodes](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/9b63a099-6c0c-4354-848b-17de1f3c8ae3) ++**Description**: This control checks whether Amazon ES domains have node-to-node encryption enabled. HTTPS (TLS) can be used to help prevent potential attackers from eavesdropping on or manipulating network traffic using person-in-the-middle or similar attacks. Only encrypted connections over HTTPS (TLS) should be allowed. Enabling node-to-node encryption for Amazon ES domains ensures that intra-cluster communications are encrypted in transit. There can be a performance penalty associated with this configuration. You should be aware of and test the performance trade-off before enabling this option. ++**Severity**: Medium ++### [Amazon Elasticsearch Service domains should have encryption at rest enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/cf747c91-14f3-4b30-aafe-eb12c18fd030) ++**Description**: It's important to enable encryption at rest for Amazon ES domains to protect sensitive data. ++**Severity**: Medium ++### [Amazon RDS database should be encrypted using customer managed key](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/9137f5de-aac8-4cee-a22f-8d81f19be67f) ++**Description**: This check identifies RDS databases that are encrypted with default KMS keys and not with customer managed keys. As a leading practice, use customer managed keys to encrypt the data on your RDS databases and maintain control of your keys and data on sensitive workloads. ++**Severity**: Medium ++### [Amazon RDS instance should be configured with automatic backup settings](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/894259c2-c1d5-47dc-b5c6-b242d5c76fdf) ++**Description**: This check identifies RDS instances that aren't set with the automatic backup setting. If Automatic Backup is set, RDS creates a storage volume snapshot of your DB instance, backing up the entire DB instance and not just individual databases, which provides for point-in-time recovery. The automatic backup happens during the specified backup window time and keeps the backups for a limited period of time as defined in the retention period. It's recommended to set automatic backups for your critical RDS servers to help in the data restoration process.
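As a rough illustration of turning on automated backups, the following boto3 sketch sets a nonzero retention period on an existing instance; the instance identifier and backup window shown are placeholders.

```python
import boto3

rds = boto3.client("rds")

# Enable automated backups by setting a nonzero retention period.
# "my-db-instance" and the backup window are placeholder values.
rds.modify_db_instance(
    DBInstanceIdentifier="my-db-instance",
    BackupRetentionPeriod=7,              # keep automated backups for 7 days
    PreferredBackupWindow="03:00-04:00",  # UTC window for the daily snapshot
    ApplyImmediately=False,               # apply during the next maintenance window
)
```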
++**Severity**: Medium ++### [Amazon Redshift clusters should have audit logging enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e2a0ec17-447b-44b6-8646-c0b5584b6b0a) ++**Description**: This control checks whether an Amazon Redshift cluster has audit logging enabled. +Amazon Redshift audit logging provides additional information about connections and user activities in your cluster. This data can be stored and secured in Amazon S3 and can be helpful in security audits and investigations. For more information, see [Database audit logging](https://docs.aws.amazon.com/redshift/latest/mgmt/db-auditing.html) in the *Amazon Redshift Cluster Management Guide*. ++**Severity**: Medium ++### [Amazon Redshift clusters should have automatic snapshots enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/7a152832-6600-49d1-89be-82e474190e13) ++**Description**: This control checks whether Amazon Redshift clusters have automated snapshots enabled. It also checks whether the snapshot retention period is greater than or equal to seven days. +Backups help you to recover more quickly from a security incident. They strengthen the resilience of your systems. Amazon Redshift takes periodic snapshots by default. This control checks whether automatic snapshots are enabled and retained for at least seven days. For more information about Amazon Redshift automated snapshots, see [Automated snapshots](https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-snapshots.html#about-automated-snapshots) in the *Amazon Redshift Cluster Management Guide*. ++**Severity**: Medium ++### [Amazon Redshift clusters should prohibit public access](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/7f5ac036-11e1-4cda-89b5-a115b9ae4f72) ++**Description**: We recommend that Amazon Redshift clusters prohibit public access, which is evaluated from the 'publiclyAccessible' field in the cluster configuration item. ++**Severity**: High ++### [Amazon Redshift should have automatic upgrades to major versions enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/176f9062-64d0-4edd-bb0f-915012a6ef16) ++**Description**: This control checks whether automatic major version upgrades are enabled for the Amazon Redshift cluster. +Enabling automatic major version upgrades ensures that the latest major version updates to Amazon Redshift clusters are installed during the maintenance window. + These updates might include security patches and bug fixes. Keeping up to date with patch installation is an important step in securing systems. ++**Severity**: Medium ++### [Amazon SQS queues should be encrypted at rest](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/340a07a1-7d68-4562-ac25-df77c214fe13) ++**Description**: This control checks whether Amazon SQS queues are encrypted at rest. +Server-side encryption (SSE) allows you to transmit sensitive data in encrypted queues. To protect the content of messages in queues, SSE uses keys managed in AWS KMS. +For more information, see [Encryption at rest](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-server-side-encryption.html) in the Amazon Simple Queue Service Developer Guide.
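For example, server-side encryption can be enabled on an existing queue with a call such as the boto3 sketch below; the queue name is a placeholder, and a customer managed KMS key ARN can be supplied instead of the AWS managed alias.

```python
import boto3

sqs = boto3.client("sqs")

# Placeholder queue name; look up its URL, then enable SSE with the
# AWS managed key for SQS (or pass a customer managed key ARN instead).
queue_url = sqs.get_queue_url(QueueName="my-queue")["QueueUrl"]
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={"KmsMasterKeyId": "alias/aws/sqs"},
)
```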
++**Severity**: Medium ++### [An RDS event notifications subscription should be configured for critical cluster events](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/65659c22-6588-405b-b118-614c2b4ead5b) ++**Description**: This control checks whether an Amazon RDS event subscription exists that has notifications enabled for the following source type: + Event category key-value pairs. DBCluster: [Maintenance and failure]. + RDS event notifications use Amazon SNS to make you aware of changes in the availability or configuration of your RDS resources. These notifications allow for rapid response. +For more information about RDS event notifications, see [Using Amazon RDS event notification](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Events.html) in the Amazon RDS User Guide. ++**Severity**: Low ++### [An RDS event notifications subscription should be configured for critical database instance events](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ff4f3ab3-8ed7-4b4f-a721-4c3b66a59140) ++**Description**: This control checks whether an Amazon RDS event subscription exists with notifications enabled for the following source type: + Event category key-value pairs. ```DBInstance```: [Maintenance, configuration change, and failure]. +RDS event notifications use Amazon SNS to make you aware of changes in the availability or configuration of your RDS resources. These notifications allow for rapid response. +For more information about RDS event notifications, see [Using Amazon RDS event notification](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Events.html) in the Amazon RDS User Guide. ++**Severity**: Low ++### [An RDS event notifications subscription should be configured for critical database parameter group events](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c6f24bb0-b696-451c-a26e-0cc9ea8e97e3) ++**Description**: This control checks whether an Amazon RDS event subscription exists with notifications enabled for the following source type: + Event category key-value pairs. DBParameterGroup: [Configuration change]. + RDS event notifications use Amazon SNS to make you aware of changes in the availability or configuration of your RDS resources. These notifications allow for rapid response. +For more information about RDS event notifications, see [Using Amazon RDS event notification](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Events.html) in the Amazon RDS User Guide. ++**Severity**: Low ++### [An RDS event notifications subscription should be configured for critical database security group events](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ab5c51fb-ecdb-46de-b8df-c28ae46ce5bc) ++**Description**: This control checks whether an Amazon RDS event subscription exists with notifications enabled for the following source type: Event category key-value pairs. DBSecurityGroup: [Configuration change, failure]. + RDS event notifications use Amazon SNS to make you aware of changes in the availability or configuration of your RDS resources. These notifications allow for a rapid response. +For more information about RDS event notifications, see [Using Amazon RDS event notification](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Events.html) in the Amazon RDS User Guide.
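A minimal boto3 sketch of such a subscription might look like the following; the subscription name and SNS topic ARN are placeholders, and the event category strings are assumed to match this control.

```python
import boto3

rds = boto3.client("rds")

# Placeholder names; the SNS topic must already exist.
rds.create_event_subscription(
    SubscriptionName="db-security-group-events",
    SnsTopicArn="arn:aws:sns:us-east-1:111122223333:rds-alerts",
    SourceType="db-security-group",
    EventCategories=["configuration change", "failure"],
    Enabled=True,
)
```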
++**Severity**: Low ++### [API Gateway REST and WebSocket API logging should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/2cac0072-6f56-46f0-9518-ddec3660ee56) ++**Description**: This control checks whether all stages of an Amazon API Gateway REST or WebSocket API have logging enabled. + The control fails if logging isn't enabled for all methods of a stage or if the logging level is neither ERROR nor INFO. + API Gateway REST or WebSocket API stages should have relevant logs enabled. API Gateway REST and WebSocket API execution logging provides detailed records of requests made to API Gateway REST and WebSocket API stages. + The stages include API integration backend responses, Lambda authorizer responses, and the requestId for AWS integration endpoints. ++**Severity**: Medium ++### [API Gateway REST API cache data should be encrypted at rest](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1a0ce4e0-b61e-4ec7-ab65-aeaff3893bd3) ++**Description**: This control checks whether all methods in API Gateway REST API stages that have cache enabled are encrypted. The control fails if any method in an API Gateway REST API stage is configured to cache and the cache isn't encrypted. + Encrypting data at rest reduces the risk of data stored on disk being accessed by a user not authenticated to AWS. It adds another set of access controls to limit unauthorized users' ability to access the data. For example, API permissions are required to decrypt the data before it can be read. + API Gateway REST API caches should be encrypted at rest for an added layer of security. ++**Severity**: Medium ++### [API Gateway REST API stages should be configured to use SSL certificates for backend authentication](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ec268d38-c94b-4df3-8b4e-5248fcaaf3fc) ++**Description**: This control checks whether Amazon API Gateway REST API stages have SSL certificates configured. + Backend systems use these certificates to authenticate that incoming requests are from API Gateway. + API Gateway REST API stages should be configured with SSL certificates to allow backend systems to authenticate that requests originate from API Gateway. ++**Severity**: Medium ++### [API Gateway REST API stages should have AWS X-Ray tracing enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/5cbaff4f-f8d5-49fe-9fdc-63c4507ac670) ++**Description**: This control checks whether AWS X-Ray active tracing is enabled for your Amazon API Gateway REST API stages. + X-Ray active tracing enables a more rapid response to performance changes in the underlying infrastructure. Changes in performance could result in a lack of availability of the API. + X-Ray active tracing provides real-time metrics of user requests that flow through your API Gateway REST API operations and connected services. ++**Severity**: Low ++### [API Gateway should be associated with an AWS WAF web ACL](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/d69eb8b0-79ba-4963-a683-a96a8ea787e2) ++**Description**: This control checks whether an API Gateway stage uses an AWS WAF web access control list (ACL). + This control fails if an AWS WAF web ACL isn't attached to a REST API Gateway stage. + AWS WAF is a web application firewall that helps protect web applications and APIs from attacks.
It enables you to configure an ACL, which is a set of rules that allow, block, or count web requests based on customizable web security rules and conditions that you define. + Ensure that your API Gateway stage is associated with an AWS WAF web ACL to help protect it from malicious attacks. ++**Severity**: Medium ++### [Application and Classic Load Balancers logging should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/4ba5c359-495f-4ba6-9897-7fdbc0aed675) ++**Description**: This control checks whether the Application Load Balancer and the Classic Load Balancer have logging enabled. The control fails if `access_logs.s3.enabled` is false. +Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client's IP address, latencies, request paths, and server responses. You can use access logs to analyze traffic patterns and to troubleshoot issues. +To learn more, see [Access logs for your Classic Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/access-log-collection.html) in the User Guide for Classic Load Balancers. ++**Severity**: Medium ++### [Attached EBS volumes should be encrypted at-rest](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/0bde343a-0681-4ee2-883a-027cc1e655b8) ++**Description**: This control checks whether the EBS volumes that are in an attached state are encrypted. To pass this check, EBS volumes must be in use and encrypted. If the EBS volume isn't attached, then it isn't subject to this check. +For an added layer of security of your sensitive data in EBS volumes, you should enable EBS encryption at rest. Amazon EBS encryption offers a straightforward encryption solution for your EBS resources that doesn't require you to build, maintain, and secure your own key management infrastructure. It uses AWS KMS customer master keys (CMK) when creating encrypted volumes and snapshots. +To learn more about Amazon EBS encryption, see [Amazon EBS encryption](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html) in the Amazon EC2 User Guide for Linux Instances. ++**Severity**: Medium ++### [AWS Database Migration Service replication instances shouldn't be public](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/132a70b8-ffda-457a-b7a3-e6f2e01fc0af) ++**Description**: To protect your replication instances from threats, a private replication instance should have a private IP address that you can't access outside of the replication network. + A replication instance should have a private IP address when the source and target databases are in the same network, and the network is connected to the replication instance's VPC using a VPN, AWS Direct Connect, or VPC peering. + You should also ensure that access to your AWS DMS instance configuration is limited to only authorized users. + To do this, restrict users' IAM permissions to modify AWS DMS settings and resources.
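To find replication instances that were created as publicly accessible, a boto3 audit sketch like the following can help; note that the public accessibility setting is chosen at creation time, so a flagged instance generally has to be recreated in a private subnet.

```python
import boto3

dms = boto3.client("dms")

# List replication instances that expose a public IP address.
paginator = dms.get_paginator("describe_replication_instances")
for page in paginator.paginate():
    for instance in page["ReplicationInstances"]:
        if instance.get("PubliclyAccessible"):
            print("Publicly accessible:", instance["ReplicationInstanceIdentifier"])
```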
++**Severity**: High ++### [Classic Load Balancer listeners should be configured with HTTPS or TLS termination](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/773667f7-6511-4aec-ae9c-e3286c56a254) ++**Description**: This control checks whether your Classic Load Balancer listeners are configured with HTTPS or TLS protocol for front-end (client to load balancer) connections. The control is applicable if a Classic Load Balancer has listeners. If your Classic Load Balancer doesn't have a listener configured, then the control doesn't report any findings. +The control passes if the Classic Load Balancer listeners are configured with TLS or HTTPS for front-end connections. +The control fails if the listener isn't configured with TLS or HTTPS for front-end connections. +Before you start to use a load balancer, you must add one or more listeners. A listener is a process that uses the configured protocol and port to check for connection requests. Listeners can support both HTTP and HTTPS/TLS protocols. You should always use an HTTPS or TLS listener, so that the load balancer does the work of encryption and decryption in transit. ++**Severity**: Medium ++### [Classic Load Balancers should have connection draining enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dd60e31e-073a-42b6-9b23-db7ca86fd5e0) ++**Description**: This control checks whether Classic Load Balancers have connection draining enabled. +Enabling connection draining on Classic Load Balancers ensures that the load balancer stops sending requests to instances that are deregistering or unhealthy. It keeps the existing connections open. This is useful for instances in Auto Scaling groups, to ensure that connections aren't severed abruptly. ++**Severity**: Medium ++### [CloudFront distributions should have AWS WAF enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/0e0d5964-2895-45b1-b646-fcded8d567be) ++**Description**: This control checks whether CloudFront distributions are associated with either AWS WAF or AWS WAFv2 web ACLs. The control fails if the distribution isn't associated with a web ACL. +AWS WAF is a web application firewall that helps protect web applications and APIs from attacks. It allows you to configure a set of rules, called a web access control list (web ACL), that allow, block, or count web requests based on customizable web security rules and conditions that you define. Ensure your CloudFront distribution is associated with an AWS WAF web ACL to help protect it from malicious attacks. ++**Severity**: Medium ++### [CloudFront distributions should have logging enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/88114970-36db-42b3-9549-20608b1ab8ad) ++**Description**: This control checks whether server access logging is enabled on CloudFront distributions. The control fails if access logging isn't enabled for a distribution. + CloudFront access logs provide detailed information about every user request that CloudFront receives. Each log contains information such as the date and time the request was received, the IP address of the viewer that made the request, the source of the request, and the port number of the request from the viewer. +These logs are useful for applications such as security and access audits and forensics investigation. 
For more information on how to analyze access logs, see Querying Amazon CloudFront logs in the Amazon Athena User Guide. ++**Severity**: Medium ++### [CloudFront distributions should require encryption in transit](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/a67adff8-625f-4891-9f61-43f837d18ad2) ++**Description**: This control checks whether an Amazon CloudFront distribution requires viewers to use HTTPS directly or whether it uses redirection. The control fails if ViewerProtocolPolicy is set to allow-all for defaultCacheBehavior or for cacheBehaviors. +HTTPS (TLS) can be used to help prevent potential attackers from using person-in-the-middle or similar attacks to eavesdrop on or manipulate network traffic. Only encrypted connections over HTTPS (TLS) should be allowed. Encrypting data in transit can affect performance. You should test your application with this feature to understand the performance profile and the impact of TLS. ++**Severity**: Medium ++### [CloudTrail logs should be encrypted at rest using KMS CMKs](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/190f732b-c68e-4816-9961-aba074272627) ++**Description**: We recommend configuring CloudTrail to use SSE-KMS. +Configuring CloudTrail to use SSE-KMS provides more confidentiality controls on log data as a given user must have S3 read permission on the corresponding log bucket and must be granted decrypt permission by the CMK policy. ++**Severity**: Medium ++### [Connections to Amazon Redshift clusters should be encrypted in transit](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/036bb56b-c442-4352-bb4c-5bd0353ad314) ++**Description**: This control checks whether connections to Amazon Redshift clusters are required to use encryption in transit. The check fails if the Amazon Redshift cluster parameter require_SSL isn't set to *1*. +TLS can be used to help prevent potential attackers from using person-in-the-middle or similar attacks to eavesdrop on or manipulate network traffic. Only encrypted connections over TLS should be allowed. Encrypting data in transit can affect performance. You should test your application with this feature to understand the performance profile and the impact of TLS. ++**Severity**: Medium ++### [Connections to Elasticsearch domains should be encrypted using TLS 1.2](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/effb5011-f8db-45ac-b981-b5bdfd7beb88) ++**Description**: This control checks whether connections to Elasticsearch domains are required to use TLS 1.2. The check fails if the Elasticsearch domain TLSSecurityPolicy isn't Policy-Min-TLS-1-2-2019-07. +HTTPS (TLS) can be used to help prevent potential attackers from using person-in-the-middle or similar attacks to eavesdrop on or manipulate network traffic. Only encrypted connections over HTTPS (TLS) should be allowed. Encrypting data in transit can affect performance. You should test your application with this feature to understand the performance profile and the impact of TLS. TLS 1.2 provides several security enhancements over previous versions of TLS.
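As a sketch, the legacy Elasticsearch Service client in boto3 can be used to enforce the TLS 1.2 policy on a domain's endpoint options; the domain name below is a placeholder.

```python
import boto3

es = boto3.client("es")

# Placeholder domain name; require HTTPS and the TLS 1.2 security policy.
es.update_elasticsearch_domain_config(
    DomainName="my-domain",
    DomainEndpointOptions={
        "EnforceHTTPS": True,
        "TLSSecurityPolicy": "Policy-Min-TLS-1-2-2019-07",
    },
)
```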
++**Severity**: Medium ++### [DynamoDB tables should have point-in-time recovery enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/cc873508-40c1-41b6-8507-8a431d74f831) ++**Description**: This control checks whether point-in-time recovery (PITR) is enabled for an Amazon DynamoDB table. + Backups help you to recover more quickly from a security incident. They also strengthen the resilience of your systems. DynamoDB point-in-time recovery automates backups for DynamoDB tables. It reduces the time to recover from accidental delete or write operations. + DynamoDB tables that have PITR enabled can be restored to any point in time in the last 35 days. ++**Severity**: Medium ++### [EBS default encryption should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/56406d4c-87b4-4aeb-b1cc-7f6312d78e0a) ++**Description**: This control checks whether account-level encryption is enabled by default for Amazon Elastic Block Store (Amazon EBS). + The control fails if the account-level encryption isn't enabled. +When encryption is enabled for your account, Amazon EBS volumes and snapshot copies are encrypted at rest. This adds another layer of protection for your data. +For more information, see [Encryption by default](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html#encryption-by-default) in the Amazon EC2 User Guide for Linux Instances. ++The following instance types don't support encryption: R1, C1, and M1. ++**Severity**: Medium ++### [Elastic Beanstalk environments should have enhanced health reporting enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/4170067b-345d-47ed-ab4a-c6b6046881f1) ++**Description**: This control checks whether enhanced health reporting is enabled for your AWS Elastic Beanstalk environments. +Elastic Beanstalk enhanced health reporting enables a more rapid response to changes in the health of the underlying infrastructure. These changes could result in a lack of availability of the application. +Elastic Beanstalk enhanced health reporting provides a status descriptor to gauge the severity of the identified issues and identify possible causes to investigate. The Elastic Beanstalk health agent, included in supported Amazon Machine Images (AMIs), evaluates logs and metrics of environment EC2 instances. ++**Severity**: Low ++### [Elastic Beanstalk managed platform updates should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/820f6c6e-f73f-432c-8c60-cae1794ea150) ++**Description**: This control checks whether managed platform updates are enabled for the Elastic Beanstalk environment. +Enabling managed platform updates ensures that the latest available platform fixes, updates, and features for the environment are installed. Keeping up to date with patch installation is an important step in securing systems. ++**Severity**: High ++### [Elastic Load Balancer shouldn't have ACM certificate expired or expiring in 90 days.](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/a5e0d700-3de1-469a-96d2-6536d9a92604) ++**Description**: This check identifies Elastic Load Balancers (ELB) that are using ACM certificates that are expired or expiring within 90 days. AWS Certificate Manager (ACM) is the preferred tool to provision, manage, and deploy your server certificates.
With ACM, you can request a certificate or deploy an existing ACM or external certificate to AWS resources. As a best practice, it's recommended to reimport expiring/expired certificates while preserving the ELB associations of the original certificate. ++**Severity**: High ++### [Elasticsearch domain error logging to CloudWatch Logs should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/f48af569-2e67-464b-9a62-b8df0f85bc5e) ++**Description**: This control checks whether Elasticsearch domains are configured to send error logs to CloudWatch Logs. +You should enable error logs for Elasticsearch domains and send those logs to CloudWatch Logs for retention and response. Domain error logs can assist with security and access audits, and can help to diagnose availability issues. ++**Severity**: Medium ++### [Elasticsearch domains should be configured with at least three dedicated master nodes](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/b4b9a67c-c315-4f9b-b06b-04867a453aab) ++**Description**: This control checks whether Elasticsearch domains are configured with at least three dedicated master nodes. This control fails if the domain doesn't use dedicated master nodes. This control passes if Elasticsearch domains have five dedicated master nodes. However, using more than three master nodes might be unnecessary to mitigate the availability risk, and will result in more cost. +An Elasticsearch domain requires at least three dedicated master nodes for high availability and fault-tolerance. Dedicated master node resources can be strained during data node blue/green deployments because there are more nodes to manage. Deploying an Elasticsearch domain with at least three dedicated master nodes ensures sufficient master node resource capacity and cluster operations if a node fails. ++**Severity**: Medium ++### [Elasticsearch domains should have at least three data nodes](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/994cbcb3-43d4-419d-b5c4-9adc558f3ca2) ++**Description**: This control checks whether Elasticsearch domains are configured with at least three data nodes and zoneAwarenessEnabled is true. +An Elasticsearch domain requires at least three data nodes for high availability and fault-tolerance. Deploying an Elasticsearch domain with at least three data nodes ensures cluster operations if a node fails. ++**Severity**: Medium ++### [Elasticsearch domains should have audit logging enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/12ebb4cd-34b6-4c3a-bee9-7e35f4f6caff) ++**Description**: This control checks whether Elasticsearch domains have audit logging enabled. This control fails if an Elasticsearch domain doesn't have audit logging enabled. +Audit logs are highly customizable. They allow you to track user activity on your Elasticsearch clusters, including authentication successes and failures, requests to OpenSearch, index changes, and incoming search queries. ++**Severity**: Medium ++### [Enhanced monitoring should be configured for RDS DB instances and clusters](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/93e5a579-dd2f-4a56-b827-ebbfe7376b16) ++**Description**: This control checks whether enhanced monitoring is enabled for your RDS DB instances. +In Amazon RDS, Enhanced Monitoring enables a more rapid response to performance changes in underlying infrastructure.
These performance changes could result in a lack of availability of the data. Enhanced Monitoring provides real-time metrics of the operating system that your RDS DB instance runs on. An agent is installed on the instance. The agent can obtain metrics more accurately than is possible from the hypervisor layer. +Enhanced Monitoring metrics are useful when you want to see how different processes or threads on a DB instance use the CPU. For more information, see [Enhanced Monitoring](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Monitoring.OS.html) in the *Amazon RDS User Guide*. ++**Severity**: Low ++### [Ensure rotation for customer created CMKs is enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/66748314-d51c-4d9c-b789-eebef29a7039) ++**Description**: AWS Key Management Service (KMS) allows customers to rotate the backing key, which is key material stored within the KMS that is tied to the key ID of the Customer Created customer master key (CMK). + It's the backing key that is used to perform cryptographic operations such as encryption and decryption. + Automated key rotation currently retains all prior backing keys so that decryption of encrypted data can take place transparently. It's recommended that CMK key rotation be enabled. + Rotating encryption keys helps reduce the potential impact of a compromised key as data encrypted with a new key can't be accessed with a previous key that might have been exposed. ++**Severity**: Medium ++### [Ensure S3 bucket access logging is enabled on the CloudTrail S3 bucket](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/759e80dc-92c2-4afd-afa3-c01294999363) ++**Description**: S3 Bucket Access Logging generates a log that contains access records for each request made to your S3 bucket. + An access log record contains details about the request, such as the request type, the resources specified in the request, and the time and date the request was processed. +It's recommended that bucket access logging be enabled on the CloudTrail S3 bucket. +By enabling S3 bucket logging on target S3 buckets, it's possible to capture all events, which might affect objects within target buckets. Configuring logs to be placed in a separate bucket allows access to log information, which can be useful in security and incident response workflows. ++**Severity**: Low ++### [Ensure the S3 bucket used to store CloudTrail logs isn't publicly accessible](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/a41f2846-4a59-44e9-89bb-1f62d4b03a85) ++**Description**: CloudTrail logs a record of every API call made in your AWS account. These log files are stored in an S3 bucket. + It's recommended that the bucket policy, or access control list (ACL), applied to the S3 bucket that CloudTrail logs to prevents public access to the CloudTrail logs. +Allowing public access to CloudTrail log content might aid an adversary in identifying weaknesses in the affected account's use or configuration. ++**Severity**: High ++### [IAM shouldn't have expired SSL/TLS certificates](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/03a8f33c-b01c-4dfc-b627-f98114715ae0) ++**Description**: This check identifies expired SSL/TLS certificates.
To enable HTTPS connections to your website or application in AWS, you need an SSL/TLS server certificate. You can use ACM or IAM to store and deploy server certificates. Removing expired SSL/TLS certificates eliminates the risk that an invalid certificate will be deployed accidentally to a resource such as AWS Elastic Load Balancer (ELB), which can damage the credibility of the application/website behind the ELB. This check generates alerts if there are any expired SSL/TLS certificates stored in AWS IAM. As a best practice, it's recommended to delete expired certificates. ++**Severity**: High ++### [Imported ACM certificates should be renewed after a specified time period](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/0e68b4d8-1a5e-47fc-a3eb-b3542fea43f1) ++**Description**: This control checks whether ACM certificates in your account are marked for expiration within 30 days. It checks both imported certificates and certificates provided by AWS Certificate Manager. +ACM can automatically renew certificates that use DNS validation. For certificates that use email validation, you must respond to a domain validation email. + ACM also doesn't automatically renew certificates that you import. You must renew imported certificates manually. +For more information about managed renewal for ACM certificates, see [Managed renewal for ACM certificates](https://docs.aws.amazon.com/acm/latest/userguide/managed-renewal.html) in the AWS Certificate Manager User Guide. ++**Severity**: Medium ++### [Over-provisioned identities in accounts should be investigated to reduce the Permission Creep Index (PCI)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/2482620f-f324-4add-af68-2e01e27485e9) ++**Description**: Over-provisioned identities in accounts should be investigated to reduce the Permission Creep Index (PCI) and to safeguard your infrastructure. Reduce the PCI by removing the unused high risk permission assignments. High PCI reflects risk associated with the identities with permissions that exceed their normal or required usage. ++**Severity**: Medium ++### [RDS automatic minor version upgrades should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/d352afac-cebc-4e02-b474-7ef402fb1d65) ++**Description**: This control checks whether automatic minor version upgrades are enabled for the RDS database instance. +Enabling automatic minor version upgrades ensures that the latest minor version updates to the relational database management system (RDBMS) are installed. These upgrades might include security patches and bug fixes. Keeping up to date with patch installation is an important step in securing systems. ++**Severity**: High ++### [RDS cluster snapshots and database snapshots should be encrypted at rest](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/4f4fbc5e-0b10-4208-b52f-1f47f1c73b6a) ++**Description**: This control checks whether RDS DB snapshots are encrypted. +This control is intended for RDS DB instances. However, it can also generate findings for snapshots of Aurora DB instances, Neptune DB instances, and Amazon DocumentDB clusters. If these findings aren't useful, then you can suppress them. +Encrypting data at rest reduces the risk that an unauthenticated user gets access to data that is stored on disk. Data in RDS snapshots should be encrypted at rest for an added layer of security. 
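Because an existing unencrypted snapshot can't be encrypted in place, one common pattern is to copy it with a KMS key, as in this boto3 sketch; the KMS key alias and the "-encrypted" naming suffix are placeholders.

```python
import boto3

rds = boto3.client("rds")

# Copy each unencrypted manual snapshot to an encrypted copy.
paginator = rds.get_paginator("describe_db_snapshots")
for page in paginator.paginate(SnapshotType="manual"):
    for snapshot in page["DBSnapshots"]:
        if not snapshot["Encrypted"]:
            rds.copy_db_snapshot(
                SourceDBSnapshotIdentifier=snapshot["DBSnapshotIdentifier"],
                TargetDBSnapshotIdentifier=snapshot["DBSnapshotIdentifier"] + "-encrypted",
                KmsKeyId="alias/aws/rds",  # placeholder; a customer managed CMK works too
            )
```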
++**Severity**: Medium ++### [RDS clusters should have deletion protection enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/9e769650-868c-46f5-b8c0-1a8ba12a4c92) ++**Description**: This control checks whether RDS clusters have deletion protection enabled. +This control is intended for RDS DB instances. However, it can also generate findings for Aurora DB instances, Neptune DB instances, and Amazon DocumentDB clusters. If these findings aren't useful, then you can suppress them. +Enabling cluster deletion protection is another layer of protection against accidental database deletion or deletion by an unauthorized entity. +When deletion protection is enabled, an RDS cluster can't be deleted. Before a deletion request can succeed, deletion protection must be disabled. ++**Severity**: Low ++### [RDS DB clusters should be configured for multiple Availability Zones](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/cdf441dd-0ab7-4ef2-a643-de12725e5d5d) ++**Description**: RDS DB clusters should be configured for multiple Availability Zones to ensure availability of the data that is stored. + Deployment to multiple Availability Zones allows for automated failover in the event of an Availability Zone availability issue and during regular RDS maintenance events. ++**Severity**: Medium ++### [RDS DB clusters should be configured to copy tags to snapshots](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/b9ed02d0-afca-4bed-838d-70bf31ecf19a) ++**Description**: Identification and inventory of your IT assets is a crucial aspect of governance and security. + You need to have visibility of all your RDS DB clusters so that you can assess their security posture and act on potential areas of weakness. + Snapshots should be tagged in the same way as their parent RDS database clusters. + Enabling this setting ensures that snapshots inherit the tags of their parent database clusters. ++**Severity**: Low ++### [RDS DB instances should be configured to copy tags to snapshots](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/fcd891e5-c6a2-41ce-bca6-f49ec582f3ce) ++**Description**: This control checks whether RDS DB instances are configured to copy all tags to snapshots when the snapshots are created. +Identification and inventory of your IT assets is a crucial aspect of governance and security. + You need to have visibility of all your RDS DB instances so that you can assess their security posture and take action on potential areas of weakness. + Snapshots should be tagged in the same way as their parent RDS database instances. Enabling this setting ensures that snapshots inherit the tags of their parent database instances. ++**Severity**: Low ++### [RDS DB instances should be configured with multiple Availability Zones](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/70ebbd01-cd79-4bc8-ae85-49f47ccdd5ad) ++**Description**: This control checks whether high availability is enabled for your RDS DB instances. + RDS DB instances should be configured for multiple Availability Zones (AZs). This ensures the availability of the data stored. Multi-AZ deployments allow for automated failover if there's an issue with Availability Zone availability and during regular RDS maintenance.
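Converting an existing instance to a Multi-AZ deployment is a single modification call, sketched below with boto3; the instance identifier is a placeholder, and the change is applied in the next maintenance window unless ApplyImmediately is set.

```python
import boto3

rds = boto3.client("rds")

# Placeholder instance identifier; provision a standby replica in another AZ.
rds.modify_db_instance(
    DBInstanceIdentifier="my-db-instance",
    MultiAZ=True,
    ApplyImmediately=False,  # defer to the next maintenance window
)
```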
++**Severity**: Medium ++### [RDS DB instances should have deletion protection enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/8e1f7933-faa9-4379-a9bd-697740dedac8) ++**Description**: This control checks whether your RDS DB instances that use one of the listed database engines have deletion protection enabled. +Enabling instance deletion protection is another layer of protection against accidental database deletion or deletion by an unauthorized entity. +While deletion protection is enabled, an RDS DB instance can't be deleted. Before a deletion request can succeed, deletion protection must be disabled. ++**Severity**: Low ++### [RDS DB instances should have encryption at rest enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/bfa7d2aa-f362-11eb-9a03-0242ac130003) ++**Description**: This control checks whether storage encryption is enabled for your Amazon RDS DB instances. +This control is intended for RDS DB instances. However, it can also generate findings for Aurora DB instances, Neptune DB instances, and Amazon DocumentDB clusters. If these findings aren't useful, then you can suppress them. + For an added layer of security for your sensitive data in RDS DB instances, you should configure your RDS DB instances to be encrypted at rest. To encrypt your RDS DB instances and snapshots at rest, enable the encryption option for your RDS DB instances. Data that is encrypted at rest includes the underlying storage for DB instances, its automated backups, read replicas, and snapshots. +RDS encrypted DB instances use the open standard AES-256 encryption algorithm to encrypt your data on the server that hosts your RDS DB instances. After your data is encrypted, Amazon RDS handles authentication of access and decryption of your data transparently with a minimal impact on performance. You don't need to modify your database client applications to use encryption. +Amazon RDS encryption is currently available for all database engines and storage types. Amazon RDS encryption is available for most DB instance classes. To learn about DB instance classes that don't support Amazon RDS encryption, see [Encrypting Amazon RDS resources](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html) in the *Amazon RDS User Guide*. ++**Severity**: Medium ++### [RDS DB Instances should prohibit public access](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/72f3b7f1-76b8-4cf5-8da5-4ba5745b512c) ++**Description**: We recommend that you also ensure that access to your RDS instance's configuration is limited to authorized users only, by restricting users' IAM permissions to modify RDS instances' settings and resources. ++**Severity**: High ++### [RDS snapshots should prohibit public access](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/f64521fc-a9f1-4d43-b667-8d94b4a202af) ++**Description**: We recommend only allowing authorized principals to access the snapshot and change Amazon RDS configuration. ++**Severity**: High ++### [Remove unused Secrets Manager secrets](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/bfa82db5-c112-44f0-89e6-a9adfb9a4028) ++**Description**: This control checks whether your secrets have been accessed within a specified number of days. The default value is 90 days. 
If a secret wasn't accessed within the defined number of days, this control fails. +Deleting unused secrets is as important as rotating secrets. Unused secrets can be abused by their former users, who no longer need access to these secrets. Also, as more users get access to a secret, someone might have mishandled and leaked it to an unauthorized entity, which increases the risk of abuse. Deleting unused secrets helps revoke secret access from users who no longer need it. It also helps to reduce the cost of using Secrets Manager. Therefore, it's essential to routinely delete unused secrets. ++**Severity**: Medium ++### [S3 buckets should have cross-region replication enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/35713036-bd12-4646-9b92-4c56a761a710) ++**Description**: Enabling S3 cross-region replication ensures that multiple versions of the data are available in different distinct Regions. + This allows you to protect your S3 bucket against DDoS attacks and data corruption events. ++**Severity**: Low ++### [S3 buckets should have server-side encryption enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/3cb793ab-20d3-4677-9723-024c8fed0c23) ++**Description**: Enable server-side encryption to protect data in your S3 buckets. + Encrypting the data can prevent access to sensitive data in the event of a data breach. ++**Severity**: Medium ++### [Secrets Manager secrets configured with automatic rotation should rotate successfully](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/bec42e2d-956b-4940-a37d-7c1b1e8c525f) ++**Description**: This control checks whether an AWS Secrets Manager secret rotated successfully based on the rotation schedule. The control fails if **RotationOccurringAsScheduled** is **false**. The control doesn't evaluate secrets that don't have rotation configured. +Secrets Manager helps you improve the security posture of your organization. Secrets include database credentials, passwords, and third-party API keys. You can use Secrets Manager to store secrets centrally, encrypt secrets automatically, control access to secrets, and rotate secrets safely and automatically. +Secrets Manager can rotate secrets. You can use rotation to replace long-term secrets with short-term ones. Rotating your secrets limits how long an unauthorized user can use a compromised secret. For this reason, you should rotate your secrets frequently. +In addition to configuring secrets to rotate automatically, you should ensure that those secrets rotate successfully based on the rotation schedule. +To learn more about rotation, see [Rotating your AWS Secrets Manager secrets](https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html) in the AWS Secrets Manager User Guide. ++**Severity**: Medium ++### [Secrets Manager secrets should be rotated within a specified number of days](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/323f0eb4-ea19-4b55-83e9-d104009616b4) ++**Description**: This control checks whether your secrets have been rotated at least once within 90 days. +Rotating secrets can help you to reduce the risk of an unauthorized use of your secrets in your AWS account. Examples include database credentials, passwords, third-party API keys, and even arbitrary text. If you don't change your secrets for a long period of time, the secrets are more likely to be compromised. 
+As more users get access to a secret, it can become more likely that someone mishandled and leaked it to an unauthorized entity. Secrets can be leaked through logs and cache data. They can be shared for debugging purposes and not changed or revoked once the debugging completes. For all these reasons, secrets should be rotated frequently. +You can configure your secrets for automatic rotation in AWS Secrets Manager. With automatic rotation, you can replace long-term secrets with short-term ones, significantly reducing the risk of compromise. +Security Hub recommends that you enable rotation for your Secrets Manager secrets. To learn more about rotation, see [Rotating your AWS Secrets Manager secrets](https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html) in the AWS Secrets Manager User Guide. ++**Severity**: Medium ++### [SNS topics should be encrypted at rest using AWS KMS](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/90917e06-2781-4857-9d74-9043c6475d03) ++**Description**: This control checks whether an SNS topic is encrypted at rest using AWS KMS. +Encrypting data at rest reduces the risk of data stored on disk being accessed by a user not authenticated to AWS. It also adds another set of access controls to limit the ability of unauthorized users to access the data. +For example, API permissions are required to decrypt the data before it can be read. SNS topics should be [encrypted at-rest](https://docs.aws.amazon.com/sns/latest/dg/sns-server-side-encryption.html) for an added layer of security. For more information, see Encryption at rest in the Amazon Simple Notification Service Developer Guide. ++**Severity**: Medium ++### [VPC flow logging should be enabled in all VPCs](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/3428e584-0fa6-48c0-817e-6d689d7bb879) ++**Description**: VPC Flow Logs provide visibility into network traffic that passes through the VPC and can be used to detect anomalous traffic and provide insight during security events. ++**Severity**: Medium +++++## GCP data recommendations ++### [Ensure '3625 (trace flag)' database flag for Cloud SQL SQL Server instance is set to 'off'](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/631246fb-7192-4709-a0b3-b83e65e6b550) ++**Description**: It's recommended to set "3625 (trace flag)" database flag for Cloud SQL SQL Server instance to "off." + Trace flags are frequently used to diagnose performance issues or to debug stored procedures or complex computer systems, but they might also be recommended by Microsoft Support to address behavior that is negatively impacting a specific workload. + All documented trace flags and those recommended by Microsoft Support are fully supported in a production environment when used as directed. + "3625 (trace flag)" limits the amount of information returned to users who aren't members of the sysadmin fixed server role, by masking the parameters of some error messages using '******.' + This can help prevent disclosure of sensitive information. Hence, it's recommended to disable this flag. + This recommendation is applicable to SQL Server database instances.
++**Severity**: Medium ++### [Ensure 'external scripts enabled' database flag for Cloud SQL SQL Server instance is set to 'off'](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/98b8908a-18b9-46ea-8c52-3f8db1da996f) ++**Description**: It's recommended to set "external scripts enabled" database flag for Cloud SQL SQL Server instance to off. + "external scripts enabled" enables the execution of scripts with certain remote language extensions. + This property is OFF by default. + When Advanced Analytics Services is installed, setup can optionally set this property to true. + Because the "External Scripts Enabled" feature allows scripts external to SQL, such as files located in an R library, to be executed, it could adversely affect the security of the system and should be disabled. + This recommendation is applicable to SQL Server database instances. ++**Severity**: High ++### [Ensure 'remote access' database flag for Cloud SQL SQL Server instance is set to 'off'](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dddbbe7d-7e32-47d8-b319-39cbb70b8f88) ++**Description**: It's recommended to set "remote access" database flag for Cloud SQL SQL Server instance to "off." + The "remote access" option controls the execution of stored procedures from local or remote servers on which instances of SQL Server are running. + The default value for this option is 1. + This grants permission to run local stored procedures from remote servers or remote stored procedures from the local server. + To prevent local stored procedures from being run from a remote server or remote stored procedures from being run on the local server, this must be disabled. + The Remote Access option controls the execution of local stored procedures on remote servers or remote stored procedures on the local server. + 'Remote access' functionality can be abused to launch a Denial-of-Service (DoS) attack on remote servers by off-loading query processing to a target; hence, this should be disabled. + This recommendation is applicable to SQL Server database instances. ++**Severity**: High ++### [Ensure 'skip_show_database' database flag for Cloud SQL Mysql instance is set to 'on'](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/9e5b33de-bcfa-4044-93ce-4937bf8f0bbd) ++**Description**: It's recommended to set "skip_show_database" database flag for Cloud SQL Mysql instance to "on." + 'skip_show_database' database flag prevents people from using the SHOW DATABASES statement if they don't have the SHOW DATABASES privilege. + This can improve security if you have concerns about users being able to see databases belonging to other users. + Its effect depends on the SHOW DATABASES privilege: If the variable value is ON, the SHOW DATABASES statement is permitted only to users who have the SHOW DATABASES privilege, and the statement displays all database names. + If the value is OFF, SHOW DATABASES is permitted to all users, but displays the names of only those databases for which the user has the SHOW DATABASES or other privilege. + This recommendation is applicable to Mysql database instances.
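Database flags are set through the Cloud SQL Admin API; the sketch below uses the Google API discovery client with placeholder project and instance names and assumes application default credentials. Note that patching settings.databaseFlags replaces the instance's whole flag list, so include any flags you already rely on.

```python
from googleapiclient import discovery

# Placeholder project and instance names; uses application default credentials.
sqladmin = discovery.build("sqladmin", "v1beta4")
sqladmin.instances().patch(
    project="my-project",
    instance="my-mysql-instance",
    body={
        "settings": {
            # Patching databaseFlags replaces the full list of flags on the instance.
            "databaseFlags": [{"name": "skip_show_database", "value": "on"}]
        }
    },
).execute()
```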
++**Severity**: Low ++### [Ensure that a Default Customer-managed encryption key (CMEK) is specified for all BigQuery Data Sets](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/f024ea22-7e48-4b3b-a824-d61794c14bb4) ++**Description**: BigQuery by default encrypts the data at rest by employing Envelope Encryption using Google-managed cryptographic keys. + The data is encrypted using the data encryption keys, and the data encryption keys themselves are further encrypted using key encryption keys. + This is seamless and doesn't require any additional input from the user. +For greater control over the encryption, customer-managed encryption keys (CMEK) can be used as an encryption key management solution for BigQuery Data Sets. + Setting a Default Customer-managed encryption key (CMEK) for a data set ensures any tables created in the future will use the specified CMEK if none other is provided. ++Google doesn't store your keys on its servers and can't access your protected data unless you provide the key. ++This also means that if you forget or lose your key, there's no way for Google to recover the key or to recover any data encrypted with the lost key. ++**Severity**: Medium ++### [Ensure that all BigQuery Tables are encrypted with Customer-managed encryption key (CMEK)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/f4cfc689-cac8-4f45-8355-652dcda3ec55) ++**Description**: BigQuery by default encrypts the data at rest by employing Envelope Encryption using Google-managed cryptographic keys. + The data is encrypted using the data encryption keys, and the data encryption keys themselves are further encrypted using key encryption keys. + This is seamless and doesn't require any additional input from the user. +For greater control over the encryption, customer-managed encryption keys (CMEK) can be used as an encryption key management solution for BigQuery tables. + If CMEK is used, the CMEK is used to encrypt the data encryption keys instead of using Google-managed encryption keys. + BigQuery stores the table and CMEK association and the encryption/decryption is done automatically. +Applying the Default Customer-managed keys on BigQuery data sets ensures that all the new tables created in the future will be encrypted using CMEK, but existing tables need to be updated to use CMEK individually. ++Google doesn't store your keys on its servers and can't access your protected data unless you provide the key. + This also means that if you forget or lose your key, there's no way for Google to recover the key or to recover any data encrypted with the lost key.
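Setting a default CMEK on a dataset can be done with the google-cloud-bigquery client, as in this sketch; the dataset ID and Cloud KMS key resource name are placeholders, and BigQuery's service account must have permission to use the key.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Placeholder dataset ID and Cloud KMS key resource name.
dataset = client.get_dataset("my_dataset")
dataset.default_encryption_configuration = bigquery.EncryptionConfiguration(
    kms_key_name="projects/my-project/locations/us/keyRings/my-ring/cryptoKeys/my-key"
)
client.update_dataset(dataset, ["default_encryption_configuration"])
```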
++**Severity**: Medium ++### [Ensure that BigQuery datasets are not anonymously or publicly accessible](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dab1eea3-7693-4da3-af1b-2f73832655fa) ++**Description**: It's recommended that the IAM policy on BigQuery datasets doesn't allow anonymous and/or public access. + Granting permissions to allUsers or allAuthenticatedUsers allows anyone to access the dataset. +Such access might not be desirable if sensitive data is being stored in the dataset. + Therefore, ensure that anonymous and/or public access to a dataset isn't allowed. ++**Severity**: High ++### [Ensure that Cloud SQL database instances are configured with automated backups](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/afaac6e6-6240-48a2-9f62-4e257b851311) ++**Description**: It's recommended to have all SQL database instances set to enable automated backups. + Backups provide a way to restore a Cloud SQL instance to recover lost data or recover from a problem with that instance. + Automated backups need to be set for any instance that contains data that should be protected from loss or damage. + This recommendation is applicable for SQL Server, PostgreSql, MySql generation 1, and MySql generation 2 instances. ++**Severity**: High ++### [Ensure that Cloud SQL database instances are not opened to the world](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/de78ebca-1ec6-4872-8061-8fcfb27752fc) ++**Description**: Database servers should accept connections only from trusted networks/IPs and restrict access from the world. + To minimize the attack surface of a database server instance, only trusted, known, and required IPs should be approved to connect to it. + An authorized network shouldn't have IPs/networks configured to 0.0.0.0/0, which allows access to the instance from anywhere in the world. Note that authorized networks apply only to instances with public IPs. ++**Severity**: High ++### [Ensure that Cloud SQL database instances do not have public IPs](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1658239d-caf7-471d-83c5-2e4c44afdcff) ++**Description**: It's recommended to configure Second Generation SQL instances to use private IPs instead of public IPs. + To lower the organization's attack surface, Cloud SQL databases shouldn't have public IPs. + Private IPs provide improved network security and lower latency for your application. ++**Severity**: High ++### [Ensure that Cloud Storage bucket is not anonymously or publicly accessible](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/d8305d96-2aa5-458d-92b7-f8418f5f3328) ++**Description**: It's recommended that the IAM policy on a Cloud Storage bucket doesn't allow anonymous or public access. +Allowing anonymous or public access grants permissions to anyone to access bucket content. + Such access might not be desired if you're storing any sensitive data. + Hence, ensure that anonymous or public access to a bucket isn't allowed. ++**Severity**: High ++### [Ensure that Cloud Storage buckets have uniform bucket-level access enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/64b5cdbc-0633-49af-b63d-a9dc90560196) ++**Description**: It's recommended that uniform bucket-level access is enabled on Cloud Storage buckets.
+ It's recommended to use uniform bucket-level access to unify and simplify how you grant access to your Cloud Storage resources. + Cloud Storage offers two systems for granting users permission to access your buckets and objects: + Cloud Identity and Access Management (Cloud IAM) and Access Control Lists (ACLs). + These systems act in parallel: for a user to access a Cloud Storage resource, only one of the systems needs to grant the user permission. + Cloud IAM is used throughout Google Cloud and allows you to grant a variety of permissions at the bucket and project levels. + ACLs are used only by Cloud Storage and have limited permission options, but they allow you to grant permissions on a per-object basis. ++ To support a uniform permissioning system, Cloud Storage has uniform bucket-level access. + Using this feature disables ACLs for all Cloud Storage resources: + access to Cloud Storage resources is then granted exclusively through Cloud IAM. + Enabling uniform bucket-level access guarantees that if a Storage bucket isn't publicly accessible, +no object in the bucket is publicly accessible either. ++**Severity**: Medium ++### [Ensure that Compute instances have Confidential Computing enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/171e9492-73a7-43de-adce-6bd0a3cf6045) ++**Description**: Google Cloud encrypts data at rest and in transit, but customer data must be decrypted for processing. Confidential Computing is a breakthrough technology that encrypts data in use, while it's being processed. + Confidential Computing environments keep data encrypted in memory and elsewhere outside the central processing unit (CPU). +Confidential VMs leverage the Secure Encrypted Virtualization (SEV) feature of AMD EPYC CPUs. + Customer data stays encrypted while it's used, indexed, queried, or trained on. + Encryption keys are generated in hardware, per VM, and aren't exportable. Thanks to built-in hardware optimizations of both performance and security, there's no significant performance penalty to Confidential Computing workloads. +Confidential Computing keeps customers' sensitive code and other data encrypted in memory during processing. Google doesn't have access to the encryption keys. +Confidential VM can help alleviate concerns about risk related to either dependency on Google infrastructure or Google insiders' access to customer data in the clear. ++**Severity**: High ++### [Ensure that retention policies on log buckets are configured using Bucket Lock](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/07ca1398-d477-400a-a9fc-4cfc78f723f9) ++**Description**: Enabling retention policies on log buckets protects logs stored in cloud storage buckets from being overwritten or accidentally deleted. + It's recommended to set up retention policies and configure Bucket Lock on all storage buckets that are used as log sinks. + Logs can be exported by creating one or more sinks that include a log filter and a destination. As Stackdriver Logging receives new log entries, they're compared against each sink. + If a log entry matches a sink's filter, then a copy of the log entry is written to the destination. + Sinks can be configured to export logs to storage buckets. + It's recommended to configure a data retention policy for these cloud storage buckets and to lock the data retention policy, thus permanently preventing the policy from being reduced or removed.
+ This way, if the system is ever compromised by an attacker or a malicious insider who wants to cover their tracks, the activity logs are preserved for forensics and security investigations. ++**Severity**: Low ++### [Ensure that the Cloud SQL database instance requires all incoming connections to use SSL](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/13872d43-aac6-4018-9c89-507b8fe9be54) ++**Description**: It's recommended to require all incoming connections to the SQL database instance to use SSL. + SQL database connections, if successfully intercepted (MITM), can reveal sensitive data like credentials, database queries, and query outputs. + For security, it's recommended to always use SSL encryption when connecting to your instance. + This recommendation is applicable for Postgresql, MySql generation 1, and MySql generation 2 instances. ++**Severity**: High ++### [Ensure that the 'contained database authentication' database flag for Cloud SQL on the SQL Server instance is set to 'off'](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/658ce98f-ecf1-4c14-967f-3c4faf130fbf) ++**Description**: It's recommended to set the "contained database authentication" database flag for Cloud SQL on the SQL Server instance to "off." + A contained database includes all database settings and metadata required to define the database and has no configuration dependencies on the instance of the Database Engine where the database is installed. + Users can connect to the database without authenticating a login at the Database Engine level. + Isolating the database from the Database Engine makes it possible to easily move the database to another instance of SQL Server. + Contained databases have some unique threats that should be understood and mitigated by SQL Server Database Engine administrators. + Most of the threats are related to the USER WITH PASSWORD authentication process, which moves the authentication boundary from the Database Engine level to the database level, so it's recommended to disable this flag. + This recommendation is applicable to SQL Server database instances. ++**Severity**: Medium ++### [Ensure that the 'cross db ownership chaining' database flag for Cloud SQL SQL Server instance is set to 'off'](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/26973a34-79a6-46a0-874f-358c8c00af05) ++**Description**: It's recommended to set the "cross db ownership chaining" database flag for a Cloud SQL SQL Server instance to "off." + Use the "cross db ownership chaining" option to configure cross-database ownership chaining for an instance of Microsoft SQL Server. + This server option allows you to control cross-database ownership chaining at the database level or to allow cross-database ownership chaining for all databases. + Enabling "cross db ownership chaining" isn't recommended unless all of the databases hosted by the instance of SQL Server must participate in cross-database ownership chaining and you're aware of the security implications of this setting. + This recommendation is applicable to SQL Server database instances.
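+
+As a hedged sketch (the instance name is a placeholder), the flag could be turned off with the gcloud CLI; flag names that contain spaces must be quoted:
+
+```bash
+# Turn off cross db ownership chaining on a Cloud SQL SQL Server instance (instance name is a placeholder).
+# Note: --database-flags replaces the instance's existing flag list, so include any flags you want to keep.
+gcloud sql instances patch my-sqlserver-instance --database-flags="cross db ownership chaining=off"
+```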
++**Severity**: Medium ++### [Ensure that the 'local_infile' database flag for a Cloud SQL Mysql instance is set to 'off'](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/633a87f4-bd71-45ce-9eca-c6bb8cbe8b21) ++**Description**: It's recommended to set the local_infile database flag for a Cloud SQL MySQL instance to off. +The local_infile flag controls the server-side LOCAL capability for LOAD DATA statements. Depending on the local_infile setting, the server refuses or permits local data loading by clients that have LOCAL enabled on the client side. +To explicitly cause the server to refuse LOAD DATA LOCAL statements (regardless of how client programs and libraries are configured at build time or runtime), start ```mysqld``` with local_infile disabled. local_infile can also be set at runtime. +Due to security issues associated with the local_infile flag, it's recommended to disable it. This recommendation is applicable to MySQL database instances. ++**Severity**: Medium ++### [Ensure that the log metric filter and alerts exist for Cloud Storage IAM permission changes](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/2e14266c-76ea-4479-915e-4edaae7d78ec) ++**Description**: It's recommended that a metric filter and alarm be established for Cloud Storage Bucket IAM changes. +Monitoring changes to cloud storage bucket permissions might reduce the time needed to detect and correct permissions on sensitive cloud storage buckets and objects inside the bucket. ++**Severity**: Low ++### [Ensure that the log metric filter and alerts exist for SQL instance configuration changes](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/9dce022e-f7f9-4725-8a63-c0d4a868b4d3) ++**Description**: It's recommended that a metric filter and alarm be established for SQL instance configuration changes. +Monitoring SQL instance configuration changes might reduce the time needed to detect and correct misconfigurations on the SQL server. +Below are a few of the configurable options that might impact the security posture of an SQL instance: ++- Enable auto backups and high availability: Misconfiguration might adversely impact business continuity, disaster recovery, and high availability +- Authorize networks: Misconfiguration might increase exposure to untrusted networks ++**Severity**: Low ++### [Ensure that there are only GCP-managed service account keys for each service account](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/6991b2e9-ae9e-4e99-acb6-037c4b575215) ++**Description**: User-managed service accounts shouldn't have user-managed keys. + Anyone who has access to the keys will be able to access resources through the service account. GCP-managed keys are used by Cloud Platform services such as App Engine and Compute Engine. These keys can't be downloaded. Google keeps the keys and automatically rotates them on an approximately weekly basis. + User-managed keys are created, downloadable, and managed by users. They expire 10 years from creation.
+For user-managed keys, the user has to take ownership of key management activities, which include: ++- Key storage +- Key distribution +- Key revocation +- Key rotation +- Key protection from unauthorized users +- Key recovery ++Even with key owner precautions, keys can be easily leaked by less-than-optimal common development practices, like checking keys into the source code, leaving them in the Downloads directory, or accidentally leaving them on support blogs/channels. It's recommended to prevent user-managed service account keys. ++**Severity**: Low ++### [Ensure 'user connections' database flag for Cloud SQL SQL Server instance is set as appropriate](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/91f55b07-083c-4ec5-a2be-4b52bbc2e2df) ++**Description**: It's recommended to set the "user connections" database flag for a Cloud SQL SQL Server instance to an organization-defined value. + The "user connections" option specifies the maximum number of simultaneous user connections that are allowed on an instance of SQL Server. + The actual number of user connections allowed also depends on the version of SQL Server that you're using, and the limits of your application or applications and hardware. + SQL Server allows a maximum of 32,767 user connections. + Because user connections are a dynamic (self-configuring) option, SQL Server adjusts the maximum number of user connections automatically as needed, up to the maximum value allowable. + For example, if only 10 users are logged in, 10 user connection objects are allocated. + In most cases, you don't have to change the value for this option. + The default is 0, which means that the maximum (32,767) user connections are allowed. + This recommendation is applicable to SQL Server database instances. ++**Severity**: Low ++### [Ensure 'user options' database flag for Cloud SQL SQL Server instance is not configured](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/fab1e680-86f0-4616-bee9-1b7394e49ade) ++**Description**: It's recommended that the "user options" database flag for a Cloud SQL SQL Server instance not be configured. + The "user options" option specifies global defaults for all users. + A list of default query processing options is established for the duration of a user's work session. + The user options setting allows you to change the default values of the SET options (if the server's default settings aren't appropriate). + A user can override these defaults by using the SET statement. + You can configure user options dynamically for new logins. + After you change the setting of user options, new login sessions use the new setting; current login sessions aren't affected. + This recommendation is applicable to SQL Server database instances. ++**Severity**: Low ++### [Logging for GKE clusters should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/fa160a2c-e976-41cb-acff-1e1e3f1ed032) ++**Description**: This recommendation evaluates whether the loggingService property of a cluster contains the location Cloud Logging should use to write logs.
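+
+For illustration only (the cluster name and location are placeholders), the loggingService configuration could be checked, and Cloud Logging enabled, with the gcloud CLI:
+
+```bash
+# Show which logging service the cluster writes to ("none" means logging is disabled).
+gcloud container clusters describe my-cluster --location=us-central1 --format="value(loggingService)"
+
+# Enable Cloud Logging for system and workload logs.
+gcloud container clusters update my-cluster --location=us-central1 --logging=SYSTEM,WORKLOAD
+```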
++**Severity**: High ++### [Object versioning should be enabled on storage buckets where sinks are configured](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e836b239-c7dc-476a-9a85-829b565cbc59) ++**Description**: This recommendation evaluates whether the enabled field in the bucket's versioning property is set to true. ++**Severity**: High ++### [Over-provisioned identities in projects should be investigated to reduce the Permission Creep Index (PCI)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/a6cd9b98-3b29-4213-b880-43f0b0897b83) ++**Description**: Over-provisioned identities in projects should be investigated to reduce the Permission Creep Index (PCI) and to safeguard your infrastructure. Reduce the PCI by removing the unused high risk permission assignments. High PCI reflects risk associated with the identities with permissions that exceed their normal or required usage. ++**Severity**: Medium ++### [Projects that have cryptographic keys should not have users with Owner permissions](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/986fe72e-466a-462d-a06e-c77b439c53c0) ++**Description**: This recommendation evaluates the IAM allow policy in project metadata for principals assigned roles/Owner. ++**Severity**: Medium ++### [Storage buckets used as a log sink should not be publicly accessible](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/76261631-76ea-4bd4-b064-34a619be1de0) ++**Description**: This recommendation evaluates the IAM policy of a bucket for the principals allUsers or allAuthenticatedUsers, which grant public access. ++**Severity**: High ++## Related content ++- [Learn about security recommendations](security-policy-concept.md) +- [Review security recommendations](review-security-recommendations.md) |
defender-for-cloud | Recommendations Reference Deprecated | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference-deprecated.md | + + Title: Reference table for all deprecated security recommendations in Microsoft Defender for Cloud +description: This article lists all Microsoft Defender for Cloud deprecated security recommendations that help you harden and protect your resources. +++ Last updated : 03/13/2024+++ai-usage: ai-assisted +++# Deprecated security recommendations ++This article lists all the deprecated security recommendations in Microsoft Defender for Cloud. +++## Azure deprecated recommendations ++### Access to App Services should be restricted ++**Description & related policy**: Restrict access to your App Services by changing the networking configuration, to deny inbound traffic from ranges that are too broad. +(Related policy: [Preview]: Access to App Services should be restricted). ++**Severity**: High ++### [Endpoint protection health issues on machines should be resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/37a3689a-818e-4a0e-82ac-b1392b9bb000) ++**Description**: Resolve endpoint protection health issues on your virtual machines to protect them from latest threats and vulnerabilities. See the documentation for the [endpoint protection solutions supported by Defender for Cloud](/azure/defender-for-cloud/supported-machines-endpoint-solutions-clouds#supported-endpoint-protection-solutions-) and the [endpoint protection assessments](/azure/defender-for-cloud/endpoint-protection-recommendations-technical). +(No related policy) ++**Severity**: Medium ++### [Endpoint protection should be installed on machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/4fb67663-9ab9-475d-b026-8c544cced439) ++**Description**: To protect machines from threats and vulnerabilities, install a supported endpoint protection solution. +Learn more about how endpoint protection for machines is evaluated in [Endpoint protection assessment and recommendations in Microsoft Defender for Cloud](/azure/defender-for-cloud/endpoint-protection-recommendations-technical). +(No related policy) ++**Severity**: High ++### Install Azure Security Center for IoT security module to get more visibility into your IoT devices ++**Description & related policy**: Install Azure Security Center for IoT security module to get more visibility into your IoT devices. ++**Severity**: Low ++### Java should be updated to the latest version for function apps ++**Description & related policy**: Periodically, newer versions are released for Java software either due to security flaws or to include additional functionality. +Using the latest Java version for function apps is recommended to benefit from security fixes, if any, and/or new functionalities of the latest version. +(Related policy: Ensure that 'Java version' is the latest, if used as a part of the Function app). ++**Severity**: Medium ++### Java should be updated to the latest version for web apps ++**Description & related policy**: Periodically, newer versions are released for Java software either due to security flaws or to include additional functionality. +Using the latest Java version for web apps is recommended to benefit from security fixes, if any, and/or new functionalities of the latest version. +(Related policy: Ensure that 'Java version' is the latest, if used as a part of the Web app). 
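+
+A minimal sketch (resource group, app name, and the version value are placeholders, not from the original text) of checking and updating the Java version with the Azure CLI:
+
+```bash
+# Show the Java version currently configured for the web app.
+az webapp config show --resource-group my-rg --name my-webapp --query javaVersion
+
+# Move the web app to a newer Java version (the value shown is only an example).
+az webapp config set --resource-group my-rg --name my-webapp --java-version "17"
+```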
++**Severity**: Medium ++### Monitoring agent should be installed on your machines ++**Description & related policy**: This action installs a monitoring agent on the selected virtual machines. Select a workspace for the agent to report to. +(No related policy) ++**Severity**: High ++### PHP should be updated to the latest version for web apps ++**Description & related policy**: Periodically, newer versions are released for PHP software either due to security flaws or to include additional functionality. +Using the latest PHP version for web apps is recommended to benefit from security fixes, if any, and/or new functionalities of the latest version. +(Related policy: Ensure that 'PHP version' is the latest, if used as a part of the WEB app). ++**Severity**: Medium ++### Pod Security Policies should be defined to reduce the attack vector by removing unnecessary application privileges (Preview) ++**Description & related policy**: Define Pod Security Policies to reduce the attack vector by removing unnecessary application privileges. It is recommended to configure pod security policies so pods can only access resources which they are allowed to access. +(Related policy: [Preview]: Pod Security Policies should be defined on Kubernetes Services). ++**Severity**: Medium ++### [Public network access should be disabled for Cognitive Services accounts](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/684a5b6d-a270-61ce-306e-5cea400dc3a7) ++**Description**: This policy audits any Cognitive Services account in your environment with public network access enabled. Public network access should be disabled so that only connections from private endpoints are allowed. +(Related policy: [Public network access should be disabled for Cognitive Services accounts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f0725b4dd-7e76-479c-a735-68e7ee23d5ca)). ++**Severity**: Medium ++### Python should be updated to the latest version for function apps ++**Description & related policy**: Periodically, newer versions are released for Python software either due to security flaws or to include additional functionality. +Using the latest Python version for function apps is recommended to benefit from security fixes, if any, and/or new functionalities of the latest version. +(Related policy: Ensure that 'Python version' is the latest, if used as a part of the Function app). ++**Severity**: Medium ++### Python should be updated to the latest version for web apps ++**Description & related policy**: Periodically, newer versions are released for Python software either due to security flaws or to include additional functionality. +Using the latest Python version for web apps is recommended to benefit from security fixes, if any, and/or new functionalities of the latest version. +(Related policy: Ensure that 'Python version' is the latest, if used as a part of the Web app). ++**Severity**: Medium ++### The rules for web applications on IaaS NSGs should be hardened ++**Description & related policy**: Harden the network security group (NSG) of your virtual machines that are running web applications, with NSG rules that are overly permissive with regard to web application ports. +(Related policy: The NSGs rules for web applications on IaaS should be hardened). 
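+
+As a hedged example (all resource names, address prefixes, and ports are placeholders), an overly permissive rule could be narrowed with the Azure CLI so that only a trusted address range can reach the web application port:
+
+```bash
+# Restrict the source of an existing NSG rule to a trusted range and a single web application port.
+az network nsg rule update --resource-group my-rg --nsg-name my-nsg --name allow-web --source-address-prefixes 203.0.113.0/24 --destination-port-ranges 443 --access Allow
+```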
++**Severity**: High ++### Your machines should be restarted to apply system updates ++**Description & related policy**: Restart your machines to apply the system updates and secure the machine from vulnerabilities. +(Related policy: System updates should be installed on your machines). ++**Severity**: Medium ++## Related content ++- [Learn about security recommendations](security-policy-concept.md) +- [Review security recommendations](review-security-recommendations.md) |
defender-for-cloud | Recommendations Reference Devops | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference-devops.md | Learn more about [DevOps security](defender-for-devops-introduction.md) benefits DevOps recommendations don't affect your [secure score](secure-score-security-controls.md). To decide which recommendations to resolve first, look at the severity of each recommendation and its potential impact on your secure score. -## DevOps recommendations -### Azure DevOps recommendations +## Azure DevOps recommendations ### [Azure DevOps repositories should have GitHub Advanced Security for Azure DevOps (GHAzDO) enabled](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsWithRulesBlade/assessmentKey/c7a934bf-7be6-407a-84d9-4f20e6e49592/showSecurityCenterCommandBar~/false) -**Description**: DevOps security in Defender for Cloud uses a central console to empower security teams with the ability to protect applications and resources from code to cloud across Azure DevOps. With enablement of GitHub Advanced Security for Azure DevOps (GHAzDO) repositories includes GitHub Advanced Security for Azure DevOps you get findings about secrets, dependencies, and code vulnerabilities in your Azure DevOps repositories surfaced in Microsoft Defender for Cloud. +**Description**: DevOps security in Defender for Cloud uses a central console to empower security teams with the ability to protect applications and resources from code to cloud across Azure DevOps. With enablement of GitHub Advanced Security for Azure DevOps (GHAzDO) repositories including GitHub Advanced Security for Azure DevOps, you get findings about secrets, dependencies, and code vulnerabilities in your Azure DevOps repositories surfaced in Microsoft Defender for Cloud. **Severity**: High ### [Azure DevOps repositories should have secret scanning findings resolved](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsWithRulesBlade/assessmentKey/b5ef903f-8655-473b-9784-4f749eeb25c6/showSecurityCenterCommandBar~/false) -**Description**: Secrets were found in code repositories. Remediate immediately to prevent a security breach. Secrets found in repositories can be leaked or discovered by adversaries, leading to compromise of an application or service. Note: The Microsoft Security DevOps credential scanning tool only scans builds on which it is configured to run. Therefore, results might not reflect the complete status of secrets in your repositories. +**Description**: Secrets were found in code repositories. Remediate immediately to prevent a security breach. Secrets found in repositories can leak, or be discovered by adversaries, leading to compromise of an application or service. The Microsoft Security DevOps credential scanning tool only scans builds on which it is configured to run. Therefore, results might not reflect the complete status of secrets in your repositories. **Severity**: High DevOps recommendations don't affect your [secure score](secure-score-security-co ### [Azure DevOps repositories should have dependency vulnerability scanning findings resolved](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsWithRulesBlade/assessmentKey/2ea72208-8558-4011-8dcd-d93375a4003d/showSecurityCenterCommandBar~/false) -**Description**: Dependency vulnerabilities have been found in code repositories. 
To improve the security posture of the repositories, it is highly recommended to remediate these vulnerabilities. +**Description**: Dependency vulnerabilities found in code repositories. To improve the security posture of the repositories, it is highly recommended to remediate these vulnerabilities. **Severity**: Medium ### [Azure DevOps repositories should have infrastructure as code scanning findings resolved](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsWithRulesBlade/assessmentKey/6588c4d4-fbbb-4fb8-be45-7c2de7dc1b3b/showSecurityCenterCommandBar~/false) -**Description**: Infrastructure as code security configuration issues have been found in repositories. The issues shown below have been detected in template files. To improve the security posture of the related cloud resources, it is highly recommended to remediate these issues. +**Description**: Infrastructure as code security configuration issues found in repositories. The issues were detected in template files. To improve the security posture of the related cloud resources, it is highly recommended to remediate these issues. **Severity**: Medium -### [Azure DevOps build pipelines shouldn't have secrets available to builds of forks](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/d5711372-9b5f-4926-a711-13dcf51565a6) +### [Azure DevOps pipelines shouldn't have secrets available to builds of forks](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/d5711372-9b5f-4926-a711-13dcf51565a6) **Description**: In public repositories, it's possible that people from outside the organization create forks and run builds on the forked repository. In such a case, if this setting is enabled, outsiders can get access to build pipeline secrets that were meant to be internal. DevOps recommendations don't affect your [secure score](secure-score-security-co ### [Azure DevOps service connections shouldn't grant access to all pipelines](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/9245366d-393f-49c5-b8e6-258b1b1c2daa) -**Description**: Service connections are used to create connections from Azure Pipelines to external and remote services for executing tasks in a job. Pipeline permissions control which pipelines are authorized to use the service connection. To support security of the pipeline operations, service connections shouldn't be granted access to all YAML pipelines. This helps to maintain the principle of least privilege because a vulnerability in components used by one pipeline can be leveraged by an attacker to attack other pipelines having access to critical resources. +**Description**: Service connections are used to create connections from Azure Pipelines to external and remote services for executing tasks in a job. Pipeline permissions control which pipelines are authorized to use the service connection. To support security of the pipeline operations, service connections shouldn't be granted access to all YAML pipelines. This helps to maintain the principle of least privilege because a vulnerability in components used by one pipeline can be used by an attacker to attack other pipelines with access to critical resources. 
**Severity**: High DevOps recommendations don't affect your [secure score](secure-score-security-co ### [(Preview) Azure DevOps repositories should have API security testing findings resolved](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsWithRulesBlade/assessmentKey/d42301a5-4d23-4457-97c8-f2f2e9eb979e) -**Description**: API security vulnerabilities have been found in code repositories. To improve the security posture of the repositories, it is highly recommended to remediate these vulnerabilities. +**Description**: API security vulnerabilities found in code repositories. To improve the security posture of the repositories, it is highly recommended to remediate these vulnerabilities. **Severity**: Medium ### [(Preview) Azure DevOps repositories should require minimum two-reviewer approval for code pushes](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/470742ea-324a-406c-b91f-fc1da6a27c0c) -**Description**: To prevent unintended or malicious changes from being directly committed, it's important to implement protection policies for the default branch in Azure DevOps repositories. We recommend requiring at least two code reviewers to approve pull requests before the code is merged with the default branch. By requiring approval from a minimum number of two reviewers, you can reduce the risk of unauthorized modifications, which could lead to system instability or security vulnerabilities. +**Description**: To prevent unintended or malicious changes from being directly committed, it's important to implement protection policies for the default branch in Azure DevOps repositories. We recommend requiring at least two code reviewers to approve pull requests before the code is merged with the default branch. By requiring approval from a minimum number of two reviewers, you can reduce the risk of unauthorized modifications, which could lead to system instability or security vulnerabilities.<br/><br/> This recommendation is provided in Defender for Cloud foundational security posture, if you have connected Azure DevOps to Defender for Cloud. **Severity**: High ### [(Preview) Azure DevOps repositories should not allow requestors to approve their own Pull Requests](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/98b5895a-0ad8-4ed9-8c9d-d654f5bda816) -**Description**: To prevent unintended or malicious changes from being directly committed, it's important to implement protection policies for the default branch in Azure DevOps repositories. We recommend prohibiting pull request creators from approving their own submissions to ensure that every change undergoes objective review by someone other than the author. By doing this, you can reduce the risk of unauthorized modifications, which could lead to system instability or security vulnerabilities. +**Description**: To prevent unintended or malicious changes from being directly committed, it's important to implement protection policies for the default branch in Azure DevOps repositories. We recommend prohibiting pull request creators from approving their own submissions to ensure that every change undergoes objective review by someone other than the author. 
By doing this, you can reduce the risk of unauthorized modifications, which could lead to system instability or security vulnerabilities.<br/><br/> This recommendation is provided in Defender for Cloud foundational security posture, if you have connected Azure DevOps to Defender for Cloud. **Severity**: High -### GitHub recommendations +## GitHub recommendations ++### [GitHub organizations should not make action secrets accessible to all repositories](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/6331fad3-a7a2-497d-b616-52672057e0f3) ++**Description**: For secrets used in GitHub Action workflows that are stored at the GitHub organization-level, you can use access policies to control which repositories can use organizational secrets. Organization-level secrets let you share secrets between multiple repositories. This reduces the need to create duplicate secrets. However, once a secret is made accessible to a repository, anyone with write access on repository can access the secret from any branch in a workflow. To reduce the attack surface, ensure that the secret is accessible from selected repositories only.<br/><br/> This recommendation is provided in Defender for Cloud foundational security posture, if you have connected Azure DevOps to Defender for Cloud. ++**Severity**: High ### [GitHub repositories should have secret scanning enabled](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsWithRulesBlade/assessmentKey/b6ad173c-0cc6-4d44-b954-8217c8837a8e/showSecurityCenterCommandBar~/false) DevOps recommendations don't affect your [secure score](secure-score-security-co ### [GitHub repositories should have secret scanning findings resolved](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsWithRulesBlade/assessmentKey/dd98425c-1407-40cc-8a2c-da5d0a2f80da/showSecurityCenterCommandBar~/false) -**Description**: Secrets have been found in code repositories. This should be remediated immediately to prevent a security breach. Secrets found in repositories can be leaked or discovered by adversaries, leading to compromise of an application or service. +**Description**: Secrets found in code repositories. This should be remediated immediately to prevent a security breach. Secrets found in repositories can be leaked or discovered by adversaries, leading to compromise of an application or service. **Severity**: High ### [GitHub repositories should have code scanning findings resolved](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsWithRulesBlade/assessmentKey/18aa4e75-776a-4296-97f0-fe1cf10d679c/showSecurityCenterCommandBar~/false) -**Description**: Vulnerabilities have been found in code repositories. To improve the security posture of the repositories, it is highly recommended to remediate these vulnerabilities. +**Description**: Vulnerabilities found in code repositories. To improve the security posture of the repositories, it is highly recommended to remediate these vulnerabilities. 
**Severity**: Medium DevOps recommendations don't affect your [secure score](secure-score-security-co ### [GitHub repositories should have infrastructure as code scanning findings resolved](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsWithRulesBlade/assessmentKey/d9be0ff8-3eb0-4348-82f6-c1e735f85983/showSecurityCenterCommandBar~/false) -**Description**: Infrastructure as code security configuration issues have been found in repositories. The issues shown below have been detected in template files. To improve the security posture of the related cloud resources, it is highly recommended to remediate these issues. +**Description**: Infrastructure as code security configuration issues were found in repositories. The issues were detected in template files. To improve the security posture of the related cloud resources, it is highly recommended to remediate these issues. **Severity**: Medium DevOps recommendations don't affect your [secure score](secure-score-security-co ### [(Preview) GitHub repositories should have API security testing findings resolved](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsWithRulesBlade/assessmentKey/7ad00833-a0f0-47b9-b377-5665bd5d9074/showSecurityCenterCommandBar~/false) -**Description**: API security vulnerabilities have been found in code repositories. To improve the security posture of the repositories, it is highly recommended to remediate these vulnerabilities. +**Description**: API security vulnerabilities were found in code repositories. To improve the security posture of the repositories, it is highly recommended to remediate these vulnerabilities. **Severity**: Medium ### [(Preview) GitHub organizations should not make action secrets accessible to all repositories](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/6331fad3-a7a2-497d-b616-52672057e0f3) -**Description**: For secrets used in GitHub Action workflows that are stored at the GitHub organization-level, you can use access policies to control which repositories can use organization secrets. Organization-level secrets let you share secrets between multiple repositories, which reduces the need for creating duplicate secrets. However, once a secret is made accessible to a repository, anyone with write access on repository can access the secret from any branch in a workflow. To reduce the attack surface, ensure that the secret is accessible from selected repositories only. +**Description**: For secrets used in GitHub Action workflows that are stored at the GitHub organization-level, you can use access policies to control which repositories can use organization secrets. Organization-level secrets let you share secrets between multiple repositories, reducing the need to create duplicate secrets. However, when a secret is made accessible to a repository, anyone with write access on repository can access the secret from any branch in a workflow. To reduce the attack surface, ensure that the secret is accessible from selected repositories only. **Severity**: High DevOps recommendations don't affect your [secure score](secure-score-security-co ### [GitLab projects should have secret scanning findings resolved](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsWithRulesBlade/assessmentKey/867001c3-2d01-4db7-b513-5cb97638f23d/showSecurityCenterCommandBar~/false) -**Description**: Secrets have been found in code repositories. 
This should be remediated immediately to prevent a security breach. Secrets found in repositories can be leaked or discovered by adversaries, leading to compromise of an application or service. +**Description**: Secrets were found in code repositories. This should be remediated immediately to prevent a security breach. Secrets found in repositories can be leaked or discovered by adversaries, leading to compromise of an application or service. **Severity**: High ### [GitLab projects should have code scanning findings resolved](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsWithRulesBlade/assessmentKey/cd3e4ff3-b1bc-4a42-b10d-e2f9f99e2991/showSecurityCenterCommandBar~/false) -**Description**: Vulnerabilities have been found in code repositories. To improve the security posture of the repositories, it is highly recommended to remediate these vulnerabilities. +**Description**: Vulnerabilities were found in code repositories. To improve the security posture of the repositories, it is highly recommended to remediate these vulnerabilities. **Severity**: Medium DevOps recommendations don't affect your [secure score](secure-score-security-co ### [GitLab projects should have infrastructure as code scanning findings resolved](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsWithRulesBlade/assessmentKey/ec1bface-60ff-46b6-b1dc-67171a4882d5/showSecurityCenterCommandBar~/false) -**Description**: Infrastructure as code security configuration issues have been found in repositories. The issues shown below have been detected in template files. To improve the security posture of the related cloud resources, it is highly recommended to remediate these issues. +**Description**: Infrastructure as code security configuration issues were found in repositories. The issues shown were detected in template files. To improve the security posture of the related cloud resources, it is highly recommended to remediate these issues. **Severity**: Medium DevOps recommendations don't affect your [secure score](secure-score-security-co ### [Code repositories should have infrastructure as code scanning findings resolved](https://portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/2ebc815f-7bc7-4573-994d-e1cc46fb4a35/showSecurityCenterCommandBar~/false) -**Description**: DevOps security in Defender for Cloud has found infrastructure as code security configuration issues in repositories. The issues shown below have been detected in template files. To improve the security posture of the related cloud resources, it is highly recommended to remediate these issues. +**Description**: DevOps security in Defender for Cloud has found infrastructure as code security configuration issues in repositories. The issues shown were detected in template files. To improve the security posture of the related cloud resources, it is highly recommended to remediate these issues. (No related policy) **Severity**: Medium DevOps recommendations don't affect your [secure score](secure-score-security-co ## Related content -- [What are security policies, initiatives, and recommendations?](security-policy-concept.md)-- [Review your security recommendations](review-security-recommendations.md)+- [Learn about security recommendations](security-policy-concept.md) +- [Review security recommendations](review-security-recommendations.md) |
defender-for-cloud | Recommendations Reference Gcp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference-gcp.md | - Title: Reference table for all security recommendations for GCP resources -description: This article lists all Microsoft Defender for Cloud security recommendations that help you harden and protect your Google Cloud Platform (GCP) resources. - Previously updated : 06/09/2024--ai-usage: ai-assisted ---# Security recommendations for Google Cloud Platform (GCP) resources --This article lists all the recommendations you might see in Microsoft Defender for Cloud if you connect a Google Cloud Platform (GCP) account by using the **Environment settings** page. The recommendations that appear in your environment are based on the resources that you're protecting and on your customized configuration. --To learn about actions that you can take in response to these recommendations, see [Remediate recommendations in Defender for Cloud](implement-security-recommendations.md). --Your secure score is based on the number of security recommendations you completed. To decide which recommendations to resolve first, look at the severity of each recommendation and its potential effect on your secure score. --## GCP Compute recommendations --### [Compute Engine VMs should use the Container-Optimized OS](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/3e33004b-f0b8-488d-85ed-61336c7ad4ca) --**Description**: This recommendation evaluates the config property of a node pool for the key-value pair, 'imageType': 'COS.' --**Severity**: Low --### [EDR configuration issues should be resolved on GCP virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/f36a15fb-61a6-428c-b719-6319538ecfbc) --**Description**: To protect virtual machines from the latest threats and vulnerabilities, resolve a |