Updates from: 01/08/2024 02:07:30
Service Microsoft Docs article Related commit history on GitHub Change details
azure-cache-for-redis Cache Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-private-link.md
You can restrict public access to the private endpoint of your cache by disablin
> > When using the Basic tier, you might experience data loss when you delete and recreate a private endpoint.
+## Scope of availability
+
+|Tier | Basic, Standard, Premium |Enterprise, Enterprise Flash |
+| --- | --- | --- |
+|Available | Yes | Yes |
## Prerequisites

- Azure subscription - [create one for free](https://azure.microsoft.com/free/)

> [!IMPORTANT]
-> Currently, portal console support, and persistence to firewall storage accounts are not supported.
-> When using private link, you cannot export or import a cache that to a [storage account](/azure/storage/common/storage-network-security) that has firewall enabled.
+> Currently, the [portal-based Redis console](cache-configure.md#redis-console) isn't supported with private link.
+>
+
+> [!IMPORTANT]
+> When using private link, you can't export or import data to a storage account that has firewall enabled unless you're using [managed identity to authenticate to the storage account](cache-managed-identity.md).
+> For more information, see [How to export if I have firewall enabled on my storage account?](cache-how-to-import-export-data.md#how-to-export-if-i-have-firewall-enabled-on-my-storage-account)
> ## Create a private endpoint with a new Azure Cache for Redis instance
az network private-endpoint delete --name MyPrivateEndpoint --resource-group MyR
### How do I connect to my cache with private endpoint?
-Your application should connect to `<cachename>.redis.cache.windows.net` on port `6380`. We recommend avoiding the use of `<cachename>.privatelink.redis.cache.windows.net` in configuration or connection string.
+For **Basic, Standard, and Premium tier** caches, your application should connect to `<cachename>.redis.cache.windows.net` on port `6380`. A private DNS zone, named `*.privatelink.redis.cache.windows.net`, is automatically created in your subscription. The private DNS zone is vital for establishing the TLS connection with the private endpoint. We recommend avoiding the use of `<cachename>.privatelink.redis.cache.windows.net` in your configuration or connection string.
-A private DNS zone, named `*.privatelink.redis.cache.windows.net`, is automatically created in your subscription. The private DNS zone is vital for establishing the TLS connection with the private endpoint.
+For **Enterprise and Enterprise Flash** tier caches, your application should connect to `<cachename>.<region>.redisenterprise.cache.azure.net` on port `10000`.
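To make that concrete, here's a minimal connection sketch in C# using the StackExchange.Redis client. The cache name and access key are placeholders, and the client library is an assumption for illustration rather than a requirement of this article:

```csharp
using System;
using StackExchange.Redis;

class PrivateEndpointConnectExample
{
    static void Main()
    {
        // Basic, Standard, and Premium tiers: TLS on port 6380.
        // (Enterprise tiers would instead use
        // <cachename>.<region>.redisenterprise.cache.azure.net:10000.)
        var options = ConfigurationOptions.Parse("contosocache.redis.cache.windows.net:6380");
        options.Ssl = true;
        options.Password = "<access key>";   // placeholder

        using ConnectionMultiplexer connection = ConnectionMultiplexer.Connect(options);
        IDatabase db = connection.GetDatabase();

        db.StringSet("greeting", "hello over a private endpoint");
        Console.WriteLine(db.StringGet("greeting"));
    }
}
```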
For more information, see [Azure services DNS zone configuration](../private-link/private-endpoint-dns.md).

### Why can't I connect to a private endpoint?

-- Private endpoints can't be used with your cache instance if your cache is already a VNet injected cache.
+- Private endpoints can't be used with your cache instance if your cache is already using the VNet injection network connection method.
- You have a limit of one private link for clustered caches. For all other caches, your limit is 100 private links.
-- You try to [persist data to storage account](cache-how-to-premium-persistence.md) where firewall rules are applied might prevent you from creating the Private Link.
+- You try to [persist data to a storage account](cache-how-to-premium-persistence.md) with firewall rules and you're not using managed identity to connect to the storage account.
- You might not connect to your private endpoint if your cache instance is using an [unsupported feature](#what-features-arent-supported-with-private-endpoints).

### What features aren't supported with private endpoints?

- Trying to connect from the Azure portal console is an unsupported scenario where you see a connection failure.
-- Private links can't be added to caches that are already geo-replicated. To add a private link to a geo-replicated cache: 1. Unlink the geo-replication. 2. Add a Private Link. 3. Last, relink the geo-replication.
+- Private links can't be added to Premium tier caches that are already geo-replicated. To add a private link to a cache using [passive geo-replication](cache-how-to-geo-replication.md):
+  1. Unlink the geo-replication.
+  2. Add a private link.
+  3. Finally, relink the geo-replication.
### How do I verify if my private endpoint is configured correctly?
To change the value in the Azure portal, follow these steps:
1. Select the **Enable public network access** button.
-To change the value through a RESTful API PATCH request, use the following code and edit the value to reflect the flag you want for your cache.
+You can also change the value through a RESTful API PATCH request. For example, use the following code for a Basic, Standard, or Premium tier cache and edit the value to reflect the flag you want for your cache.
```http
PATCH https://management.azure.com/subscriptions/{subscription}/resourceGroups/{resourcegroup}/providers/Microsoft.Cache/Redis/{cache}?api-version=2020-06-01
To change the value through a RESTful API PATCH request, use the following code
}
```
- For more information, see [Redis - Update] (/rest/api/redis/Redis/Update?tabs=HTTP).
+ For more information, see [Redis - Update](/rest/api/redis/Redis/Update?tabs=HTTP).
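If you prefer to make the same call from code, the following is a hedged C# sketch with `HttpClient`; the token, subscription, resource group, and cache name are placeholders, and the body assumes the `properties.publicNetworkAccess` flag accepts `Enabled` or `Disabled`:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class TogglePublicNetworkAccess
{
    static async Task Main()
    {
        string url = "https://management.azure.com/subscriptions/<subscription>" +
                     "/resourceGroups/<resourcegroup>/providers/Microsoft.Cache" +
                     "/Redis/<cache>?api-version=2020-06-01";

        using var client = new HttpClient();
        // <ARM access token> is a placeholder for a valid management token.
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", "<ARM access token>");

        var body = new StringContent(
            "{ \"properties\": { \"publicNetworkAccess\": \"Disabled\" } }",
            Encoding.UTF8, "application/json");

        HttpResponseMessage response = await client.PatchAsync(url, body);
        Console.WriteLine(response.StatusCode);
    }
}
```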
### How can I migrate my VNet injected cache to a Private Link cache?
azure-functions Functions Create Maven Kotlin Intellij https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-maven-kotlin-intellij.md
description: Learn how to use IntelliJ to create a simple HTTP-triggered Kotlin
Previously updated : 03/25/2020 Last updated : 01/07/2024 ms.devlang: kotlin
azure-monitor Autoscale Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-best-practices.md
description: Autoscale patterns in the Web Apps feature of Azure App Service, Az
Previously updated : 09/13/2022 Last updated : 01/07/2024
azure-monitor Container Insights Custom Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-custom-metrics.md
This process assigns the *Monitoring Metrics Publisher* role to the cluster's se
### Prerequisites
-Before you update your cluster:
--- See the supported regions for custom metrics at [Supported regions](../essentials/metrics-custom-overview.md#supported-regions).-- Confirm that you're a member of the [Owner](../../role-based-access-control/built-in-roles.md#owner) role on the AKS cluster resource to enable collection of custom performance metrics for nodes and pods. This requirement doesn't apply to Azure Arc-enabled Kubernetes clusters.
+Before you update your cluster, confirm that you're a member of the [Owner](../../role-based-access-control/built-in-roles.md#owner) role on the AKS cluster resource to enable collection of custom performance metrics for nodes and pods. This requirement doesn't apply to Azure Arc-enabled Kubernetes clusters.
### Enablement options

Use one of the following methods to enable custom metrics for either a single cluster or all clusters in your subscription.
azure-monitor Container Insights Metric Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-metric-alerts.md
The configuration change can take a few minutes to finish before it takes effect
### Prerequisites
- - You might need to enable collection of custom metrics for your cluster. See [Metrics collected by Container insights](container-insights-custom-metrics.md).
- - See the supported regions for custom metrics at [Supported regions](../essentials/metrics-custom-overview.md#supported-regions).
-
+You might need to enable collection of custom metrics for your cluster. See [Metrics collected by Container insights](container-insights-custom-metrics.md).
+
### Enable and configure metric alert rules

#### [Azure portal](#tab/azure-portal)
azure-monitor Container Insights Region Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-region-mapping.md
Supported AKS regions are listed in [Products available by region](https://azure
|WestCentralUS<sup>1</sup>|EastUS | -
-## Custom metrics supported regions
-
-Collecting metrics from Azure Kubernetes Services (AKS) clusters nodes and pods are supported for publishing as custom metrics only in the following [Azure regions](../essentials/metrics-custom-overview.md#supported-regions).
-
## Next steps

To begin monitoring your AKS cluster, review [How to enable Container insights](container-insights-onboard.md) to understand the requirements and available methods to enable monitoring.
azure-monitor Container Insights Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-troubleshoot.md
To view the non-AKS cluster in Container insights, read access is required on th
## Metrics aren't being collected
-1. Verify that the cluster is in a [supported region for custom metrics](../essentials/metrics-custom-overview.md#supported-regions).
-
1. Verify that the **Monitoring Metrics Publisher** role assignment exists by using the following CLI command:

   ```azurecli
azure-monitor App Insights Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/app-insights-metrics.md
Application Insights log-based metrics let you analyze the health of your monito
* [Log-based metrics](../app/pre-aggregated-metrics-log-metrics.md#log-based-metrics) are translated behind the scenes into [Kusto queries](/azure/kusto/query/) from stored events.
* [Standard metrics](../app/pre-aggregated-metrics-log-metrics.md#pre-aggregated-metrics) are stored as pre-aggregated time series.
-Since *standard metrics* are pre-aggregated during collection, they have better performance at query time. This makes them a better choice for dashboarding and in real-time alerting. The *log-based metrics* have more dimensions, which makes them the superior option for data analysis and ad-hoc diagnostics. Use the [namespace selector](./metrics-custom-overview.md#namespace) to switch between log-based and standard metrics in [metrics explorer](./analyze-metrics.md).
+Since *standard metrics* are pre-aggregated during collection, they have better performance at query time. This makes them a better choice for dashboarding and real-time alerting. The *log-based metrics* have more dimensions, which makes them the superior option for data analysis and ad-hoc diagnostics. Use the [namespace selector](./metrics-store-custom-rest-api.md#namespace) to switch between log-based and standard metrics in [metrics explorer](./analyze-metrics.md).
## Interpret and use queries from this article
azure-monitor Collect Custom Metrics Guestos Resource Manager Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-guestos-resource-manager-vm.md
If you're new to ARM templates, learn about [template deployments](../../azure-r
- Your subscription must be registered with [Microsoft.Insights](../../azure-resource-manager/management/resource-providers-and-types.md).
- You need to have either [Azure PowerShell](/powershell/azure) or [Azure Cloud Shell](../../cloud-shell/overview.md) installed.
-- Your VM resource must be in a [region that supports custom metrics](./metrics-custom-overview.md#supported-regions).

## Set up Azure Monitor as a data sink

The Azure Diagnostics extension uses a feature called *data sinks* to route metrics and logs to different locations. The following steps show how to use an ARM template and PowerShell to deploy a VM by using the new Azure Monitor data sink.
azure-monitor Collect Custom Metrics Guestos Resource Manager Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-guestos-resource-manager-vmss.md
If you're new to Resource Manager templates, learn about [template deployments](
- You need to have [Azure PowerShell](/powershell/azure) installed, or you can use [Azure Cloud Shell](../../cloud-shell/overview.md).
-- Your VM resource must be in a [region that supports custom metrics](./metrics-custom-overview.md#supported-regions).
-
## Set up Azure Monitor as a data sink

The Azure Diagnostics extension uses a feature called **data sinks** to route metrics and logs to different locations. The following steps show how to use a Resource Manager template and PowerShell to deploy a VM by using the new Azure Monitor data sink.
To deploy the Resource Manager template, use Azure PowerShell:
New-AzResourceGroup -Name "VMSSWADtestGrp" -Location "<Azure Region>" ```
- > [!NOTE]
- > Remember to use an Azure region that's enabled for custom metrics. Remember to use an [Azure region that's enabled for custom metrics](./metrics-custom-overview.md#supported-regions).
- 1. Run the following commands to deploy the VM: > [!NOTE]
azure-monitor Collect Custom Metrics Guestos Vm Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-guestos-vm-classic.md
The process that's outlined in this article only works on classic virtual machin
- You need to have either [Azure PowerShell](/powershell/azure) or [Azure Cloud Shell](../../cloud-shell/overview.md) installed.
-- Your VM resource must be in a [region that supports custom metrics](./metrics-custom-overview.md#supported-regions).
-
## Create a classic virtual machine and storage account

1. Create a classic VM by using the Azure portal.
azure-monitor Collect Custom Metrics Guestos Vm Cloud Service Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-guestos-vm-cloud-service-classic.md
The process that's outlined in this article works only for performance counters
- You need to have either [Azure PowerShell](/powershell/azure) or [Azure Cloud Shell](../../cloud-shell/overview.md) installed.
-- Your Cloud Service must be in a [region that supports custom metrics](./metrics-custom-overview.md#supported-regions).
-
## Provision a cloud service and storage account

1. Create and deploy a classic cloud service. A sample classic Cloud Services application and deployment can be found at [Get started with Azure Cloud Services and ASP.NET](../../cloud-services/cloud-services-dotnet-get-started.md).
azure-monitor Metrics Custom Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-custom-overview.md
Previously updated : 06/01/2021 Last updated : 01/07/2024 # Custom metrics in Azure Monitor (preview)
-As you deploy resources and applications in Azure, start collecting telemetry to gain insights into their performance and health. Azure makes some metrics available to you out of the box. These metrics are called [standard or platform](./metrics-supported.md).
-
-Collect custom performance indicators or business-specific metrics to provide deeper insights. These *custom* metrics can be collected via your application telemetry, an agent that runs on your Azure resources, or even an outside-in monitoring system. They can then be submitted directly to Azure Monitor. Once custom metrics are published to Azure Monitor, you can browse, query, and alert on them for your Azure resources and applications along side the standard Azure metrics.
+Azure makes some metrics available to you out of the box. These metrics are called [standard or platform](./metrics-supported.md). Custom metrics are performance indicators or business-specific metrics that can be collected via your application's telemetry, the Azure Monitor Agent, a diagnostics extension that runs on your Azure resources, or an external monitoring system. Once custom metrics are published to Azure Monitor, you can browse, query, and alert on them alongside the standard Azure metrics.
Azure Monitor custom metrics are currently in public preview.
Azure Monitor custom metrics are currently in public preview.
Custom metrics can be sent to Azure Monitor via several methods:

-- Instrument your application by using the Azure Application Insights SDK and send custom telemetry to Azure Monitor.
-- Install the Azure Monitor agent (preview) on your [Windows or Linux Azure VM](../agents/azure-monitor-agent-overview.md). Use a [data collection rule](../agents/data-collection-rule-azure-monitor-agent.md) to send performance counters to Azure Monitor metrics.
+- Use Azure Application Insights SDK to instrument your application by sending custom telemetry to Azure Monitor.
+- Install the [Azure Monitor Agent](../agents/azure-monitor-agent-overview.md) on your Windows or Linux Azure virtual machine or virtual machine scale set and use a [data collection rule](../agents/data-collection-rule-azure-monitor-agent.md) to send performance counters to Azure Monitor metrics.
- Install the Azure Diagnostics extension on your [Azure VM](../essentials/collect-custom-metrics-guestos-resource-manager-vm.md), [Virtual Machine Scale Set](../essentials/collect-custom-metrics-guestos-resource-manager-vmss.md), [classic VM](../essentials/collect-custom-metrics-guestos-vm-classic.md), or [classic cloud service](../essentials/collect-custom-metrics-guestos-vm-cloud-service-classic.md). Then send performance counters to Azure Monitor.
- Install the [InfluxData Telegraf agent](../essentials/collect-custom-metrics-linux-telegraf.md) on your Azure Linux VM. Send metrics by using the Azure Monitor output plug-in.
-- Send custom metrics [directly to the Azure Monitor REST API](./metrics-store-custom-rest-api.md), `https://<azureregion>.monitoring.azure.com/<AzureResourceID>/metrics`.
+- Send custom metrics [directly to the Azure Monitor REST API](./metrics-store-custom-rest-api.md).
## Pricing model and retention
-For details on when billing is enabled for custom metrics and metrics queries, check the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/). In summary, there's no cost to ingest standard metrics (platform metrics) into an Azure Monitor metrics store, but custom metrics incur costs when they enter general availability. Queries to the metrics API do incur costs.
+In general, there's no cost to ingest standard metrics (platform metrics) into an Azure Monitor metrics store, but custom metrics incur costs when they enter general availability. Queries to the metrics API do incur costs. For details on when billing is enabled for custom metrics and metrics queries, check the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/).
Custom metrics are retained for the [same amount of time as platform metrics](../essentials/data-platform-metrics.md#retention-of-metrics).

> [!NOTE]
> Metrics sent to Azure Monitor via the Application Insights SDK are billed as ingested log data. They incur additional metrics charges only if the Application Insights feature [Enable alerting on custom metric dimensions](../app/pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation) has been selected. This checkbox sends data to the Azure Monitor metrics database by using the custom metrics API to allow the more complex alerting. Learn more about the [Application Insights pricing model](../cost-usage.md) and [prices in your region](https://azure.microsoft.com/pricing/details/monitor/).
-## How to send custom metrics
-
-When you send custom metrics to Azure Monitor, each data point, or value, reported in the metrics must include the following information.
-
-### Authentication
-
-To submit custom metrics to Azure Monitor, the entity that submits the metric needs a valid Microsoft Entra token in the **Bearer** header of the request. Supported ways to acquire a valid bearer token include:
--- [Managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md). You can use a managed identity to give resources permissions to carry out certain operations. An example is allowing a resource to emit metrics about itself. A resource, or its managed identity, can be granted **Monitoring Metrics Publisher** permissions on another resource. With this permission, the managed identity can also emit metrics for other resources.-- [Microsoft Entra service principal](../../active-directory/develop/app-objects-and-service-principals.md). In this scenario, a Microsoft Entra application, or service, can be assigned permissions to emit metrics about an Azure resource. To authenticate the request, Azure Monitor validates the application token by using Microsoft Entra public keys. The existing **Monitoring Metrics Publisher** role already has this permission. It's available in the Azure portal.-
- The service principal, depending on what resources it emits custom metrics for, can be given the **Monitoring Metrics Publisher** role at the scope required. Examples are a subscription, resource group, or specific resource.
-
-> [!TIP]
-> When you request a Microsoft Entra token to emit custom metrics, ensure that the audience or resource that the token is requested for is `https://monitoring.azure.com/`. Be sure to include the trailing slash.
-
-### Subject
-
-The subject property captures which Azure resource ID the custom metric is reported for. This information is encoded in the URL of the API call. Each API can submit metric values for only a single Azure resource.
-
-> [!NOTE]
-> You can't emit custom metrics against the resource ID of a resource group or subscription.
-
-### Region
-
-The region property captures the Azure region where the resource you're emitting metrics for is deployed. Metrics must be emitted to the same Azure Monitor regional endpoint as the region where the resource is deployed. For example, custom metrics for a VM deployed in West US must be sent to the WestUS regional Azure Monitor endpoint. The region information is also encoded in the URL of the API call.
-
-> [!NOTE]
-> During the public preview, custom metrics are available in only a subset of Azure regions. A list of supported regions is documented in a [later section of this article](#supported-regions).
-
-### Timestamp
-
-Each data point sent to Azure Monitor must be marked with a timestamp. This timestamp captures the date and time at which the metric value is measured or collected. Azure Monitor accepts metric data with timestamps as far as 20 minutes in the past and 5 minutes in the future. The timestamp must be in ISO 8601 format.
-
-### Namespace
-
-Namespaces are a way to categorize or group similar metrics together. By using namespaces, you can achieve isolation between groups of metrics that might collect different insights or performance indicators. For example, you might have a namespace called **contosomemorymetrics** that tracks memory-use metrics which profile your app. Another namespace called **contosoapptransaction** might track all metrics about user transactions in your application.
-
-### Name
-
-The name property is the name of the metric that's being reported. Usually, the name is descriptive enough to help identify what's measured. An example is a metric that measures the number of memory bytes used on a VM. It might have a metric name like **Memory Bytes In Use**.
-
-### Dimension keys
-
-A dimension is a key/value pair that helps describe other characteristics about the metric that's being collected. By using the other characteristics, you can collect more information about the metric, which allows for deeper insights.
-
-For example, the **Memory Bytes In Use** metric might have a dimension key called **Process** that captures how many bytes of memory each process on a VM consumes. By using this key, you can filter the metric to see how much memory specific processes use or to identify the top five processes by memory usage.
-
-Dimensions are optional, and not all metrics have dimensions. A custom metric can have up to 10 dimensions.
-
-### Dimension values
-
-When you're reporting a metric data point, for each dimension key on the reported metric, there's a corresponding dimension value. For example, you might want to report the memory that ContosoApp uses on your VM:
-
-* The metric name would be **Memory Bytes in Use**.
-* The dimension key would be **Process**.
-* The dimension value would be **ContosoApp.exe**.
-
-When you're publishing a metric value, you can specify only a single dimension value per dimension key. If you collect the same memory utilization for multiple processes on the VM, you can report multiple metric values for that timestamp. Each metric value would specify a different dimension value for the **Process** dimension key.
-
-Although dimensions are optional, if a metric post defines dimension keys, corresponding dimension values are mandatory.
-
-### Metric values
-
-Azure Monitor stores all metrics at 1-minute granularity intervals. During a given minute, a metric might need to be sampled several times. An example is CPU utilization. Or a metric might need to be measured for many discrete events, such as sign-in transaction latencies.
-
-To limit the number of raw values that you have to emit and pay for in Azure Monitor, locally pre-aggregate and emit the aggregated values:
-
-* **Min**: The minimum observed value from all the samples and measurements during the minute.
-* **Max**: The maximum observed value from all the samples and measurements during the minute.
-* **Sum**: The summation of all the observed values from all the samples and measurements during the minute.
-* **Count**: The number of samples and measurements taken during the minute.
-
-For example, if there were four sign-in transactions to your app during a minute, the resulting measured latencies for each might be:
-
-|Transaction 1|Transaction 2|Transaction 3|Transaction 4|
-|||||
-|7 ms|4 ms|13 ms|16 ms|
-
-Then the resulting metric publication to Azure Monitor would be:
-
-* Min: 4
-* Max: 16
-* Sum: 40
-* Count: 4
-
-If your application can't pre-aggregate locally and needs to emit each discrete sample or event immediately upon collection, you can emit the raw measure values. For example, each time a sign-in transaction occurs on your app, you publish a metric to Azure Monitor with only a single measurement. So, for a sign-in transaction that took 12 milliseconds, the metric publication would be:
-
-* Min: 12
-* Max: 12
-* Sum: 12
-* Count: 1
-
-With this process, you can emit multiple values for the same metric/dimension combination during a given minute. Azure Monitor then takes all the raw values emitted for a given minute and aggregates them.
-
-### Sample custom metric publication
-
-In the following example, you create a custom metric called **Memory Bytes in Use** under the metric namespace **Memory Profile** for a virtual machine. The metric has a single dimension called **Process**. For the timestamp, metric values are emitted for two processes.
-
-```json
-{
- "time": "2018-08-20T11:25:20-7:00",
- "data": {
-
- "baseData": {
+## Custom metric definitions
- "metric": "Memory Bytes in Use",
- "namespace": "Memory Profile",
- "dimNames": [
- "Process"
- ],
- "series": [
- {
- "dimValues": [
- "ContosoApp.exe"
- ],
- "min": 10,
- "max": 89,
- "sum": 190,
- "count": 4
- },
- {
- "dimValues": [
- "SalesApp.exe"
- ],
- "min": 10,
- "max": 23,
- "sum": 86,
- "count": 4
- }
- ]
- }
- }
- }
-```
+Each metric data point published contains a namespace, name, and dimension information. The first time a custom metric is emitted to Azure Monitor, a metric definition is automatically created. This new metric definition is then discoverable on any resource that the metric is emitted from via the metric definitions. There's no need to predefine a custom metric in Azure Monitor before it's emitted.
> [!NOTE]
> Application Insights, the diagnostics extension, and the InfluxData Telegraf agent are already configured to emit metric values against the correct regional endpoint and carry all the preceding properties in each emission.
-## Custom metric definitions
-
-Each metric data point published contains a namespace, name, and dimension information. The first time a custom metric is emitted to Azure Monitor, a metric definition is automatically created. This new metric definition is then discoverable on any resource that the metric is emitted from via the metric definitions. There's no need to predefine a custom metric in Azure Monitor before it's emitted.
-
-> [!NOTE]
-> Azure Monitor doesn't support defining **Units** for a custom metric.
## Using custom metrics
After custom metrics are submitted to Azure Monitor, you can browse through them
For more information on viewing metrics in the Azure portal, see [Analyze metrics with Azure Monitor metrics explorer](./analyze-metrics.md).
-## Supported regions
-
-During the public preview, the ability to publish custom metrics is available only in a subset of Azure regions. This restriction means that metrics can be published only for resources in one of the supported regions. For more information on Azure regions, see [Azure geographies](https://azure.microsoft.com/global-infrastructure/geographies/).
-
-The following table lists supported Azure regions for custom metrics. It also lists the corresponding endpoints that metrics for resources in those regions should be published to. The Azure region code used in the endpoint prefix is just the name of the region with whitespace removed.
-
-|Azure region |Regional endpoint prefix|
-|||
-| All Public Cloud Regions | https://<azure_region_code>.monitoring.azure.com |
## Latency and storage retention
To understand the limit of 50,000 on time series, consider the following metric:
> *Server response time* with Dimensions: *Region*, *Department*, *CustomerID*
-With this metric, if you have 10 regions, 20 departments, and 100 customers, that gives you 10 x 20 x 100 = 20,000 time series.
+With this metric, if you have 10 regions, 20 departments, and 100 customers, that gives you 10 x 20 x 100 = 20,000 time series.
If you have 100 regions, 200 departments, and 2,000 customers, that gives you 100 x 200 x 2,000 = 40 million time series, which is far over the limit just for this metric alone.
Follow the steps below to see your current total active time series metrics, and
1. Select the **Apply** button.
1. Choose either **Active Time Series**, **Active Time Series Limit**, or **Throttled Time Series**.
-There is a limit of 64 KB on the combined length of all custom metrics names, assuming utf-8 or 1 byte per character. If the 64-KB limit is exceeded, metadata for additional metrics won't be available. The metric names for additional custom metrics won't appear in the Azure portal in selection fields, and won't be returned by the API in requests for metric definitions. The metric data is still available and can be queried.
+There's a limit of 64 KB on the combined length of all custom metrics names, assuming utf-8 or 1 byte per character. If the 64-KB limit is exceeded, metadata for additional metrics won't be available. The metric names for additional custom metrics won't appear in the Azure portal in selection fields, and won't be returned by the API in requests for metric definitions. The metric data is still available and can be queried.
When the limit has been exceeded, reduce the number of metrics you're sending or shorten the length of their names. It then takes up to two days for the new metrics' names to appear.
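As a rough, hypothetical check (the helper below isn't from this article), you can estimate how much of the 64-KB budget your metric names consume:

```csharp
using System;
using System.Linq;
using System.Text;

class MetricNameBudget
{
    static void Main()
    {
        // Replace with your own custom metric names.
        string[] metricNames = { "Memory Bytes in Use", "QueueDepth" };

        // The limit applies to the combined UTF-8 length of all names.
        int totalBytes = metricNames.Sum(n => Encoding.UTF8.GetByteCount(n));
        Console.WriteLine($"{totalBytes} of {64 * 1024} bytes used");
    }
}
```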
But if high cardinality is essential for your scenario, the aggregated metrics a
Use custom metrics from various
+ - [Send custom metrics to Azure Monitor by using the REST API](./metrics-store-custom-rest-api.md)
- [Virtual machine](../essentials/collect-custom-metrics-guestos-resource-manager-vm.md)
- [Virtual Machine Scale Set](../essentials/collect-custom-metrics-guestos-resource-manager-vmss.md)
- [Azure virtual machine (classic)](../essentials/collect-custom-metrics-guestos-vm-classic.md)
+ - [Linux virtual machine using the Telegraf agent](../essentials/collect-custom-metrics-linux-telegraf.md)
- [Classic cloud service](../essentials/collect-custom-metrics-guestos-vm-cloud-service-classic.md)
azure-monitor Metrics Store Custom Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-store-custom-rest-api.md
Previously updated : 10/18/2023 Last updated : 01/07/2024 # Send custom metrics for an Azure resource to the Azure Monitor metrics store by using a REST API
This article shows you how to send custom metrics for Azure resources to the Azu
> [!NOTE]
> The REST API only permits sending custom metrics for Azure resources. To send metrics for resources in other environments or on-premises, use [Application Insights](../app/api-custom-events-metrics.md).
-## Create and authorize a service principal to emit metrics
+## Send REST requests to ingest custom metrics
-A service principal is an application whose tokens can be used to authenticate and grant access to specific Azure resources by using Microsoft Entra ID (formerly _Azure Active Directory_). Resources include user apps, services, or automation tools.
+When you send custom metrics to Azure Monitor, each data point, or value, reported in the metrics must include the following information.
-1. [Create a Microsoft Entra application and service principal](../../active-directory/develop/howto-create-service-principal-portal.md) that can access resources.
++ [Authentication token](#authentication)
++ [Subject](#subject)
++ [Region](#region)
++ [Timestamp](#timestamp)
++ [Namespace](#namespace)
++ [Name](#name)
++ [Dimension keys](#dimension-keys)
++ [Dimension values](#dimension-values)
++ [Metric values](#metric-values)
-1. Save the tenant ID, new client ID, and client secret value for your app for use in token requests.
-1. The app must be assigned the **Monitoring Metrics Publisher** role for the resources you want to emit metrics against. If you plan to use the app to emit custom metrics against many resources, you can assign the role at the resource group or subscription level. For more information, see [Assign Azure roles by using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+### Authentication
-## Get an authorization token
+To submit custom metrics to Azure Monitor, the entity that submits the metric needs a valid Microsoft Entra token in the **Bearer** header of the request. Supported ways to acquire a valid bearer token include:
-Send the following request in the command prompt or by using a client like Postman.
+- [Managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md). You can use a managed identity to give resources permissions to carry out certain operations. An example is allowing a resource to emit metrics about itself. A resource, or its managed identity, can be granted **Monitoring Metrics Publisher** permissions on another resource. With this permission, the managed identity can also emit metrics for other resources.
+- [Microsoft Entra service principal](../../active-directory/develop/app-objects-and-service-principals.md). In this scenario, a Microsoft Entra application, or service, can be assigned permissions to emit metrics about an Azure resource. To authenticate the request, Azure Monitor validates the application token by using Microsoft Entra public keys. The existing **Monitoring Metrics Publisher** role already has this permission. It's available in the Azure portal.
+
+ The service principal, depending on what resources it emits custom metrics for, can be given the **Monitoring Metrics Publisher** role at the scope required. Examples are a subscription, resource group, or specific resource.
+
+> [!TIP]
+> When you request a Microsoft Entra token to emit custom metrics, ensure that the audience or resource that the token is requested for is `https://monitoring.azure.com/`. Be sure to include the trailing slash.
+
+### Get an authorization token
+
+Once you have created your managed identity or service principal and assigned **Monitoring Metrics Publisher** permissions, you can get an authorization token by using the following request:
```console
curl -X POST 'https://login.microsoftonline.com/<tenant ID>/oauth2/token' \
curl -X POST 'https://login.microsoftonline.com/<tenant ID>/oauth2/token' \
--data-urlencode 'grant_type=client_credentials' \
--data-urlencode 'client_id=<your apps client ID>' \
--data-urlencode 'client_secret=<your apps client secret>' \
---data-urlencode 'resource=https://monitor.azure.com'
+--data-urlencode 'resource=https://monitoring.azure.com'
```
-The response body appears:
+The response body appears in the following format:
```JSON
{
The response body appears:
Save the access token from the response for use in the following HTTP requests.
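If your code runs with a managed identity instead of a client secret, a sketch like the following can acquire an equivalent token. It assumes the Azure.Identity NuGet package; the scope string is the metrics audience with `/.default` appended:

```csharp
using System;
using System.Threading.Tasks;
using Azure.Core;
using Azure.Identity;

class MetricsTokenExample
{
    static async Task Main()
    {
        // DefaultAzureCredential tries managed identity, environment
        // variables, and developer sign-in, in that order of availability.
        var credential = new DefaultAzureCredential();

        // The audience is https://monitoring.azure.com/ (trailing slash),
        // so the scope form carries a double slash before .default.
        AccessToken token = await credential.GetTokenAsync(
            new TokenRequestContext(new[] { "https://monitoring.azure.com//.default" }));

        Console.WriteLine($"Token expires on {token.ExpiresOn}");
    }
}
```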
-## Send a metric via the REST API
-1. Paste the following JSON into a file. Save it as *custommetric.json* on your local computer. Update the time parameter so that it's within the last 20 minutes. You can't put a metric into the store that's more than 20 minutes old. The metrics store is optimized for alerting and real-time charting.
-
- ```JSON
- {
- "time": "2023-01-03T11:00:20",
- "data": {
- "baseData": {
- "metric": "QueueDepth",
- "namespace": "QueueProcessing",
- "dimNames": [
- "QueueName",
- "MessageType"
- ],
- "series": [
- {
- "dimValues": [
- "ImagesToProcess",
- "JPEG"
- ],
- "min": 3,
- "max": 20,
- "sum": 28,
- "count": 3
- }
- ]
- }
- }
- }
- ```
+### Subject
+
+The subject property captures which Azure resource ID the custom metric is reported for. This information is encoded in the URL of the API call. Each API can submit metric values for only a single Azure resource.
+
+> [!NOTE]
+> You can't emit custom metrics against the resource ID of a resource group or subscription.
+
+### Region
+
+The region property captures the Azure region where the resource you're emitting metrics for is deployed. Metrics must be emitted to the same Azure Monitor regional endpoint as the region where the resource is deployed. For example, custom metrics for a VM deployed in West US must be sent to the WestUS regional Azure Monitor endpoint. The region information is also encoded in the URL of the API call.
+
+### Timestamp
+
+Each data point sent to Azure Monitor must be marked with a timestamp. This timestamp captures the date and time at which the metric value is measured or collected. Azure Monitor accepts metric data with timestamps as far as 20 minutes in the past and 5 minutes in the future. The timestamp must be in ISO 8601 format.
+
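For example, a small C# sketch (not part of the original article) that produces a valid ISO 8601, UTC timestamp inside the accepted window:

```csharp
using System;

class TimestampExample
{
    static void Main()
    {
        // The "o" (round-trip) format yields ISO 8601,
        // for example 2024-01-07T18:25:20.0000000Z.
        string timestamp = DateTime.UtcNow.ToString("o");
        Console.WriteLine(timestamp);
    }
}
```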
+### Namespace
+
+Namespaces are a way to categorize or group similar metrics together. By using namespaces, you can achieve isolation between groups of metrics that might collect different insights or performance indicators. For example, you might have a namespace called **contosomemorymetrics** that tracks memory-use metrics which profile your app. Another namespace called **contosoapptransaction** might track all metrics about user transactions in your application.
+
+### Name
+
+The name property is the name of the metric that's being reported. Usually, the name is descriptive enough to help identify what's measured. An example is a metric that measures the number of memory bytes used on a VM. It might have a metric name like **Memory Bytes In Use**.
+
+### Dimension keys
+
+A dimension is a key/value pair that helps describe other characteristics about the metric that's being collected. By using the other characteristics, you can collect more information about the metric, which allows for deeper insights.
+
+For example, the **Memory Bytes In Use** metric might have a dimension key called **Process** that captures how many bytes of memory each process on a VM consumes. By using this key, you can filter the metric to see how much memory specific processes use or to identify the top five processes by memory usage.
+
+Dimensions are optional, and not all metrics have dimensions. A custom metric can have up to 10 dimensions.
+
+### Dimension values
+
+When you're reporting a metric data point, for each dimension key on the reported metric, there's a corresponding dimension value. For example, you might want to report the memory that ContosoApp uses on your VM:
+
+* The metric name would be **Memory Bytes in Use**.
+* The dimension key would be **Process**.
+* The dimension value would be **ContosoApp.exe**.
+
+When you're publishing a metric value, you can specify only a single dimension value per dimension key. If you collect the same memory utilization for multiple processes on the VM, you can report multiple metric values for that timestamp. Each metric value would specify a different dimension value for the **Process** dimension key.
+
+Although dimensions are optional, if a metric post defines dimension keys, corresponding dimension values are mandatory.
+
+### Metric values
+
+Azure Monitor stores all metrics at 1-minute granularity intervals. During a given minute, a metric might need to be sampled several times. An example is CPU utilization. Or a metric might need to be measured for many discrete events, such as sign-in transaction latencies.
+
+To limit the number of raw values that you have to emit and pay for in Azure Monitor, locally pre-aggregate and emit the aggregated values:
+
+* **Min**: The minimum observed value from all the samples and measurements during the minute.
+* **Max**: The maximum observed value from all the samples and measurements during the minute.
+* **Sum**: The summation of all the observed values from all the samples and measurements during the minute.
+* **Count**: The number of samples and measurements taken during the minute.
+
-1. Submit the following HTTP POST request by using the following variables:
- - **location**: Deployment region of the resource you're emitting metrics for.
- - **resourceId**: Resource ID of the Azure resource you're tracking the metric against.
- - **accessToken**: The authorization token acquired from the previous step.
+> [!NOTE]
+> Azure Monitor doesn't support defining **Units** for a custom metric.
++
+For example, if there were four sign-in transactions to your app during a minute, the resulting measured latencies for each might be:
+
+|Transaction 1|Transaction 2|Transaction 3|Transaction 4|
+|||||
+|7 ms|4 ms|13 ms|16 ms|
+
+Then the resulting metric publication to Azure Monitor would be:
+
+* Min: 4
+* Max: 16
+* Sum: 40
+* Count: 4
+
+If your application can't pre-aggregate locally and needs to emit each discrete sample or event immediately upon collection, you can emit the raw measure values. For example, each time a sign-in transaction occurs on your app, you publish a metric to Azure Monitor with only a single measurement. So, for a sign-in transaction that took 12 milliseconds, the metric publication would be:
+
+* Min: 12
+* Max: 12
+* Sum: 12
+* Count: 1
+
+With this process, you can emit multiple values for the same metric/dimension combination during a given minute. Azure Monitor then takes all the raw values emitted for a given minute and aggregates them.
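As a quick illustration of local pre-aggregation (a sketch, not from the article), this C# snippet collapses the four transaction latencies above into the Min/Max/Sum/Count values that would be published:

```csharp
using System;
using System.Linq;

class PreAggregationExample
{
    static void Main()
    {
        // The four sign-in latencies from the table, in milliseconds.
        double[] latenciesMs = { 7, 4, 13, 16 };

        double min = latenciesMs.Min();   // 4
        double max = latenciesMs.Max();   // 16
        double sum = latenciesMs.Sum();   // 40
        int count = latenciesMs.Length;   // 4

        Console.WriteLine($"min={min}, max={max}, sum={sum}, count={count}");
    }
}
```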
+
+### Sample custom metric publication
+
+In the following example, you create a custom metric called **Memory Bytes in Use** under the metric namespace **Memory Profile** for a virtual machine. The metric has a single dimension called **Process**. For the timestamp, metric values are emitted for two processes.
+
+Store the following JSON in a file called *custommetric.json* on your local computer. Update the time parameter so that it's within the last 20 minutes. You can't put a metric into the store that's more than 20 minutes old.
+
+```json
+{
+ "time": "2024-01-07T11:25:20-7:00",
+ "data": {
+
+ "baseData": {
+
+ "metric": "Memory Bytes in Use",
+ "namespace": "Memory Profile",
+ "dimNames": [
+ "Process"
+ ],
+ "series": [
+ {
+ "dimValues": [
+ "ContosoApp.exe"
+ ],
+ "min": 10,
+ "max": 89,
+ "sum": 190,
+ "count": 4
+ },
+ {
+ "dimValues": [
+ "SalesApp.exe"
+ ],
+ "min": 10,
+ "max": 23,
+ "sum": 86,
+ "count": 4
+ }
+ ]
+ }
+ }
+ }
+```
+
+Submit the following HTTP POST request by using the following variables:
++ `location`: Deployment region of the resource you're emitting metrics for.
++ `resourceId`: Resource ID of the Azure resource you're tracking the metric against.
++ `accessToken`: The authorization token acquired from the *Get an authorization token* step.

```console
curl -X POST 'https://<location>.monitoring.azure.com<resourceId>/metrics' \
Save the access token from the response for use in the following HTTP requests.
-d @custommetric.json
```
-1. Change the timestamp and values in the JSON file. The 'time' value in the JSON file is expected to be in UTC.
-
-1. Repeat the previous two steps a few times to create data for several minutes.
-
-## Troubleshooting
-
-If you receive an error message with some part of the process, consider the following troubleshooting information:
-- If you can't issue metrics against a subscription or resource group, or resource, check that your application or service principal has the **Monitoring Metrics Publisher** role assigned in **Access control (IAM)**.
-- Check that the number of dimension names matches the number of values.
-- Check that you aren't emitting metrics against a region that doesn't support custom metrics. For more information, see [supported regions](./metrics-custom-overview.md#supported-regions).

## View your metrics
If you receive an error message with some part of the process, consider the foll
1. In the **Scope** dropdown list, select the resource you send the metric for.
-1. In the **Metric Namespace** dropdown list, select **queueprocessing**.
+1. In the **Metric Namespace** dropdown list, select **Memory Profile**.
-1. In the **Metric** dropdown list, select **QueueDepth**.
+1. In the **Metric** dropdown list, select **Memory Bytes in Use**.
+
+## Troubleshooting
+
+If you receive an error message with some part of the process, consider the following troubleshooting information:
+
+- If you can't issue metrics against a subscription, resource group, or resource, check that your application or service principal has the **Monitoring Metrics Publisher** role assigned in **Access control (IAM)**.
+- Check that the number of dimension names matches the number of values.
+- Check that you're emitting metrics to the correct Azure Monitor regional endpoint. For example, if your resource is deployed in West US, you must emit metrics to the West US regional endpoint.
+- Check that the timestamp is within the last 20 minutes.
+- Check that the timestamp is in ISO 8601 format.
+- Check that the metric name is valid. For example, it can't contain spaces.
## Next steps
cosmos-db Query Metrics Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query-metrics-performance.md
Previously updated : 05/17/2019 Last updated : 1/5/2023 # Get SQL query execution metrics and analyze query performance using .NET SDK [!INCLUDE[NoSQL](../includes/appliesto-nosql.md)]
-This article presents how to profile SQL query performance on Azure Cosmos DB. This profiling can be done using `QueryMetrics` retrieved from the .NET SDK and is detailed here. [QueryMetrics](/dotnet/api/microsoft.azure.documents.querymetrics) is a strongly typed object with information about the backend query execution. These metrics are documented in more detail in the [Tune Query Performance](./query-metrics.md) article.
+This article presents how to profile SQL query performance on Azure Cosmos DB using [ServerSideCumulativeMetrics](/dotnet/api/microsoft.azure.cosmos.serversidecumulativemetrics) retrieved from the .NET SDK. `ServerSideCumulativeMetrics` is a strongly typed object with information about the backend query execution. It contains cumulative metrics that are aggregated across all physical partitions for the request, and a list of metrics for each physical partition. These metrics are documented in more detail in the [Tune Query Performance](./query-metrics.md#query-execution-metrics) article.
-## Set the FeedOptions parameter
+## Get query metrics
-All the overloads for [DocumentClient.CreateDocumentQuery](/dotnet/api/microsoft.azure.documents.client.documentclient.createdocumentquery) take in an optional [FeedOptions](/dotnet/api/microsoft.azure.documents.client.feedoptions) parameter. This option is what allows query execution to be tuned and parameterized.
-
-To collect the NoSQL query execution metrics, you must set the parameter [PopulateQueryMetrics](/dotnet/api/microsoft.azure.documents.client.feedoptions.populatequerymetrics#P:Microsoft.Azure.Documents.Client.FeedOptions.PopulateQueryMetrics) in the [FeedOptions](/dotnet/api/microsoft.azure.documents.client.feedoptions) to `true`. Setting `PopulateQueryMetrics` to true will make it so that the `FeedResponse` will contain the relevant `QueryMetrics`.
-
-## Get query metrics with AsDocumentQuery()
-The following code sample shows how to do retrieve metrics when using [AsDocumentQuery()](/dotnet/api/microsoft.azure.documents.linq.documentqueryable.asdocumentquery) method:
+Query metrics are available as a strongly typed object in the .NET SDK beginning in [version 3.36.0](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/3.36.0). Prior to this version, or if you're using a different SDK language, you can retrieve query metrics by parsing the `Diagnostics`. The following code sample shows how to retrieve `ServerSideCumulativeMetrics` from the `Diagnostics` in a [FeedResponse](/dotnet/api/microsoft.azure.cosmos.feedresponse-1):
```csharp
-// Initialize this DocumentClient and Collection
-DocumentClient documentClient = null;
-DocumentCollection collection = null;
+CosmosClient client = new CosmosClient(myCosmosEndpoint, myCosmosKey);
+Container container = client.GetDatabase(myDatabaseName).GetContainer(myContainerName);
-// Setting PopulateQueryMetrics to true in the FeedOptions
-FeedOptions feedOptions = new FeedOptions
-{
- PopulateQueryMetrics = true
-};
+QueryDefinition query = new QueryDefinition("SELECT TOP 5 * FROM c");
+FeedIterator<MyClass> feedIterator = container.GetItemQueryIterator<MyClass>(query);
-string query = "SELECT TOP 5 * FROM c";
-IDocumentQuery<dynamic> documentQuery = documentClient.CreateDocumentQuery(Collection.SelfLink, query, feedOptions).AsDocumentQuery();
-
-while (documentQuery.HasMoreResults)
+while (feedIterator.HasMoreResults)
{
    // Execute one continuation of the query
- FeedResponse<dynamic> feedResponse = await documentQuery.ExecuteNextAsync();
-
- // This dictionary maps the partitionId to the QueryMetrics of that query
- IReadOnlyDictionary<string, QueryMetrics> partitionIdToQueryMetrics = feedResponse.QueryMetrics;
-
- // At this point you have QueryMetrics which you can serialize using .ToString()
- foreach (KeyValuePair<string, QueryMetrics> kvp in partitionIdToQueryMetrics)
- {
- string partitionId = kvp.Key;
- QueryMetrics queryMetrics = kvp.Value;
-
- // Do whatever logging you need
- DoSomeLoggingOfQueryMetrics(query, partitionId, queryMetrics);
- }
+ FeedResponse<MyClass> feedResponse = await feedIterator.ReadNextAsync();
+
+ // Retrieve the ServerSideCumulativeMetrics object from the FeedResponse
+ ServerSideCumulativeMetrics metrics = feedResponse.Diagnostics.GetQueryMetrics();
}
```
-## Aggregating QueryMetrics
-In the previous section, notice that there were multiple calls to [ExecuteNextAsync](/dotnet/api/microsoft.azure.documents.linq.idocumentquery-1.executenextasync) method. Each call returned a `FeedResponse` object that has a dictionary of `QueryMetrics`; one for every continuation of the query. The following example shows how to aggregate these `QueryMetrics` using LINQ:
+You can also get query metrics from the `FeedResponse` of a LINQ query using the `ToFeedIterator()` method:
```csharp
-List<QueryMetrics> queryMetricsList = new List<QueryMetrics>();
+FeedIterator<MyClass> feedIterator = container.GetItemLinqQueryable<MyClass>()
+ .Take(5)
+ .ToFeedIterator();
-while (documentQuery.HasMoreResults)
+while (feedIterator.HasMoreResults)
{
- // Execute one continuation of the query
- FeedResponse<dynamic> feedResponse = await documentQuery.ExecuteNextAsync();
-
- // This dictionary maps the partitionId to the QueryMetrics of that query
- IReadOnlyDictionary<string, QueryMetrics> partitionIdToQueryMetrics = feedResponse.QueryMetrics;
- queryMetricsList.AddRange(partitionIdToQueryMetrics.Values);
+ FeedResponse<MyClass> feedResponse = await feedIterator.ReadNextAsync();
+ ServerSideCumulativeMetrics metrics = feedResponse.Diagnostics.GetQueryMetrics();
}
-
-// Aggregate the QueryMetrics using the + operator overload of the QueryMetrics class.
-QueryMetrics aggregatedQueryMetrics = queryMetricsList.Aggregate((curr, acc) => curr + acc);
-Console.WriteLine(aggregatedQueryMetrics);
```
-## Grouping query metrics by Partition ID
+### Cumulative Metrics
-You can group the `QueryMetrics` by the Partition ID. Grouping by Partition ID allows you to see if a specific Partition is causing performance issues when compared to others. The following example shows how to group `QueryMetrics` with LINQ:
+`ServerSideCumulativeMetrics` contains a `CumulativeMetrics` property that represents the query metrics aggregated over all partitions for the single round trip.
```csharp
-List<KeyValuePair<string, QueryMetrics>> partitionedQueryMetrics = new List<KeyValuePair<string, QueryMetrics>>();
-while (documentQuery.HasMoreResults)
-{
- // Execute one continuation of the query
- FeedResponse<dynamic> feedResponse = await documentQuery.ExecuteNextAsync();
-
- // This dictionary is maps the partitionId to the QueryMetrics of that query
- IReadOnlyDictionary<string, QueryMetrics> partitionIdToQueryMetrics = feedResponse.QueryMetrics;
- partitionedQueryMetrics.AddRange(partitionIdToQueryMetrics.ToList());
-}
+// Retrieve the ServerSideCumulativeMetrics object from the FeedResponse
+ServerSideCumulativeMetrics metrics = feedResponse.Diagnostics.GetQueryMetrics();
+
+// CumulativeMetrics is the metrics for this continuation aggregated over all partitions
+ServerSideMetrics cumulativeMetrics = metrics.CumulativeMetrics;
+```
-// Now we are able to group the query metrics by partitionId
-IEnumerable<IGrouping<string, KeyValuePair<string, QueryMetrics>>> groupedByQueryMetrics = partitionedQueryMetrics
- .GroupBy(kvp => kvp.Key);
+You can also aggregate these metrics across all round trips for the query. The following example aggregates the query execution time across all round trips by using LINQ:
-// If we wanted to we could even aggregate the groupedby QueryMetrics
-foreach(IGrouping<string, KeyValuePair<string, QueryMetrics>> grouping in groupedByQueryMetrics)
+```csharp
+QueryDefinition query = new QueryDefinition("SELECT TOP 5 * FROM c");
+FeedIterator<MyClass> feedIterator = container.GetItemQueryIterator<MyClass>(query);
+
+List<ServerSideCumulativeMetrics> metrics = new List<ServerSideCumulativeMetrics>();
+while (feedIterator.HasMoreResults)
{
- string partitionId = grouping.Key;
- QueryMetrics aggregatedQueryMetricsForPartition = grouping
- .Select(kvp => kvp.Value)
- .Aggregate((curr, acc) => curr + acc);
- DoSomeLoggingOfQueryMetrics(query, partitionId, aggregatedQueryMetricsForPartition);
+ // Execute one continuation of the query
+ FeedResponse<MyClass> feedResponse = await feedIterator.ReadNextAsync();
+
+ // Store the ServerSideCumulativeMetrics object to aggregate values after all round trips
+    metrics.Add(feedResponse.Diagnostics.GetQueryMetrics());
}+
+// Aggregate values across trips for metrics of interest
+TimeSpan totalTripsExecutionTime = metrics.Aggregate(TimeSpan.Zero, (currentSum, next) => currentSum + next.CumulativeMetrics.TotalTime);
+DoSomeLogging(totalTripsExecutionTime);
```
-## LINQ on DocumentQuery
+### Partitioned Metrics
-You can also get the `FeedResponse` from a LINQ Query using the `AsDocumentQuery()` method:
+`ServerSideCumulativeMetrics` contains a `PartitionedMetrics` property that is a list of per-partition metrics for the round trip. If multiple physical partitions are reached in a single round trip, then metrics for each of them appear in the list. Partitioned metrics are represented as [ServerSidePartitionedMetrics](/dotnet/api/microsoft.azure.cosmos.serversidepartitionedmetrics) with a unique identifier for each physical partition.
```csharp
-IDocumentQuery<Document> linqQuery = client.CreateDocumentQuery(collection.SelfLink, feedOptions)
- .Take(1)
- .Where(document => document.Id == "42")
- .OrderBy(document => document.Timestamp)
- .AsDocumentQuery();
-FeedResponse<Document> feedResponse = await linqQuery.ExecuteNextAsync<Document>();
-IReadOnlyDictionary<string, QueryMetrics> queryMetrics = feedResponse.QueryMetrics;
+// Retrieve the ServerSideCumulativeMetrics object from the FeedResponse
+ServerSideCumulativeMetrics metrics = feedResponse.Diagnostics.GetQueryMetrics();
+
+// PartitionedMetrics is a list of per-partition metrics for this continuation
+IReadOnlyList<ServerSidePartitionedMetrics> partitionedMetrics = metrics.PartitionedMetrics;
+```
+
+When accumulated over all round trips, per-partition metrics allow you to see if a specific partition is causing performance issues when compared to others. The following is an example of how to group partition metrics for each trip using LINQ:
+
+```csharp
+QueryDefinition query = new QueryDefinition("SELECT TOP 5 * FROM c");
+FeedIterator<MyClass> feedIterator = container.GetItemQueryIterator<MyClass>(query);
+
+List<ServerSideCumulativeMetrics> metrics = new List<ServerSideCumulativeMetrics>();
+while (feedIterator.HasMoreResults)
+{
+ // Execute one continuation of the query
+ FeedResponse<MyClass> feedResponse = await feedIterator.ReadNextAsync();
+
+ // Store the ServerSideCumulativeMetrics object to aggregate values after all round trips
+    metrics.Add(feedResponse.Diagnostics.GetQueryMetrics());
+}
+
+// Group metrics by partition key range id
+var groupedPartitionMetrics = metrics.SelectMany(m => m.PartitionedMetrics).GroupBy(p => p.PartitionKeyRangeId);
+foreach(var partitionGroup in groupedPartitionMetrics)
+{
+ foreach(var tripMetrics in partitionGroup)
+ {
+        DoSomethingWithMetrics(partitionGroup.Key, tripMetrics);
+ }
+}
```
-## Expensive Queries
+## Get the query request charge
-You can capture the request units consumed by each query to investigate expensive queries or queries that consume high throughput. You can get the request charge by using the [RequestCharge](/dotnet/api/microsoft.azure.documents.client.feedresponse-1.requestcharge) property in `FeedResponse`. To learn more about how to get the request charge using the Azure portal and different SDKs, see [find the request unit charge](find-request-unit-charge.md) article.
+You can capture the request units consumed by each query to investigate expensive queries or queries that consume high throughput. You can get the request charge by using the `RequestCharge` property in `FeedResponse`. To learn more about how to get the request charge using the Azure portal and different SDKs, see the [find the request unit charge](find-request-unit-charge.md) article.
```csharp
-string query = "SELECT * FROM c";
-IDocumentQuery<dynamic> documentQuery = documentClient.CreateDocumentQuery(Collection.SelfLink, query, feedOptions).AsDocumentQuery();
+QueryDefinition query = new QueryDefinition("SELECT TOP 5 * FROM c");
+FeedIterator<MyClass> feedIterator = container.GetItemQueryIterator<MyClass>(query);
-while (documentQuery.HasMoreResults)
+while (feedIterator.HasMoreResults)
{
    // Execute one continuation of the query
- FeedResponse<dynamic> feedResponse = await documentQuery.ExecuteNextAsync();
- double requestCharge = feedResponse.RequestCharge
-
+ FeedResponse<MyClass> feedResponse = await feedIterator.ReadNextAsync();
+ double requestCharge = feedResponse.RequestCharge;
+    // Log the RequestCharge however you want
+    DoSomeLogging(requestCharge);
}
```
## Get the query execution time
-When calculating the time required to execute a client-side query, make sure that you only include the time to call the `ExecuteNextAsync` method and not other parts of your code base. Just these calls help you in calculating how long the query execution took as shown in the following example:
+You can capture query execution time for each trip from the query metrics. When looking at request latency, it's important to differentiate query execution time from other sources of latency, such as network transit time. The following example shows how to get cumulative query execution time for each round trip:
```csharp
-string query = "SELECT * FROM c";
-IDocumentQuery<dynamic> documentQuery = documentClient.CreateDocumentQuery(Collection.SelfLink, query, feedOptions).AsDocumentQuery();
-Stopwatch queryExecutionTimeEndToEndTotal = new Stopwatch();
-while (documentQuery.HasMoreResults)
+QueryDefinition query = new QueryDefinition("SELECT TOP 5 * FROM c");
+FeedIterator<MyClass> feedIterator = container.GetItemQueryIterator<MyClass>(query);
+
+TimeSpan cumulativeTime = TimeSpan.Zero;
+while (feedIterator.HasMoreResults)
{
    // Execute one continuation of the query
- queryExecutionTimeEndToEndTotal.Start();
- FeedResponse<dynamic> feedResponse = await documentQuery.ExecuteNextAsync();
- queryExecutionTimeEndToEndTotal.Stop();
+ FeedResponse<MyClass> feedResponse = await feedIterator.ReadNextAsync();
+    ServerSideCumulativeMetrics metrics = feedResponse.Diagnostics.GetQueryMetrics();
+ cumulativeTime = metrics.CumulativeMetrics.TotalTime;
}

// Log the elapsed time
-DoSomeLogging(queryExecutionTimeEndToEndTotal.Elapsed);
+DoSomeLogging(cumulativeTime);
```
-## Scan queries (commonly slow and expensive)
+## Get the index utilization
-A scan query refers to a query that wasn't served by the index, due to which, many documents are loaded before returning the result set.
+Looking at the index utilization can help you debug slow queries. Queries that can't use the index result in a full scan of all documents in a container before returning the result set.
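One way to inspect index utilization directly is the index metrics feature mentioned earlier. As a minimal sketch (the query text, `container`, and the result handling are illustrative), enabling `PopulateIndexMetrics` surfaces the utilized and potential indexes for a query:

```csharp
QueryDefinition query = new QueryDefinition("SELECT VALUE c.description FROM c");
QueryRequestOptions options = new QueryRequestOptions
{
    // Incurs overhead; enable only while debugging slow queries
    PopulateIndexMetrics = true
};
FeedIterator<dynamic> feedIterator = container.GetItemQueryIterator<dynamic>(query, requestOptions: options);

while (feedIterator.HasMoreResults)
{
    FeedResponse<dynamic> feedResponse = await feedIterator.ReadNextAsync();

    // Human-readable summary of utilized and potential indexes for this query
    Console.WriteLine(feedResponse.IndexMetrics);
}
```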
-Below is an example of a scan query:
+Here's an example of a scan query:
```sql
SELECT VALUE c.description
FROM c
WHERE UPPER(c.description) = "BABYFOOD, DESSERT, FRUIT DESSERT, WITHOUT ASCORBIC ACID, JUNIOR"
```

```
Output Document Count : 7
Output Document Size : 510 bytes
Index Utilization : 0.00 %
Total Query Execution Time : 4,500.34 milliseconds
- Query Preparation Times
- Query Compilation Time : 0.09 milliseconds
- Logical Plan Build Time : 0.05 milliseconds
- Physical Plan Build Time : 0.04 milliseconds
- Query Optimization Time : 0.01 milliseconds
- Index Lookup Time : 0.01 milliseconds
- Document Load Time : 4,177.66 milliseconds
- Runtime Execution Times
- Query Engine Times : 322.16 milliseconds
- System Function Execution Time : 85.74 milliseconds
- User-defined Function Execution Time : 0.00 milliseconds
- Document Write Time : 0.01 milliseconds
-Client Side Metrics
- Retry Count : 0
- Request Charge : 4,059.95 RUs
+Query Preparation Time : 0.2 milliseconds
+Index Lookup Time : 0.01 milliseconds
+Document Load Time : 4,177.66 milliseconds
+Runtime Execution Time : 407.9 milliseconds
+Document Write Time : 0.01 milliseconds
```

Note the following values from the query metrics output:
```sql
SELECT VALUE c.description
FROM c
WHERE c.description = "BABYFOOD, DESSERT, FRUIT DESSERT, WITHOUT ASCORBIC ACID, JUNIOR"
```
-This query is now able to be served from the index.
+This query is now able to be served from the index. Alternatively, you can use [computed properties](query/computed-properties.md) to index the results of system functions or complex calculations that would otherwise result in a full scan.
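As a sketch of that alternative (assuming a recent .NET SDK version with computed property support; the property name, container name, and `database` variable are illustrative), a computed property can pre-compute a system function and be added to the indexing policy:

```csharp
using System.Collections.ObjectModel;

ContainerProperties containerProperties = new ContainerProperties(id: "myContainer", partitionKeyPath: "/pk")
{
    ComputedProperties = new Collection<ComputedProperty>
    {
        new ComputedProperty
        {
            // Hypothetical computed property that pre-computes the system function
            Name = "cp_upperDescription",
            Query = "SELECT VALUE UPPER(c.description) FROM c"
        }
    }
};

// Keep the default indexing of all paths and index the computed property too
containerProperties.IndexingPolicy.IncludedPaths.Add(new IncludedPath { Path = "/*" });
containerProperties.IndexingPolicy.IncludedPaths.Add(new IncludedPath { Path = "/cp_upperDescription/?" });

Container container = await database.CreateContainerIfNotExistsAsync(containerProperties);
```

Queries can then filter on `c.cp_upperDescription` and be served from the index instead of loading every document.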
To learn more about tuning query performance, see the [Tune Query Performance](./query-metrics.md) article.
cosmos-db Query Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query-metrics.md
Previously updated : 04/04/2022 Last updated : 1/5/2023 ms.devlang: csharp
Azure Cosmos DB provides an [API for NoSQL for querying data](query/getting-started.md), without requiring schema or secondary indexes. This article provides the following information for developers:

* High-level details on how Azure Cosmos DB's SQL query execution works
-* Details on query request and response headers, and client SDK options
* Tips and best practices for query performance
-* Examples of how to utilize SQL execution statistics to debug query performance
+* Examples of how to utilize SQL query execution metrics to debug query performance
## About SQL query execution
-In Azure Cosmos DB, you store data in containers, which can grow to any [storage size or request throughput](../partitioning-overview.md). Azure Cosmos DB seamlessly scales data across physical partitions under the covers to handle data growth or increase in provisioned throughput. You can issue SQL queries to any container using the REST API or one of the supported [SQL SDKs](sdk-dotnet-v2.md).
+In Azure Cosmos DB, data is stored in containers, which can grow to any [storage size or request throughput](../partitioning-overview.md). Azure Cosmos DB seamlessly scales data across physical partitions under the covers to handle data growth or increases in provisioned throughput. You can issue SQL queries to any container using the REST API or one of the supported [SQL SDKs](sdk-dotnet-v3.md).
-A brief overview of partitioning: you define a partition key like "city", which determines how data is split across physical partitions. Data belonging to a single partition key (for example, "city" == "Seattle") is stored within a physical partition, but typically a single physical partition has multiple partition keys. When a partition reaches its storage size, the service seamlessly splits the partition into two new partitions, and divides the partition key evenly across these partitions. Since partitions are transient, the APIs use an abstraction of a "partition key range", which denotes the ranges of partition key hashes.
+A brief overview of partitioning: you define a partition key like "city", which determines how data is split across physical partitions. Data belonging to a single partition key (for example, "city" == "Seattle") is stored within a physical partition, and a single physical partition can store data from multiple partition keys. When a partition reaches its storage limit, the service seamlessly splits the partition into two new partitions. Data is distributed evenly across the new partitions, keeping all data for a single partition key together. Since partitions are transient, the APIs use an abstraction of a partition key range, which denotes the ranges of partition key hashes.
When you issue a query to Azure Cosmos DB, the SDK performs these logical steps:

* Parse the SQL query to determine the query execution plan.
-* If the query includes a filter against the partition key, like `SELECT * FROM c WHERE c.city = "Seattle"`, it is routed to a single partition. If the query does not have a filter on partition key, then it is executed in all partitions, and results are merged client side.
-* The query is executed within each partition in series or parallel, based on client configuration. Within each partition, the query might make one or more round trips depending on the query complexity, configured page size, and provisioned throughput of the collection. Each execution returns the number of [request units](../request-units.md) consumed by query execution, and optionally, query execution statistics.
+* If the query includes a filter against the partition key, like `SELECT * FROM c WHERE c.city = "Seattle"`, it's routed to a single partition. If the query doesn't have a filter on the partition key, then it's executed in all partitions and results from each partition are merged client side.
+* The query is executed within each partition in series or parallel, based on client configuration. Within each partition, the query might make one or more round trips depending on the query complexity, configured page size, and provisioned throughput of the collection. Each execution returns the number of [request units](../request-units.md) consumed by query execution and query execution statistics.
* The SDK performs a summarization of the query results across partitions. For example, if the query involves an ORDER BY across partitions, then results from individual partitions are merge-sorted to return results in globally sorted order. If the query is an aggregation like `COUNT`, the counts from individual partitions are summed to produce the overall count.
-The SDKs provide various options for query execution. For example, in .NET these options are available in the `FeedOptions` class. The following table describes these options and how they impact query execution time.
+The SDKs provide various options for query execution. For example, in .NET these options are available in the [`QueryRequestOptions`](/dotnet/api/microsoft.azure.cosmos.queryrequestoptions) class. The following table describes these options and how they affect query execution time.
| Option | Description |
| --- | --- |
-| `EnableCrossPartitionQuery` | Must be set to true for any query that requires to be executed across more than one partition. This is an explicit flag to enable you to make conscious performance tradeoffs during development time. |
-| `EnableScanInQuery` | Must be set to true if you have opted out of indexing, but want to run the query via a scan anyway. Only applicable if indexing for the requested filter path is disabled. |
-| `MaxItemCount` | The maximum number of items to return per round trip to the server. By setting to -1, you can let the server manage the number of items. Or, you can lower this value to retrieve only a small number of items per round trip.
-| `MaxBufferedItemCount` | This is a client-side option, and used to limit the memory consumption when performing cross-partition ORDER BY. A higher value helps reduce the latency of cross-partition sorting. |
-| `MaxDegreeOfParallelism` | Gets or sets the number of concurrent operations run client side during parallel query execution in the Azure Cosmos DB database service. A positive property value limits the number of concurrent operations to the set value. If it is set to less than 0, the system automatically decides the number of concurrent operations to run. |
-| `PopulateQueryMetrics` | Enables detailed logging of statistics of time spent in various phases of query execution like compilation time, index loop time, and document load time. You can share output from query statistics with Azure Support to diagnose query performance issues. |
-| `RequestContinuation` | You can resume query execution by passing in the opaque continuation token returned by any query. The continuation token encapsulates all state required for query execution. |
-| `ResponseContinuationTokenLimitInKb` | You can limit the maximum size of the continuation token returned by the server. You might need to set this if your application host has limits on response header size. Setting this may increase the overall duration and RUs consumed for the query. |
-
-For example, let's take an example query on partition key requested on a collection with `/city` as the partition key and provisioned with 100,000 RU/s of throughput. You request this query using `CreateDocumentQuery<T>` in .NET like the following:
-
-```cs
-IDocumentQuery<dynamic> query = client.CreateDocumentQuery(
- UriFactory.CreateDocumentCollectionUri(DatabaseName, CollectionName),
- "SELECT * FROM c WHERE c.city = 'Seattle'",
- new FeedOptions
- {
- PopulateQueryMetrics = true,
- MaxItemCount = -1,
- MaxDegreeOfParallelism = -1,
- EnableCrossPartitionQuery = true
- }).AsDocumentQuery();
-
-FeedResponse<dynamic> result = await query.ExecuteNextAsync();
+| `EnableScanInQuery` | Only applicable if indexing for the requested filter path is disabled. Must be set to true if you opted out of indexing and want to run queries using a full scan. |
+| `MaxItemCount` | The maximum number of items to return per round trip to the server. You can set it to -1 to let the server manage the number of items to return. |
+| `MaxBufferedItemCount` | The maximum number of items that can be buffered client side during parallel query execution. A positive property value limits the number of buffered items to the set value. You can set it to less than 0 to let the system automatically decide the number of items to buffer. |
+| `MaxConcurrency` | Gets or sets the number of concurrent operations run client side during parallel query execution. A positive property value limits the number of concurrent operations to the set value. You can set it to less than 0 to let the system automatically decide the number of concurrent operations to run. |
+| `PopulateIndexMetrics` | Enables collection of [index metrics](./index-metrics.md) to understand how the query engine used existing indexes and how it could use potential new indexes. This option incurs overhead, so it should only be enabled when debugging slow queries. |
+| `ResponseContinuationTokenLimitInKb` | You can limit the maximum size of the continuation token returned by the server. You might need to set this if your application host has limits on response header size, but it can increase the overall duration and RUs consumed for the query. |
+
+For example, here's a query on a container partitioned by `/city` using the .NET SDK:
+
+```csharp
+QueryDefinition query = new QueryDefinition("SELECT * FROM c WHERE c.city = 'Seattle'");
+QueryRequestOptions options = new QueryRequestOptions()
+{
+ MaxItemCount = -1,
+ MaxBufferedItemCount = -1,
+ MaxConcurrency = -1,
+ PopulateIndexMetrics = true
+};
+FeedIterator<dynamic> feedIterator = container.GetItemQueryIterator<dynamic>(query, requestOptions: options);
+
+FeedResponse<dynamic> feedResponse = await feedIterator.ReadNextAsync();
```
-The SDK snippet shown above, corresponds to the following REST API request:
-
-```
-POST https://arramacquerymetrics-westus.documents.azure.com/dbs/db/colls/sample/docs HTTP/1.1
-x-ms-continuation:
-x-ms-documentdb-isquery: True
-x-ms-max-item-count: -1
-x-ms-documentdb-query-enablecrosspartition: True
-x-ms-documentdb-query-parallelizecrosspartitionquery: True
-x-ms-documentdb-query-iscontinuationexpected: True
-x-ms-documentdb-populatequerymetrics: True
-x-ms-date: Tue, 27 Jun 2017 21:52:18 GMT
-authorization: type%3dmaster%26ver%3d1.0%26sig%3drp1Hi83Y8aVV5V6LzZ6xhtQVXRAMz0WNMnUuvriUv%2b4%3d
-x-ms-session-token: 7:8,6:2008,5:8,4:2008,3:8,2:2008,1:8,0:8,9:8,8:4008
-Cache-Control: no-cache
-x-ms-consistency-level: Session
-User-Agent: documentdb-dotnet-sdk/1.14.1 Host/32-bit MicrosoftWindowsNT/6.2.9200.0
-x-ms-version: 2017-02-22
-Accept: application/json
-Content-Type: application/query+json
-Host: arramacquerymetrics-westus.documents.azure.com
-Content-Length: 52
-Expect: 100-continue
-
-{"query":"SELECT * FROM c WHERE c.city = 'Seattle'"}
-```
-
-Each query execution page corresponds to a REST API `POST` with the `Accept: application/query+json` header, and the SQL query in the body. Each query makes one or more round trips to the server with the `x-ms-continuation` token echoed between the client and server to resume execution. The configuration options in FeedOptions are passed to the server in the form of request headers. For example, `MaxItemCount` corresponds to `x-ms-max-item-count`.
-
-The request returns the following (truncated for readability) response:
-
-```
-HTTP/1.1 200 Ok
-Cache-Control: no-store, no-cache
-Pragma: no-cache
-Transfer-Encoding: chunked
-Content-Type: application/json
-Server: Microsoft-HTTPAPI/2.0
-Strict-Transport-Security: max-age=31536000
-x-ms-last-state-change-utc: Tue, 27 Jun 2017 21:01:57.561 GMT
-x-ms-resource-quota: documentSize=10240;documentsSize=10485760;documentsCount=-1;collectionSize=10485760;
-x-ms-resource-usage: documentSize=1;documentsSize=884;documentsCount=2000;collectionSize=1408;
-x-ms-item-count: 2000
-x-ms-schemaversion: 1.3
-x-ms-alt-content-path: dbs/db/colls/sample
-x-ms-content-path: +9kEANVq0wA=
-x-ms-xp-role: 1
-x-ms-documentdb-query-metrics: totalExecutionTimeInMs=33.67;queryCompileTimeInMs=0.06;queryLogicalPlanBuildTimeInMs=0.02;queryPhysicalPlanBuildTimeInMs=0.10;queryOptimizationTimeInMs=0.00;VMExecutionTimeInMs=32.56;indexLookupTimeInMs=0.36;documentLoadTimeInMs=9.58;systemFunctionExecuteTimeInMs=0.00;userFunctionExecuteTimeInMs=0.00;retrievedDocumentCount=2000;retrievedDocumentSize=1125600;outputDocumentCount=2000;writeOutputTimeInMs=18.10;indexUtilizationRatio=1.00
-x-ms-request-charge: 604.42
-x-ms-serviceversion: version=1.14.34.4
-x-ms-activity-id: 0df8b5f6-83b9-4493-abda-cce6d0f91486
-x-ms-session-token: 2:2008
-x-ms-gatewayversion: version=1.14.33.2
-Date: Tue, 27 Jun 2017 21:59:49 GMT
-```
-
-The key response headers returned from the query include the following:
-
-| Option | Description |
-| | -- |
-| `x-ms-item-count` | The number of items returned in the response. This is dependent on the supplied `x-ms-max-item-count`, the number of items that can be fit within the maximum response payload size, the provisioned throughput, and query execution time. |
-| `x-ms-continuation:` | The continuation token to resume execution of the query, if additional results are available. |
-| `x-ms-documentdb-query-metrics` | The query statistics for the execution. This is a delimited string containing statistics of time spent in the various phases of query execution. Returned if `x-ms-documentdb-populatequerymetrics` is set to `True`. |
-| `x-ms-request-charge` | The number of [request units](../request-units.md) consumed by the query. |
-
-For details on the REST API request headers and options, see [Querying resources using the REST API](/rest/api/cosmos-db/querying-cosmosdb-resources-using-the-rest-api).
+Each query execution corresponds to a REST API `POST` with headers set for the query request options and the SQL query in the body. For details on the REST API request headers and options, see [Querying resources using the REST API](/rest/api/cosmos-db/querying-cosmosdb-resources-using-the-rest-api).
## Best practices for query performance
-The following are the most common factors that impact Azure Cosmos DB query performance. We dig deeper into each of these topics in this article.
+
+The following factors commonly have the biggest effect on Azure Cosmos DB query performance. We dig deeper into each of these factors in this article.
| Factor | Tip |
| --- | --- |
| Provisioned throughput | Measure RU per query, and ensure that you have the required provisioned throughput for your queries. |
| Partitioning and partition keys | Favor queries with the partition key value in the filter clause for low latency. |
| SDK and query options | Follow SDK best practices like direct connectivity, and tune client-side query execution options. |
-| Indexing Policy | Ensure that you have the required indexing paths/policy for the query. |
+| Network latency | Run your application in the same region as your Azure Cosmos DB account wherever possible to reduce latency. |
+| Indexing policy | Ensure that you have the required indexing paths/policy for the query. |
| Query execution metrics | Analyze the query execution metrics to identify potential rewrites of query and data shapes. |

### Provisioned throughput
-In Azure Cosmos DB, you create containers of data, each with reserved throughput expressed in request units (RU) per-second. A read of a 1-KB document is 1 RU, and every operation (including queries) is normalized to a fixed number of RUs based on its complexity. For example, if you have 1000 RU/s provisioned for your container, and you have a query like `SELECT * FROM c WHERE c.city = 'Seattle'` that consumes 5 RUs, then you can perform (1000 RU/s) / (5 RU/query) = 200 query/s such queries per second.
-If you submit more than 200 queries/sec, the service starts rate-limiting incoming requests above 200/s. The SDKs automatically handle this case by performing a backoff/retry, therefore you might notice a higher latency for these queries. Increasing the provisioned throughput to the required value improves your query latency and throughput.
+In Azure Cosmos DB, you create containers of data, each with reserved throughput expressed in request units (RU) per-second. A read of a 1-KB document is one RU, and every operation (including queries) is normalized to a fixed number of RUs based on its complexity. For example, if you have 1000 RU/s provisioned for your container, and you have a query like `SELECT * FROM c WHERE c.city = 'Seattle'` that consumes 5 RUs, then you can execute (1000 RU/s) / (5 RU/query) = 200 of these queries per second.
+
+If you submit more than 200 queries/sec (or some other operations that saturate all provisioned RUs), the service starts rate-limiting incoming requests. The SDKs automatically handle rate-limiting by performing a backoff/retry, therefore you might notice higher latency for these queries. Increasing the provisioned throughput to the required value improves your query latency and throughput.
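If sustained throttling shows up in your workload, you can also tune how aggressively the SDK retries throttled requests. The following is a minimal sketch using the v3 .NET SDK; `connectionString` and the retry values shown are illustrative, not recommendations:

```csharp
CosmosClientOptions clientOptions = new CosmosClientOptions
{
    // Number of retries on rate-limited (HTTP 429) requests before the error surfaces
    MaxRetryAttemptsOnRateLimitedRequests = 9,
    // Upper bound on the cumulative wait time across those retries
    MaxRetryWaitTimeOnRateLimitedRequests = TimeSpan.FromSeconds(30)
};
CosmosClient client = new CosmosClient(connectionString, clientOptions);
```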
To learn more about request units, see [Request units](../request-units.md).

### Partitioning and partition keys
-With Azure Cosmos DB, typically queries perform in the following order from fastest/most efficient to slower/less efficient.
-* GET on a single partition key and item key
+With Azure Cosmos DB, the following scenarios for reading data are ordered from what is typically fastest/most efficient to the slowest/least efficient.
+
+* GET on a single partition key and item id, also known as a point read
* Query with a filter clause on a single partition key
-* Query without an equality or range filter clause on any property
+* Query with an equality or range filter clause on any property
* Query without filters
-Queries that need to consult all partitions need higher latency, and can consume higher RUs. Since each partition has automatic indexing against all properties, the query can be served efficiently from the index in this case. You can make queries that span partitions faster by using the parallelism options.
+Queries that need to be executed on all partitions have higher latency, and can consume higher RUs. Since each partition has automatic indexing against all properties, the query can be served efficiently from the index in this case. You can make queries that span partitions faster by using the parallelism options.
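To illustrate the fastest case in that list, a point read fetches a single item by its id and partition key without invoking the query engine at all. A short sketch, where the item id `"item-42"`, the partition key value `"Seattle"`, and `MyClass` are illustrative placeholders consistent with the earlier examples:

```csharp
// Reads one item directly; a point read of a 1-KB item costs about 1 RU
ItemResponse<MyClass> itemResponse = await container.ReadItemAsync<MyClass>(
    id: "item-42",
    partitionKey: new PartitionKey("Seattle"));

MyClass item = itemResponse.Resource;
double pointReadCharge = itemResponse.RequestCharge;
```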
To learn more about partitioning and partition keys, see [Partitioning in Azure Cosmos DB](../partitioning-overview.md).

### SDK and query options
-See [Query performance tips](performance-tips-query-sdk.md) and [Performance testing](performance-testing.md) for how to get the best client-side performance from Azure Cosmos DB using our SDKs.
+
+See [query performance tips](performance-tips-query-sdk.md) and [performance testing](performance-testing.md) for how to get the best client-side performance from Azure Cosmos DB using our SDKs.
### Network latency
-See [Azure Cosmos DB global distribution](tutorial-global-distribution.md) for how to set up global distribution, and connect to the closest region. Network latency has a significant impact on query performance when you need to make multiple round-trips or retrieve a large result set from the query.
-The section on query execution metrics explains how to retrieve the server execution time of queries ( `totalExecutionTimeInMs`), so that you can differentiate between time spent in query execution and time spent in network transit.
+See [Azure Cosmos DB global distribution](tutorial-global-distribution.md) for how to set up global distribution and connect to the closest region. Network latency has a significant effect on query performance when you need to make multiple round-trips or retrieve a large result set from the query.
-### Indexing policy
-See [Configuring indexing policy](../index-policy.md) for indexing paths, kinds, and modes, and how they impact query execution. By default, the indexing policy uses range indexing for strings, which is effective for equality queries. If you need range queries for strings, we recommend specifying the Range index type for all strings.
+You can use [query execution metrics](#query-execution-metrics) to retrieve the server execution time of queries, allowing you to differentiate time spent in query execution from time spent in network transit.
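As a sketch of that comparison (reusing the placeholder `MyClass`, `container`, and `DoSomeLogging` names from the earlier examples), you can time each `ReadNextAsync` call on the client and subtract the server-side execution time reported in the metrics; the remainder approximates network transit plus client overhead:

```csharp
using System.Diagnostics;

QueryDefinition query = new QueryDefinition("SELECT * FROM c");
FeedIterator<MyClass> feedIterator = container.GetItemQueryIterator<MyClass>(query);

Stopwatch stopwatch = new Stopwatch();
while (feedIterator.HasMoreResults)
{
    // Measure the end-to-end client latency of one round trip
    stopwatch.Restart();
    FeedResponse<MyClass> feedResponse = await feedIterator.ReadNextAsync();
    stopwatch.Stop();

    ServerSideCumulativeMetrics metrics = feedResponse.Diagnostics.GetQueryMetrics();
    TimeSpan serverTime = metrics.CumulativeMetrics.TotalTime;

    // Time not accounted for by server-side execution is mostly network transit
    DoSomeLogging(stopwatch.Elapsed - serverTime);
}
```

Note that for parallel cross-partition queries, server time aggregated over partitions can exceed the wall-clock time of a single round trip.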
-By default, Azure Cosmos DB will apply automatic indexing to all data. For high performance insert scenarios, consider excluding paths as this will reduce the RU cost for each insert operation.
+### Indexing policy
-## Query execution metrics
-You can obtain detailed metrics on query execution by passing in the optional `x-ms-documentdb-populatequerymetrics` header (`FeedOptions.PopulateQueryMetrics` in the .NET SDK). The value returned in `x-ms-documentdb-query-metrics` has the following key-value pairs meant for advanced troubleshooting of query execution.
+See [configuring indexing policy](../index-policy.md) for indexing paths, kinds, and modes, and how they impact query execution. By default, Azure Cosmos DB applies automatic indexing to all data and uses range indexes for strings and numbers, which is effective for equality queries. For high performance insert scenarios, consider excluding paths to reduce the RU cost for each insert operation.
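Here's a minimal sketch of excluding a path at container creation time; the container name, partition key, and excluded path are illustrative:

```csharp
ContainerProperties containerProperties = new ContainerProperties(id: "myContainer", partitionKeyPath: "/pk");

// Keep the default of indexing every path...
containerProperties.IndexingPolicy.IncludedPaths.Add(new IncludedPath { Path = "/*" });
// ...but skip a large payload property that queries never filter, sort, or project on
containerProperties.IndexingPolicy.ExcludedPaths.Add(new ExcludedPath { Path = "/rawPayload/*" });

Container container = await database.CreateContainerIfNotExistsAsync(containerProperties);
```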
-```cs
-IDocumentQuery<dynamic> query = client.CreateDocumentQuery(
- UriFactory.CreateDocumentCollectionUri(DatabaseName, CollectionName),
- "SELECT * FROM c WHERE c.city = 'Seattle'",
- new FeedOptions
- {
- PopulateQueryMetrics = true,
- }).AsDocumentQuery();
+You can use the [index metrics](./index-metrics.md) to identify which indexes are used for each query and if there are any missing indexes that would improve query performance.
-FeedResponse<dynamic> result = await query.ExecuteNextAsync();
+## Query execution metrics
-// Returns metrics by partition key range Id
-IReadOnlyDictionary<string, QueryMetrics> metrics = result.QueryMetrics;
+Detailed metrics are returned for each query execution in the [Diagnostics](./troubleshoot-dotnet-sdk.md#capture-diagnostics) for the request. These metrics describe where time is spent during query execution and enable advanced troubleshooting.
-```
+Learn more about [getting the query metrics](./query-metrics-performance.md).
| Metric | Unit | Description |
| --- | --- | --- |
-| `totalExecutionTimeInMs` | milliseconds | Query execution time |
-| `queryCompileTimeInMs` | milliseconds | Query compile time |
-| `queryLogicalPlanBuildTimeInMs` | milliseconds | Time to build logical query plan |
-| `queryPhysicalPlanBuildTimeInMs` | milliseconds | Time to build physical query plan |
-| `queryOptimizationTimeInMs` | milliseconds | Time spent in optimizing query |
-| `VMExecutionTimeInMs` | milliseconds | Time spent in query runtime |
-| `indexLookupTimeInMs` | milliseconds | Time spent in physical index layer |
-| `documentLoadTimeInMs` | milliseconds | Time spent in loading documents |
-| `systemFunctionExecuteTimeInMs` | milliseconds | Total time spent executing system (built-in) functions in milliseconds |
-| `userFunctionExecuteTimeInMs` | milliseconds | Total time spent executing user-defined functions in milliseconds |
-| `retrievedDocumentCount` | count | Total number of retrieved documents |
-| `retrievedDocumentSize` | bytes | Total size of retrieved documents in bytes |
-| `outputDocumentCount` | count | Number of output documents |
-| `writeOutputTimeInMs` | milliseconds | Time spent writing the output in milliseconds |
-| `indexUtilizationRatio` | ratio (<=1) | Ratio of number of documents matched by the filter to the number of documents loaded |
-
-The client SDKs may internally make multiple query operations to serve the query within each partition. The client makes more than one call per-partition if the total results exceed `x-ms-max-item-count`, if the query exceeds the provisioned throughput for the partition, or if the query payload reaches the maximum size per page, or if the query reaches the system allocated timeout limit. Each partial query execution returns a `x-ms-documentdb-query-metrics` for that page.
+| `TotalTime` | milliseconds | Total query execution time |
+| `DocumentLoadTime` | milliseconds | Time spent loading documents |
+| `DocumentWriteTime` | milliseconds | Time spent writing and serializing the output documents |
+| `IndexLookupTime` | milliseconds | Time spent in physical index layer |
+| `QueryPreparationTime` | milliseconds | Time spent in preparing query |
+| `RuntimeExecutionTime` | milliseconds | Total query runtime execution time |
+| `VMExecutionTime` | milliseconds | Time spent in query runtime executing the query |
+| `OutputDocumentCount` | count | Number of output documents in the result set |
+| `OutputDocumentSize` | bytes | Total size of output documents in bytes |
+| `RetrievedDocumentCount` | count | Total number of retrieved documents |
+| `RetrievedDocumentSize` | bytes | Total size of retrieved documents in bytes |
+| `IndexHitRatio` | ratio [0,1] | Ratio of number of documents matched by the filter to the number of documents loaded |
+
+The client SDKs can internally make multiple query requests to serve the query within each partition. The client makes more than one call per-partition if the total results exceed the max item count request option, if the query exceeds the provisioned throughput for the partition, if the query payload reaches the maximum size per page, or if the query reaches the system allocated timeout limit. Each partial query execution returns query metrics for that page.
Here are some sample queries, and how to interpret some of the metrics returned from query execution:

| Query | Sample Metric | Description |
| --- | --- | --- |
-| `SELECT TOP 100 * FROM c` | `"RetrievedDocumentCount": 101` | The number of documents retrieved is 100+1 to match the TOP clause. Query time is mostly spent in `WriteOutputTime` and `DocumentLoadTime` since it is a scan. |
+| `SELECT TOP 100 * FROM c` | `"RetrievedDocumentCount": 101` | The number of documents retrieved is 100+1 to match the TOP clause. Query time is mostly spent in `WriteOutputTime` and `DocumentLoadTime` since it's a scan. |
| `SELECT TOP 500 * FROM c` | `"RetrievedDocumentCount": 501` | RetrievedDocumentCount is now higher (500+1 to match the TOP clause). |
| `SELECT * FROM c WHERE c.N = 55` | `"IndexLookupTime": "00:00:00.0009500"` | About 0.9 ms is spent in IndexLookupTime for a key lookup, because it's an index lookup on `/N/?`. |
| `SELECT * FROM c WHERE c.N > 55` | `"IndexLookupTime": "00:00:00.0017700"` | Slightly more time (1.7 ms) spent in IndexLookupTime over a range scan, because it's an index lookup on `/N/?`. |
-| `SELECT TOP 500 c.N FROM c` | `"IndexLookupTime": "00:00:00.0017700"` | Same time spent on `DocumentLoadTime` as previous queries, but lower `WriteOutputTime` because we're projecting only one property. |
-| `SELECT TOP 500 udf.toPercent(c.N) FROM c` | `"UserDefinedFunctionExecutionTime": "00:00:00.2136500"` | About 213 ms is spent in `UserDefinedFunctionExecutionTime` executing the UDF on each value of `c.N`. |
-| `SELECT TOP 500 c.Name FROM c WHERE STARTSWITH(c.Name, 'Den')` | `"IndexLookupTime": "00:00:00.0006400", "SystemFunctionExecutionTime": "00:00:00.0074100"` | About 0.6 ms is spent in `IndexLookupTime` on `/Name/?`. Most of the query execution time (~7 ms) in `SystemFunctionExecutionTime`. |
+| `SELECT TOP 500 c.N FROM c` | `"IndexLookupTime": "00:00:00.0017700"` | Same time spent on `DocumentLoadTime` as previous queries, but lower `DocumentWriteTime` because we're projecting only one property. |
+| `SELECT TOP 500 udf.toPercent(c.N) FROM c` | `"RuntimeExecutionTime": "00:00:00.2136500"` | About 213 ms is spent in `RuntimeExecutionTime` executing the UDF on each value of `c.N`. |
+| `SELECT TOP 500 c.Name FROM c WHERE STARTSWITH(c.Name, 'Den')` | `"IndexLookupTime": "00:00:00.0006400", "RuntimeExecutionTime": "00:00:00.0074100"` | About 0.6 ms is spent in `IndexLookupTime` on `/Name/?`. Most of the query execution time (~7 ms) in `RuntimeExecutionTime`. |
| `SELECT TOP 500 c.Name FROM c WHERE STARTSWITH(LOWER(c.Name), 'den')` | `"IndexLookupTime": "00:00:00", "RetrievedDocumentCount": 2491, "OutputDocumentCount": 500` | Query is performed as a scan because it uses `LOWER`, and 500 out of 2491 retrieved documents are returned. |

## Next steps

* To learn about the supported SQL query operators and keywords, see [SQL query](query/getting-started.md).
* To learn about request units, see [request units](../request-units.md).
defender-for-cloud Concept Cloud Security Posture Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-cloud-security-posture-management.md
The following table summarizes each plan and their cloud availability.
| [Permissions management (Preview)](enable-permissions-management.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP |
-Starting March 1, 2024, Defender CSPM must be enabled to have premium DevOps security capabilities which include code-to-cloud contextualization powering security explorer and attack paths and pull request annotations for Infrastructure-as-Code security findings. Learn more about DevOps security [support and prerequisites](devops-support.md).
-
-Starting March 1, 2024, Defender CSPM must be enabled to have premium DevOps security capabilities that include code-to-cloud contextualization powering security explorer and attack paths and pull request annotations for Infrastructure-as-Code security findings. See DevOps security [support and prerequisites](devops-support.md) to learn more.
+> [!NOTE]
+> Starting March 1, 2024, Defender CSPM must be enabled to have premium DevOps security capabilities that include code-to-cloud contextualization powering security explorer and attack paths and pull request annotations for Infrastructure-as-Code security findings. See DevOps security [support and prerequisites](devops-support.md) to learn more.
## Integrations (preview)
defender-for-cloud Concept Data Security Posture Prepare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-data-security-posture-prepare.md
Previously updated : 11/15/2023 Last updated : 01/07/2024
The table summarizes support for data-aware posture management.
|What GCP data resources can I discover? | GCP storage buckets<br/> Standard Class<br/> Geo: region, dual region, multi region |
|What permissions do I need for discovery? | Storage account: Subscription Owner<br/> **or**<br/> `Microsoft.Authorization/roleAssignments/*` (read, write, delete) **and** `Microsoft.Security/pricings/*` (read, write, delete) **and** `Microsoft.Security/pricings/SecurityOperators` (read, write)<br/><br/> Amazon S3 buckets and RDS instances: AWS account permission to run Cloud Formation (to create a role). <br/><br/>GCP storage buckets: Google account permission to run script (to create a role). |
|What file types are supported for sensitive data discovery? | Supported file types (you can't select a subset) - .doc, .docm, .docx, .dot, .gz, .odp, .ods, .odt, .pdf, .pot, .pps, .ppsx, .ppt, .pptm, .pptx, .xlc, .xls, .xlsb, .xlsm, .xlsx, .xlt, .csv, .json, .psv, .ssv, .tsv, .txt, .xml, .parquet, .avro, .orc.|
-|What Azure regions are supported? | You can discover Azure storage accounts in:<br/><br/> Australia Central; Australia Central 2; Australia East; Australia Southeast; Brazil South; Canada Central; Canada East; Central India; Central US; East Asia; East US; East US 2; France Central; Germany West Central; Japan East; Japan West: Jio India West: North Central US; North Europe; Norway East; South Africa North: South Central US; South India; Sweden Central; Switzerland North; UAE North; UK South; UK West: West Central US; West Europe; West US, West US3.<br/><br/> You can discover Azure SQL Databases in any region where Defender CSPM and Azure SQL Databases are supported. |
-|What AWS regions are supported? | S3:<br /><br />Asia Pacific (Mumbai); Asia Pacific (Singapore); Asia Pacific (Sydney); Asia Pacific (Tokyo); Canada (Montreal); Europe (Frankfurt); Europe (Ireland); Europe (London); Europe (Paris); Europe (Stockholm); South America (São Paulo); US East (Ohio); US East (N. Virginia); US West (N. California): US West (Oregon).<br/><br/><br />RDS:<br /><br />Africa (Capetown); Asia Pacific (Hong Kong SAR); Asia Pacific (Hyderabad); Asia Pacific (Melbourne); Asia Pacific (Mumbai); Asia Pacific (Osaka); Asia Pacific (Seoul); Asia Pacific (Singapore); Asia Pacific (Sydney); Asia Pacific (Tokyo); Canada (Central); Europe (Frankfurt); Europe (Ireland); Europe (London); Europe (Paris); Europe (Stockholm); Europe (Zurich); Middle East (UAE); South America (São Paulo); US East (Ohio); US East (N. Virginia); US West (N. California): US West (Oregon).<br /><br /> Discovery is done locally within the region. |
+|What Azure regions are supported? | You can discover Azure storage accounts in:<br/><br/> Asia East; Asia South East; Australia Central; Australia Central 2; Australia East; Australia South East; Brazil South; Brazil Southeast; Canada Central; Canada East; Europe North; Europe West; France Central; France South; Germany North; Germany West Central; India Central; India South; Japan East; Japan West; Jio India West; Korea Central; Korea South; Norway East; Norway West; South Africa North; South Africa West; Sweden Central; Switzerland North; Switzerland West; UAE North; UK South; UK West; US Central; US East; US East 2; US North Central; US South Central; US West; US West 2; US West 3; US West Central; <br/><br/> You can discover Azure SQL Databases in any region where Defender CSPM and Azure SQL Databases are supported. |
+|What AWS regions are supported? | S3:<br /><br />Asia Pacific (Mumbai); Asia Pacific (Singapore); Asia Pacific (Sydney); Asia Pacific (Tokyo); Canada (Montreal); Europe (Frankfurt); Europe (Ireland); Europe (London); Europe (Paris); Europe (Stockholm); South America (São Paulo); US East (Ohio); US East (N. Virginia); US West (N. California); US West (Oregon).<br/><br/><br />RDS:<br /><br/>Africa (Capetown); Asia Pacific (Hong Kong SAR); Asia Pacific (Hyderabad); Asia Pacific (Melbourne); Asia Pacific (Mumbai); Asia Pacific (Osaka); Asia Pacific (Seoul); Asia Pacific (Singapore); Asia Pacific (Sydney); Asia Pacific (Tokyo); Canada (Central); Europe (Frankfurt); Europe (Ireland); Europe (London); Europe (Paris); Europe (Stockholm); Europe (Zurich); Middle East (UAE); South America (São Paulo); US East (Ohio); US East (N. Virginia); US West (N. California); US West (Oregon).<br /><br /> Discovery is done locally within the region. |
|What GCP regions are supported? | europe-west1, us-east1, us-west1, us-central1, us-east4, asia-south1, northamerica-northeast1|
|Do I need to install an agent? | No, discovery requires no agent installation. |
-|What's the cost? | The feature is included with the Defender CSPM and Defender for Storage plans, and doesnΓÇÖt incur additional costs except for the respective plan costs. |
+|What's the cost? | The feature is included with the Defender CSPM and Defender for Storage plans, and doesn't incur extra costs except for the respective plan costs. |
|What permissions do I need to view/edit data sensitivity settings? | You need one of these Microsoft Entra roles: Global Administrator, Compliance Administrator, Compliance Data Administrator, Security Administrator, Security Operator.|
| What permissions do I need to perform onboarding? | You need one of these Microsoft Entra roles: Security Admin, Contributor, Owner on the subscription level (where the GCP project/s reside in). For consuming the security findings: Security Reader, Security Admin, Reader, Contributor, Owner on the subscription level (where the GCP project/s reside). |

## Configuring data sensitivity settings
-The main steps for configuring data sensitivity setting include:
+The main steps for configuring data sensitivity settings include:
-- [Import custom sensitive info types/labels from Microsoft Purview compliance portal](data-sensitivity-settings.md#import-custom-sensitive-info-typeslabels)
+- [Import custom sensitivity info types/labels from Microsoft Purview compliance portal](data-sensitivity-settings.md#import-custom-sensitivity-info-typeslabels)
- [Customize sensitive data categories/types](data-sensitivity-settings.md#customize-sensitive-data-categoriestypes)
- [Set the threshold for sensitivity labels](data-sensitivity-settings.md#set-the-threshold-for-sensitive-data-labels)
defender-for-cloud Data Sensitivity Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/data-sensitivity-settings.md
description: Learn how to customize data sensitivity settings in Defender for Cl
Previously updated : 09/05/2023 Last updated : 01/04/2024 # Customize data sensitivity settings
This configuration helps you focus on your critical sensitive resources and impr
Changes in sensitivity settings take effect the next time that resources are discovered.
-## Import custom sensitive info types/labels
+## Import custom sensitivity info types/labels
+
+To import custom sensitivity info types and labels, you need to have Enterprise Mobility and Security E5/A5/G5 licensing. Learn more about [sensitivity labeling licensing](/office365/servicedescriptions/microsoft-365-service-descriptions/microsoft-365-tenantlevel-services-licensing-guidance/microsoft-365-security-compliance-licensing-guidance#microsoft-purview-information-protection-sensitivity-labeling).
Defender for Cloud uses built-in sensitive info types. You can optionally import your own custom sensitive info types and labels from Microsoft Purview compliance portal to align with your organization's needs.
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
Microsoft Defender for Cloud CSPM service acquires a Microsoft Entra token with
The Microsoft Entra token is exchanged with AWS short living credentials and Defender for Cloud's CSPM service assumes the CSPM IAM role (assumed with web identity).
-Since the principle of the role is a federated identity as defined in a trust relationship policy, the AWS identity provider validates the Microsoft Entra token against the Microsoft Entra ID through a process that includes:
+Since the principal of the role is a federated identity as defined in a trust relationship policy, the AWS identity provider validates the Microsoft Entra token against the Microsoft Entra ID through a process that includes:
- audience validation
-- signing of the token
+- token digital signature validation
- certificate thumbprint

The Microsoft Defender for Cloud CSPM role is assumed only after the validation conditions defined at the trust relationship have been met. The conditions defined for the role level are used for validation within AWS and allow only the Microsoft Defender for Cloud CSPM application (validated audience) access to the specific role (and not any other Microsoft token).
-After the Microsoft Entra token validated by the AWS identity provider, the AWS STS exchanges the token with AWS short-living credentials which CSPM service uses to scan the AWS account.
+After the Microsoft Entra token is validated by the AWS identity provider, the AWS STS exchanges the token with AWS short-living credentials which the CSPM service uses to scan the AWS account.
## Prerequisites
Connecting your AWS account is part of the multicloud experience available in Mi
- Set up your [on-premises machines](quickstart-onboard-machines.md) and [GCP projects](quickstart-onboard-gcp.md).
- Get answers to [common questions](faq-general.yml) about onboarding your AWS account.
- [Troubleshoot your multicloud connectors](troubleshooting-guide.md#troubleshooting-the-native-multicloud-connector).
+
defender-for-cloud Recommendations Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference.md
description: This article lists Microsoft Defender for Cloud's security recommen
Previously updated : 09/27/2023 Last updated : 10/05/2023
impact on your secure score.
|Microsoft Defender for APIs should be enabled|Enable the Defender for APIs plan to discover and protect API resources against attacks and security misconfigurations. [Learn more](defender-for-apis-deploy.md)|High|
|Azure API Management APIs should be onboarded to Defender for APIs. | Onboarding APIs to Defender for APIs requires compute and memory utilization on the Azure API Management service. Monitor performance of your Azure API Management service while onboarding APIs, and scale out your Azure API Management resources as needed.|High|
|API endpoints that are unused should be disabled and removed from the Azure API Management service|As a security best practice, API endpoints that haven't received traffic for 30 days are considered unused, and should be removed from the Azure API Management service. Keeping unused API endpoints might pose a security risk. These might be APIs that should have been deprecated from the Azure API Management service, but have accidentally been left active. Such APIs typically do not receive the most up-to-date security coverage.|Low|
-|API endpoints in Azure API Management should be authenticated|API endpoints published within Azure API Management should enforce authentication to help minimize security risk. Authentication mechanisms are sometimes implemented incorrectly or are missing. This allows attackers to exploit implementation flaws and to access data. For APIs published in Azure API Management, this recommendation assesses authentication though verifying the presence of Azure API Management subscription keys for APIs or products where subscription is required, and the execution of policies for validating [JWT](/azure/api-management/validate-jwt-policy), [client certificates](/azure/api-management/validate-client-certificate-policy), and [Microsoft Entra](/azure/api-management/validate-azure-ad-token-policy) tokens. If none of these authentication mechanisms are executed during the API call, the API will receive this recommendation.|High|
+|API endpoints in Azure API Management should be authenticated|API endpoints published within Azure API Management should enforce authentication to help minimize security risk. Authentication mechanisms are sometimes implemented incorrectly or are missing. This allows attackers to exploit implementation flaws and to access data. For APIs published in Azure API Management, this recommendation assesses authentication through verifying the presence of Azure API Management subscription keys for APIs or products where subscription is required, and the execution of policies for validating [JWT](/azure/api-management/validate-jwt-policy), [client certificates](/azure/api-management/validate-client-certificate-policy), and [Microsoft Entra](/azure/api-management/validate-azure-ad-token-policy) tokens. If none of these authentication mechanisms are executed during the API call, the API will receive this recommendation.|High|
## API management recommendations
defender-for-cloud Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md
This page provides you with information about:
- Bug fixes
- Deprecated functionality
+## July 2023
+
+Updates in July include:
+
+|Date |Update |
+|-|-|
+| July 31 | [Preview release of containers Vulnerability Assessment powered by Microsoft Defender Vulnerability Management (MDVM) in Defender for Containers and Defender for Container Registries](#preview-release-of-containers-vulnerability-assessment-powered-by-microsoft-defender-vulnerability-management-mdvm-in-defender-for-containers-and-defender-for-container-registries) |
+| July 30 | [Agentless container posture in Defender CSPM is now Generally Available](#agentless-container-posture-in-defender-cspm-is-now-generally-available) |
+| July 20 | [Management of automatic updates to Defender for Endpoint for Linux](#management-of-automatic-updates-to-defender-for-endpoint-for-linux) |
+| July 18 | [Agentless secrets scanning for virtual machines in Defender for servers P2 & Defender CSPM](#agentless-secrets-scanning-for-virtual-machines-in-defender-for-servers-p2--defender-cspm) |
+| July 12 | [New Security alert in Defender for Servers plan 2: Detecting Potential Attacks leveraging Azure VM GPU driver extensions](#new-security-alert-in-defender-for-servers-plan-2-detecting-potential-attacks-leveraging-azure-vm-gpu-driver-extensions) |
+| July 9 | [Support for disabling specific vulnerability findings](#support-for-disabling-specific-vulnerability-findings) |
+| July 1 | [Data Aware Security Posture is now Generally Available](#data-aware-security-posture-is-now-generally-available) |
+
+### Preview release of containers Vulnerability Assessment powered by Microsoft Defender Vulnerability Management (MDVM) in Defender for Containers and Defender for Container Registries
+
+July 31, 2023
+
+We're announcing the release of Vulnerability Assessment (VA) for Linux container images in Azure container registries powered by Microsoft Defender Vulnerability Management (MDVM) in Defender for Containers and Defender for Container Registries. The new container VA offering will be provided alongside our existing Container VA offering powered by Qualys in both Defender for Containers and Defender for Container Registries, and includes daily rescans of container images, exploitability information, support for OS and programming languages (SCA), and more.
+
+This new offering will start rolling out today, and is expected to be available to all customers by August 7.
+
+For more information, see [Container Vulnerability Assessment powered by MDVM](agentless-vulnerability-assessment-azure.md) and [Microsoft Defender Vulnerability Management (MDVM)](/microsoft-365/security/defender-vulnerability-management/defender-vulnerability-management).
+
+### Agentless container posture in Defender CSPM is now Generally Available
+
+July 30, 2023
+
+Agentless container posture capabilities are now Generally Available (GA) as part of the Defender CSPM (Cloud Security Posture Management) plan.
+
+Learn more about [agentless container posture in Defender CSPM](concept-agentless-containers.md).
+
+### Management of automatic updates to Defender for Endpoint for Linux
+
+July 20, 2023
+
+By default, Defender for Cloud attempts to update your Defender for Endpoint for Linux agents onboarded with the `MDE.Linux` extension. With this release, you can manage this setting and opt-out from the default configuration to manage your update cycles manually.
+
+Learn how to [manage automatic updates configuration for Linux](integration-defender-for-endpoint.md#manage-automatic-updates-configuration-for-linux).
+
+### Agentless secrets scanning for virtual machines in Defender for servers P2 & Defender CSPM
+
+July 18, 2023
+
+Secrets scanning is now available as part of the agentless scanning in Defender for Servers P2 and Defender CSPM. This capability helps to detect unmanaged and insecure secrets saved on virtual machines in Azure or AWS resources that can be used to move laterally in the network. If secrets are detected, Defender for Cloud can help to prioritize and take actionable remediation steps to minimize the risk of lateral movement, all without affecting your machine's performance.
+
+For more information about how to protect your secrets with secrets scanning, see [Manage secrets with agentless secrets scanning](secret-scanning.md).
+
+### New security alert in Defender for Servers plan 2: detecting potential attacks leveraging Azure VM GPU driver extensions
+
+July 12, 2023
+
+This alert focuses on identifying suspicious activities leveraging Azure virtual machine **GPU driver extensions** and provides insights into attackers' attempts to compromise your virtual machines. The alert targets suspicious deployments of GPU driver extensions; such extensions are often abused by threat actors to utilize the full power of the GPU card and perform cryptojacking.
+
+| Alert Display Name <br> (Alert Type) | Description | Severity | MITRE Tactic |
+|||||
+| Suspicious installation of GPU extension in your virtual machine (Preview) <br> (VM_GPUDriverExtensionUnusualExecution) | Suspicious installation of a GPU extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers might use the GPU driver extension to install GPU drivers on your virtual machine via the Azure Resource Manager to perform cryptojacking. | Low | Impact |
+
+For a complete list of alerts, see the [reference table for all security alerts in Microsoft Defender for Cloud](alerts-reference.md).
+
+### Support for disabling specific vulnerability findings
+
+July 9, 2023
+
+Release of support for disabling vulnerability findings for your container registry images or running images as part of agentless container posture. If you have an organizational need to ignore a vulnerability finding on your container registry image, rather than remediate it, you can optionally disable it. Disabled findings don't affect your secure score or generate unwanted noise.
+
+Learn how to [disable vulnerability assessment findings on Container registry images](disable-vulnerability-findings-containers.md).
+
+### Data Aware Security Posture is now Generally Available
+
+July 1, 2023
+
+Data-aware security posture in Microsoft Defender for Cloud is now Generally Available. It helps customers reduce data risk and respond to data breaches. Using data-aware security posture you can:
+
+- Automatically discover sensitive data resources across Azure and AWS.
+- Evaluate data sensitivity, data exposure, and how data flows across the organization.
+- Proactively and continuously uncover risks that might lead to data breaches.
+- Detect suspicious activities that might indicate ongoing threats to sensitive data resources.
+
+For more information, see [Data-aware security posture in Microsoft Defender for Cloud](concept-data-security-posture.md).
+ ## June 2023 Updates in June include:
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes description: This page is updated frequently with the latest updates in Defender for Cloud. Previously updated : 12/26/2023 Last updated : 01/03/2024 # What's new in Microsoft Defender for Cloud?
To learn about *planned* changes that are coming soon to Defender for Cloud, see
If you're looking for items older than six months, you can find them in the [Archive for What's new in Microsoft Defender for Cloud](release-notes-archive.md).
+## January 2024
+
+| Date | Update |
+|--|--|
+| January 4 | [Recommendations released for preview: Nine new Azure security recommendations](#recommendations-released-for-preview-nine-new-azure-security-recommendations) |
+
+### Recommendations released for preview: Nine new Azure security recommendations
+
+January 4, 2024
+
+We have added nine new Azure security recommendations aligned with the Microsoft Cloud Security Benchmark. These new recommendations are currently in public preview.
+
+|Recommendation | Description | Severity |
+|-|-|-|
+| [Cognitive Services accounts should have local authentication methods disabled](recommendations-reference.md#identityandaccess-recommendations) | Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: https://aka.ms/cs/auth. (Related policy: [Cognitive Services accounts should have local authentication methods disabled](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f71ef260a-8f18-47b7-abcb-62d0673d94dc)). | Low |
+| [Cognitive Services should use private link](recommendations-reference.md#data-recommendations) | Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Cognitive Services, you'll reduce the potential for data leakage. Learn more about [private links](https://go.microsoft.com/fwlink/?linkid=2129800). (Related policy: [Cognitive Services should use private link](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fcddd188c-4b82-4c48-a19d-ddf74ee66a01)). | Medium |
+| [Virtual machines and virtual machine scale sets should have encryption at host enabled](recommendations-reference.md#compute-recommendations) | Use encryption at host to get end-to-end encryption for your virtual machine and virtual machine scale set data. Encryption at host enables encryption at rest for your temporary disk and OS/data disk caches. Temporary and ephemeral OS disks are encrypted with platform-managed keys when encryption at host is enabled. OS/data disk caches are encrypted at rest with either customer-managed or platform-managed key, depending on the encryption type selected on the disk. Learn more at https://aka.ms/vm-hbe. (Related policy: [Virtual machines and virtual machine scale sets should have encryption at host enabled](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2ffc4d8e41-e223-45ea-9bf5-eada37891d87)). | Medium |
+| [Azure Cosmos DB should disable public network access](recommendations-reference.md#data-recommendations) | Disabling public network access improves security by ensuring that your Cosmos DB account isn't exposed on the public internet. Creating private endpoints can limit exposure of your Cosmos DB account. [Learn more](/azure/cosmos-db/how-to-configure-private-endpoints#blocking-public-network-access-during-account-creation). (Related policy: [Azure Cosmos DB should disable public network access](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f797b37f7-06b8-444c-b1ad-fc62867f335a)). | Medium |
+| [Cosmos DB accounts should use private link](recommendations-reference.md#data-recommendations) | Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Cosmos DB account, data leakage risks are reduced. Learn more about [private links](/azure/cosmos-db/how-to-configure-private-endpoints). (Related policy: [Cosmos DB accounts should use private link](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f58440f8a-10c5-4151-bdce-dfbaad4a20b7)). | Medium |
+| [VPN gateways should use only Azure Active Directory (Azure AD) authentication for point-to-site users](recommendations-reference.md#identityandaccess-recommendations) | Disabling local authentication methods improves security by ensuring that VPN Gateways use only Azure Active Directory identities for authentication. Learn more about [Azure AD authentication](/azure/vpn-gateway/openvpn-azure-ad-tenant). (Related policy: [VPN gateways should use only Azure Active Directory (Azure AD) authentication for point-to-site users](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f21a6bc25-125e-4d13-b82d-2e19b7208ab7)). | Medium |
+| [Azure SQL Database should be running TLS version 1.2 or newer](recommendations-reference.md#data-recommendations) | Setting TLS version to 1.2 or newer improves security by ensuring your Azure SQL Database can only be accessed from clients using TLS 1.2 or newer. Using versions of TLS less than 1.2 is not recommended since they have well-documented security vulnerabilities. (Related policy: [Azure SQL Database should be running TLS version 1.2 or newer](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f32e6bbec-16b6-44c2-be37-c5b672d103cf)). | Medium |
+| [Azure SQL Managed Instances should disable public network access](recommendations-reference.md#data-recommendations) | Disabling public network access (public endpoint) on Azure SQL Managed Instances improves security by ensuring that they can only be accessed from inside their virtual networks or via Private Endpoints. Learn more about [public network access](https://aka.ms/mi-public-endpoint). (Related policy: [Azure SQL Managed Instances should disable public network access](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f9dfea752-dd46-4766-aed1-c355fa93fb91)). | Medium |
+| [Storage accounts should prevent shared key access](recommendations-reference.md#data-recommendations) | Audit requirement of Azure Active Directory (Azure AD) to authorize requests for your storage account. By default, requests can be authorized with either Azure Active Directory credentials, or by using the account access key for Shared Key authorization. Of these two types of authorization, Azure AD provides superior security and ease of use over shared Key, and is recommended by Microsoft. (Related policy: [Storage accounts should prevent shared key access](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f8c6a50c6-9ffd-4ae7-986f-5fa6111f9a54)). |Medium |
+
+See the [list of security recommendations](recommendations-reference.md).
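+
+As an example of remediating one of these recommendations, here's a hedged Azure CLI sketch for the TLS recommendation; `<server-name>` and `<resource-group>` are placeholders for your own resources:
+
+```bash
+# Require TLS 1.2 or newer for client connections to an Azure SQL logical server
+az sql server update \
+  --name <server-name> \
+  --resource-group <resource-group> \
+  --minimal-tls-version 1.2
+```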
+ ## December 2023 | Date | Update |
For a complete list of alerts, see the [reference table for all security alerts
| November 27 | [General availability of agentless secrets scanning in Defender for Servers and Defender CSPM](#general-availability-of-agentless-secrets-scanning-in-defender-for-servers-and-defender-cspm) | | November 22 | [Enable permissions management with Defender for Cloud (Preview)](#enable-permissions-management-with-defender-for-cloud-preview) | | November 22 | [Defender for Cloud integration with ServiceNow](#defender-for-cloud-integration-with-servicenow) |
-| November 20| [General Availability of the autoprovisioning process for SQL Servers on machines plan](#general-availability-of-the-autoprovisioning-process-for-sql-servers-on-machines-plan)|
-| November 15 | [General availability of Defender for APIs](#general-availability-of-defender-for-apis) |
+| November 20 | [General Availability of the autoprovisioning process for SQL Servers on machines plan](#general-availability-of-the-autoprovisioning-process-for-sql-servers-on-machines-plan)|
+| November 15 | [General availability of Defender for APIs](#general-availability-of-defender-for-apis) |
| November 15 | [Defender for Cloud is now integrated with Microsoft 365 Defender (Preview)](#defender-for-cloud-is-now-integrated-with-microsoft-365-defender-preview) | | November 15 | [General availability of Containers Vulnerability Assessment powered by Microsoft Defender Vulnerability Management (MDVM) in Defender for Containers and Defender for Container Registries](#general-availability-of-containers-vulnerability-assessment-powered-by-microsoft-defender-vulnerability-management-mdvm-in-defender-for-containers-and-defender-for-container-registries) | | November 15 | [Change to Container Vulnerability Assessments recommendation names](#change-to-container-vulnerability-assessments-recommendation-names) |
Here's a table of the new alerts.
|Alert (alert type)|Description|MITRE tactics|Severity| |-|-|-|-| | **Suspicious failure installing GPU extension in your subscription (Preview)**<br>(VM_GPUExtensionSuspiciousFailure) | Suspicious intent of installing a GPU extension on unsupported VMs. This extension should be installed on virtual machines equipped with a graphic processor, and in this case the virtual machines aren't equipped with such. These failures can be seen when malicious adversaries execute multiple installations of such extension for crypto-mining purposes. | Impact | Medium |
-| **Suspicious installation of a GPU extension was detected on your virtual machine (Preview)**<br>(VM_GPUDriverExtensionUnusualExecution)<br>*This alert was [released in July 2023](#new-security-alert-in-defender-for-servers-plan-2-detecting-potential-attacks-leveraging-azure-vm-gpu-driver-extensions).* | Suspicious installation of a GPU extension was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers might use the GPU driver extension to install GPU drivers on your virtual machine via the Azure Resource Manager to perform cryptojacking. This activity is deemed suspicious as the principal's behavior departs from its usual patterns. | Impact | Low |
+| **Suspicious installation of a GPU extension was detected on your virtual machine (Preview)**<br>(VM_GPUDriverExtensionUnusualExecution)<br>*This alert was [released in July 2023](release-notes-archive.md#new-security-alert-in-defender-for-servers-plan-2-detecting-potential-attacks-leveraging-azure-vm-gpu-driver-extensions).* | Suspicious installation of a GPU extension was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers might use the GPU driver extension to install GPU drivers on your virtual machine via the Azure Resource Manager to perform cryptojacking. This activity is deemed suspicious as the principal's behavior departs from its usual patterns. | Impact | Low |
| **Run Command with a suspicious script was detected on your virtual machine (Preview)**<br>(VM_RunCommandSuspiciousScript) | A Run Command with a suspicious script was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers might use Run Command to execute malicious code with high privileges on your virtual machine via the Azure Resource Manager. The script is deemed suspicious as certain parts were identified as being potentially malicious. | Execution | High | | **Suspicious unauthorized Run Command usage was detected on your virtual machine (Preview)**<br>(VM_RunCommandSuspiciousFailure) | Suspicious unauthorized usage of Run Command has failed and was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers might attempt to use Run Command to execute malicious code with high privileges on your virtual machines via the Azure Resource Manager. This activity is deemed suspicious as it hasn't been commonly seen before. | Execution | Medium | | **Suspicious Run Command usage was detected on your virtual machine (Preview)**<br>(VM_RunCommandSuspiciousUsage) | Suspicious usage of Run Command was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers might use Run Command to execute malicious code with high privileges on your virtual machines via the Azure Resource Manager. This activity is deemed suspicious as it hasn't been commonly seen before. | Execution | Low |
Existing customers of Defender for Key-Vault, Defender for Resource Manager, and
Learn more about the pricing for these plans in the [Defender for Cloud pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/?v=17.23h).
-## July 2023
-
-Updates in July include:
-
-|Date |Update |
-|-|-|
-| July 31 | [Preview release of containers Vulnerability Assessment powered by Microsoft Defender Vulnerability Management (MDVM) in Defender for Containers and Defender for Container Registries](#preview-release-of-containers-vulnerability-assessment-powered-by-microsoft-defender-vulnerability-management-mdvm-in-defender-for-containers-and-defender-for-container-registries) |
-| July 30 | [Agentless container posture in Defender CSPM is now Generally Available](#agentless-container-posture-in-defender-cspm-is-now-generally-available) |
-| July 20 | [Management of automatic updates to Defender for Endpoint for Linux](#management-of-automatic-updates-to-defender-for-endpoint-for-linux) |
-| July 18 | [Agentless secrets scanning for virtual machines in Defender for servers P2 & Defender CSPM](#agentless-secrets-scanning-for-virtual-machines-in-defender-for-servers-p2--defender-cspm) |
-| July 12 | [New Security alert in Defender for Servers plan 2: Detecting Potential Attacks leveraging Azure VM GPU driver extensions](#new-security-alert-in-defender-for-servers-plan-2-detecting-potential-attacks-leveraging-azure-vm-gpu-driver-extensions) |
-| July 9 | [Support for disabling specific vulnerability findings](#support-for-disabling-specific-vulnerability-findings) |
-| July 1 | [Data Aware Security Posture is now Generally Available](#data-aware-security-posture-is-now-generally-available) |
-
-### Preview release of containers Vulnerability Assessment powered by Microsoft Defender Vulnerability Management (MDVM) in Defender for Containers and Defender for Container Registries
-
-July 31, 2023
-
-We're announcing the release of Vulnerability Assessment (VA) for Linux container images in Azure container registries powered by Microsoft Defender Vulnerability Management (MDVM) in Defender for Containers and Defender for Container Registries. The new container VA offering will be provided alongside our existing Container VA offering powered by Qualys in both Defender for Containers and Defender for Container Registries, and include daily rescans of container images, exploitability information, support for OS and programming languages (SCA) and more.
-
-This new offering will start rolling out today, and is expected to be available to all customers by August 7.
-
-For more information, see [Container Vulnerability Assessment powered by MDVM](agentless-vulnerability-assessment-azure.md) and [Microsoft Defender Vulnerability Management (MDVM)](/microsoft-365/security/defender-vulnerability-management/defender-vulnerability-management).
-
-### Agentless container posture in Defender CSPM is now Generally Available
-
-July 30, 2023
-
-Agentless container posture capabilities are now Generally Available (GA) as part of the Defender CSPM (Cloud Security Posture Management) plan.
-
-Learn more about [agentless container posture in Defender CSPM](concept-agentless-containers.md).
-
-### Management of automatic updates to Defender for Endpoint for Linux
-
-July 20, 2023
-
-By default, Defender for Cloud attempts to update your Defender for Endpoint for Linux agents onboarded with the `MDE.Linux` extension. With this release, you can manage this setting and opt-out from the default configuration to manage your update cycles manually.
-
-Learn how to [manage automatic updates configuration for Linux](integration-defender-for-endpoint.md#manage-automatic-updates-configuration-for-linux).
-
-### Agentless secrets scanning for virtual machines in Defender for servers P2 & Defender CSPM
-
-July 18, 2023
-
-Secrets scanning is now available as part of the agentless scanning in Defender for Servers P2 and Defender CSPM. This capability helps to detect unmanaged and insecure secrets saved on virtual machines in Azure or AWS resources that can be used to move laterally in the network. If secrets are detected, Defender for Cloud can help to prioritize and take actionable remediation steps to minimize the risk of lateral movement, all without affecting your machine's performance.
-
-For more information about how to protect your secrets with secrets scanning, see [Manage secrets with agentless secrets scanning](secret-scanning.md).
-
-### New security alert in Defender for Servers plan 2: detecting potential attacks leveraging Azure VM GPU driver extensions
-
-July 12, 2023
-
-This alert focuses on identifying suspicious activities leveraging Azure virtual machine **GPU driver extensions** and provides insights into attackers' attempts to compromise your virtual machines. The alert targets suspicious deployments of GPU driver extensions; such extensions are often abused by threat actors to utilize the full power of the GPU card and perform cryptojacking.
-
-| Alert Display Name <br> (Alert Type) | Description | Severity | MITRE Tactic |
-|||||
-| Suspicious installation of GPU extension in your virtual machine (Preview) <br> (VM_GPUDriverExtensionUnusualExecution) | Suspicious installation of a GPU extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers might use the GPU driver extension to install GPU drivers on your virtual machine via the Azure Resource Manager to perform cryptojacking. | Low | Impact |
-
-For a complete list of alerts, see the [reference table for all security alerts in Microsoft Defender for Cloud](alerts-reference.md).
-
-### Support for disabling specific vulnerability findings
-
-July 9, 2023
-
-Release of support for disabling vulnerability findings for your container registry images or running images as part of agentless container posture. If you have an organizational need to ignore a vulnerability finding on your container registry image, rather than remediate it, you can optionally disable it. Disabled findings don't affect your secure score or generate unwanted noise.
-
-Learn how to [disable vulnerability assessment findings on Container registry images](disable-vulnerability-findings-containers.md).
-
-### Data Aware Security Posture is now Generally Available
-
-July 1, 2023
-
-Data-aware security posture in Microsoft Defender for Cloud is now Generally Available. It helps customers to reduce data risk, and respond to data breaches. Using data-aware security posture you can:
--- Automatically discover sensitive data resources across Azure and AWS.-- Evaluate data sensitivity, data exposure, and how data flows across the organization.-- Proactively and continuously uncover risks that might lead to data breaches.-- Detect suspicious activities that might indicate ongoing threats to sensitive data resources-
-For more information, see [Data-aware security posture in Microsoft Defender for Cloud](concept-data-security-posture.md).
- ## Next steps For past changes to Defender for Cloud, see [Archive for what's new in Defender for Cloud?](release-notes-archive.md).
defender-for-cloud Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/troubleshooting-guide.md
Last updated 06/18/2023 + # Microsoft Defender for Cloud Troubleshooting Guide This guide is for information technology (IT) professionals, information security analysts, and cloud administrators whose organizations need to troubleshoot Defender for Cloud related issues.
AWS connector issues:
- Make sure that EKS clusters are successfully connected to Arc-enabled Kubernetes. - If you don't see AWS data in Defender for Cloud, make sure that the AWS resources required to send data to Defender for Cloud exist in the AWS account.
+Defender API calls to AWS:
+
+Cost impact: When you onboard your AWS single or management account, our Discovery service initiates an immediate scan of your environment by executing API calls to various service endpoints in order to retrieve all resources that we secure.
+
+Following this initial scan, the service will continue to periodically scan your environment at the interval that you configured during onboarding. It's important to note that in AWS, each API call to the account generates a lookup event that is recorded in the CloudTrail resource.
+
+The CloudTrail resource incurs costs, and the pricing details can be found in [AWS CloudTrail Pricing](https://aws.amazon.com/cloudtrail/pricing/).
+
+Furthermore, if you have connected your CloudTrail to GuardDuty, you're also responsible for associated costs, which can be found in the [GuardDuty documentation](https://docs.aws.amazon.com/guardduty/latest/ug/monitoring_costs.html).
+
+**Getting the number of native API calls executed by Defender for Cloud**:
+
+There are two ways to get the number of calls made by Defender for Cloud, and both rely on querying AWS CloudTrail logs:
+
+- **CloudTrail and Athena tables**:
+
+1. Use an existing *Athena table* or create a new one. For more information, see [Querying AWS CloudTrail logs](https://docs.aws.amazon.com/athena/latest/ug/cloudtrail-logs.html).
+
+1. Go to the Athena table and run whichever of the predefined sample queries below fits your needs.
+
+- **CloudTrail lake**:
+
+1. Use an existing *event data store* or create a new one. For more information, see [Working with AWS CloudTrail Lake](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-lake.html).
+
+1. Go to the event data store and run whichever of the predefined sample queries below fits your needs.
+
+ Sample Queries:
+
+ - List the number of overall API calls by Defender for Cloud:
+
+ ```sql
+ SELECT COUNT(*) AS overallApiCallsCount FROM <TABLE-NAME>
+ WHERE userIdentity.arn LIKE 'arn:aws:sts::<YOUR-ACCOUNT-ID>:assumed-role/CspmMonitorAws/MicrosoftDefenderForClouds_<YOUR-AZURE-TENANT-ID>'
+ AND eventTime > TIMESTAMP '<DATETIME>'
+ ```
+
+ - List the number of overall API calls by Defender for Cloud aggregated by day:
+
+ ```sql
+    SELECT DATE(eventTime) AS apiCallsDate, COUNT(*) AS apiCallsCountByDay FROM <TABLE-NAME>
+    WHERE userIdentity.arn LIKE 'arn:aws:sts::<YOUR-ACCOUNT-ID>:assumed-role/CspmMonitorAws/MicrosoftDefenderForClouds_<YOUR-AZURE-TENANT-ID>'
+ AND eventTime > TIMESTAMP '<DATETIME>' GROUP BY DATE(eventTime)
+ ```
+
+ - List the number of overall API calls by Defender for Cloud aggregated by event name:
+
+ ```sql
+ SELECT eventName, COUNT(*) AS apiCallsCountByEventName FROM <TABLE-NAME>
+ WHERE userIdentity.arn LIKE 'arn:aws:sts::<YOUR-ACCOUNT-ID>:assumed-role/CspmMonitorAws/MicrosoftDefenderForClouds_<YOUR-AZURE-TENANT-ID>'
+ AND eventTime > TIMESTAMP '<DATETIME>' GROUP BY eventName
+ ```
+
+ - List the number of overall API calls by Defender for Cloud aggregated by region:
+
+ ```sql
+ SELECT awsRegion, COUNT(*) AS apiCallsCountByRegion FROM <TABLE-NAME>
+    WHERE userIdentity.arn LIKE 'arn:aws:sts::<YOUR-ACCOUNT-ID>:assumed-role/CspmMonitorAws/MicrosoftDefenderForClouds_<YOUR-AZURE-TENANT-ID>'
+ AND eventTime > TIMESTAMP '<DATETIME>' GROUP BY awsRegion
+ ```
+
+  - Replace `<TABLE-NAME>` with the name of your Athena table or with the ID of your event data store.
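+
+  If you'd rather submit a query from the command line than from the console, here's a hedged AWS CLI sketch; `<DATABASE>`, `<YOUR-RESULTS-BUCKET>`, and the other angle-bracket values are placeholders you supply:
+
+  ```bash
+  # Submit the first sample query to Athena; results are written to the S3 output bucket
+  aws athena start-query-execution \
+    --query-string "SELECT COUNT(*) AS overallApiCallsCount FROM <TABLE-NAME> WHERE userIdentity.arn LIKE 'arn:aws:sts::<YOUR-ACCOUNT-ID>:assumed-role/CspmMonitorAws/MicrosoftDefenderForClouds_<YOUR-AZURE-TENANT-ID>' AND eventTime > TIMESTAMP '<DATETIME>'" \
+    --query-execution-context Database=<DATABASE> \
+    --result-configuration "OutputLocation=s3://<YOUR-RESULTS-BUCKET>/"
+  ```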
+ GCP connector issues: - Make sure that the GCP Cloud Shell script completed successfully.
GCP connector issues:
- Make sure that Azure Arc endpoints are in the firewall allowlist. The GCP connector makes API calls to these endpoints to fetch the necessary onboarding files. - If the onboarding of GCP projects failed, make sure you have the `compute.regions.list` permission and the Microsoft Entra permission to create the service principal used as part of the onboarding process. Make sure that the GCP resources `WorkloadIdentityPoolId`, `WorkloadIdentityProviderId`, and `ServiceAccountEmail` are created in the GCP project.
+Defender API calls to GCP:
+
+When you onboard your GCP single project or organization, our Discovery service initiates an immediate scan of your environment by executing API calls to various service endpoints in order to retrieve all resources that we secure.
+
+Following this initial scan, the service will continue to periodically scan your environment at the interval that you configured during onboarding.
+
+**Getting the number of native API calls executed by Defender for Cloud**:
+
+ 1. Go to **Logging** > **Log Explorer**.
+
+ 1. Filter the dates as you wish (for example, 1d).
+
+ 1. To show API calls executed by Defender for Cloud, run this query:
+
+    ```text
+ protoPayload.authenticationInfo.principalEmail : "microsoft-defender"
+ ```
+
+Refer to the histogram to see the number of calls over time.
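+
+As a hedged alternative to the console, the same filter can be run with the gcloud CLI, assuming you're authenticated against the project in question:
+
+```bash
+# List log entries for API calls made by Defender for Cloud in the last day
+gcloud logging read \
+  'protoPayload.authenticationInfo.principalEmail:"microsoft-defender"' \
+  --freshness=1d \
+  --limit=20
+```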
+ ## Troubleshooting the Log Analytics agent Defender for Cloud uses the Log Analytics agent to [collect and store data](./monitoring-components.md#log-analytics-agent). The information in this article represents Defender for Cloud functionality after transition to the Log Analytics agent.
If you are not able to onboard your Azure DevOps organization, follow the follow
- It is important to know which account you are logged in to when you authorize the access, as that will be the account that is used. Your account can be associated with the same email address but also associated with different tenants. You should [check which account](https://app.vssps.visualstudio.com/profile/view) you are currently logged in on and ensure that the right account and tenant combination is selected. 1. On your profile page, select the drop-down menu to select another account.
-
+ :::image type="content" source="./media/troubleshooting-guide/authorize-select-tenant.png" alt-text="Screenshot of the Azure DevOps profile page that is used to select an account.":::
-
+ 1. After selecting the correct account/tenant combination, navigate to **Environment settings** in Defender for Cloud and edit your Azure DevOps connector. You will have the option to Re-authorize the connector, which will update the connector with the correct account/tenant combination. You should then see the correct list of organizations from the drop-down selection menu. - Ensure you have **Project Collection Administrator** role on the Azure DevOps organization you wish to onboard.
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
If you're looking for the latest release notes, you can find them in the [What's
| Planned change | Announcement date | Estimated date for change | |--|--|--|
+| [New version of Defender Agent for Defender for Containers](#new-version-of-defender-agent-for-defender-for-containers) | January 4, 2024 | February 2024 |
| [Upcoming change for the Defender for Cloud’s multicloud network requirements](#upcoming-change-for-the-defender-for-clouds-multicloud-network-requirements) | January 3, 2024 | May 2024 | | [Deprecation and severity changes to security alerts](#deprecation-and-severity-changes-to-security-alerts) | December 27, 2023 | January 2024 | | [Deprecation of two DevOps security recommendations](#deprecation-of-two-devops-security-recommendations) | November 30, 2023 | January 2024 |
If you're looking for the latest release notes, you can find them in the [What's
| [Deprecating two security incidents](#deprecating-two-security-incidents) | | November 2023 | | [Defender for Cloud plan and strategy for the Log Analytics agent deprecation](#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation) | | August 2024 |
+## New version of Defender Agent for Defender for Containers
+
+**Announcement date: January 4, 2024**
+
+**Estimated date for change: February 2024**
+
+A new version of the [Defender Agent for Defender for Containers](tutorial-enable-containers-azure.md#deploy-the-defender-agent-in-azure) will be released in February 2024. It includes performance and security improvements, support for both AMD64 and ARM64 architecture nodes (Linux only), and uses [Inspektor Gadget](https://www.inspektor-gadget.io/) as the process collection agent instead of Sysdig. The new version is only supported on Linux kernel versions 5.4 and higher, so if you have older versions of the Linux kernel, you'll need to upgrade. For more information, see [Supported host operating systems](support-matrix-defender-for-containers.md#supported-host-operating-systems).
+ ## Upcoming change for the Defender for Cloud’s multicloud network requirements **Announcement date: January 3, 2024**
-**Estimated date for change: May 2024**
+**Estimated date for change: May 2024**
Beginning May 2024, we'll be retiring the old IP addresses associated with our multicloud discovery services to accommodate improvements and ensure a more secure and efficient experience for all users.
The list is applicable to all plans and sufficient for full capability of the CS
**Announcement date: December 27, 2023**
-**Estimated date for change: January 2024**
+**Estimated date for change: January 2024**
The following security alerts are set for deprecation or are set for update to the **informational** severity level.
The following security alerts are set for deprecation or are set for update to t
- `Possible incoming SMTP brute force attempts detected (Generic_Incoming_BF_OneToOne)` - `Traffic detected from IP addresses recommended for blocking (Network_TrafficFromUnrecommendedIP)`
-
+ - **Alerts for Azure Resource Manager**: - `Privileged custom role created for your subscription in a suspicious way (Preview)(ARM_PrivilegedRoleDefinitionCreation)`
machine-learning How To Configure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-cli.md
Previously updated : 11/16/2022 Last updated : 01/07/2024
postgresql Concepts Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-backup-restore.md
Previously updated : 06/16/2021 Last updated : 12/23/2023 # Backup and restore in Azure Database for PostgreSQL - Flexible Server
These backup files can't be exported or used to create servers outside Azure Dat
## Backup frequency
-Backups on flexible servers are snapshot based. The first snapshot backup is scheduled immediately after a server is created. Snapshot backups are currently taken once daily. **The first snapshot is a full backup and consecutive snapshots are differential backups.**
+Backups on flexible servers are snapshot based. The first snapshot backup is scheduled immediately after a server is created. Snapshot backups are currently taken once daily. If none of the databases in the server receives any further modifications after the last snapshot backup is taken, snapshot backups are suspended until new modifications are made in any of the databases, at which point a new snapshot is taken immediately. **The first snapshot is a full backup and consecutive snapshots are differential backups.**
Transaction log backups happen at varied frequencies, depending on the workload and when the WAL file is filled and ready to be archived. In general, the delay (recovery point objective, or RPO) can be up to 15 minutes.
Later, the transaction logs and the daily backups are asynchronously copied to t
The estimated time to recover the server (recovery time objective, or RTO) depends on factors like the size of the database, the last database backup time, and the amount of WAL to process until the last received backup data. The overall recovery time usually takes from a few minutes up to a few hours.
-During the geo-restore, the server configurations that can be changed include virtual network settings and the ability to remove geo-redundant backup from the restored server. Changing other server configurations--such as compute, storage, or pricing tier (Burstable, General Purpose, or Memory Optimized)--during geo-restore is not supported.
+During the geo-restore, the server configurations that can be changed include virtual network settings and the ability to remove geo-redundant backup from the restored server. Changing other server configurations, such as compute, storage, or pricing tier (Burstable, General Purpose, or Memory Optimized), during geo-restore is not supported.
For more information about performing a geo-restore, see the [how-to guide](how-to-restore-server-portal.md#perform-geo-restore).
Azure Backup and Azure PostgreSQL Services have built an enterprise-class long-t
* **Where can I see the backup usage?**
- In the Azure portal, under **Monitoring**, select **Metrics**. In **Backup usage metric**, you can monitor the total backup usage.
+ In the Azure portal, under **Monitoring**, select **Metrics**. In **Backup Storage Used**, you can monitor the total backup usage.
* **What happens to my backups if I delete my server?**
postgresql Concepts Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-logging.md
Previously updated : 11/30/2021 Last updated : 12/26/2021 # Logs in Azure Database for PostgreSQL - Flexible Server
The following table describes the fields for the **PostgreSQLLogs** type. Depend
| SubscriptionId | GUID for the subscription that the server belongs to | | ResourceGroup | Name of the resource group the server belongs to | | ResourceProvider | Name of the resource provider. Always `MICROSOFT.DBFORPOSTGRESQL` |
-| ResourceType | `Servers` |
+| ResourceType | `FlexibleServers` |
| ResourceId | Resource URI | | Resource | Name of the server | | Category | `PostgreSQLLogs` |
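
As a quick check of these values, here's a hedged Azure CLI sketch that queries a Log Analytics workspace; it assumes the `log-analytics` CLI extension is installed, diagnostic logs flow to the `AzureDiagnostics` table, and `<WORKSPACE-ID>` is a placeholder for your workspace GUID:

```bash
# Show the ResourceType values recorded for PostgreSQL logs over the last day
az monitor log-analytics query \
  --workspace "<WORKSPACE-ID>" \
  --analytics-query "AzureDiagnostics | where Category == 'PostgreSQLLogs' | distinct ResourceType" \
  --timespan "P1D"
```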
postgresql Concept Reserved Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concept-reserved-pricing.md
Previously updated : 06/24/2022 Last updated : 12/12/2023 # Prepay for Azure Database for PostgreSQL compute resources with reserved capacity
Azure Database for PostgreSQL now helps you save money by prepaying for compute
## How does the instance reservation work?
-You don't need to assign the reservation to specific Azure Database for PostgreSQL servers. An already running Azure Database for PostgreSQL (or ones that are newly deployed) will automatically get the benefit of reserved pricing. By purchasing a reservation, you're pre-paying for the compute costs for a period of one or three years. As soon as you buy a reservation, the Azure database for PostgreSQL compute charges that match the reservation attributes are no longer charged at the pay-as-you go rates. A reservation does not cover software, networking, or storage charges associated with the PostgreSQL Database servers. At the end of the reservation term, the billing benefit expires, and the Azure Database for PostgreSQL are billed at the pay-as-you go price. Reservations do not auto-renew. For pricing information, see the [Azure Database for PostgreSQL reserved capacity offering](https://azure.microsoft.com/pricing/details/postgresql/). </br>
+You don't need to assign the reservation to specific Azure Database for PostgreSQL servers. Already running Azure Database for PostgreSQL servers (or newly deployed ones) automatically get the benefit of reserved pricing. By purchasing a reservation, you're prepaying for the compute costs for one or three years. As soon as you buy a reservation, the Azure Database for PostgreSQL compute charges that match the reservation attributes are no longer charged at the pay-as-you-go rates. A reservation doesn't cover software, networking, or storage charges associated with the PostgreSQL database servers. At the end of the reservation term, the billing benefit expires, and the vCores used by Azure Database for PostgreSQL are billed at the pay-as-you-go price. Reservations don't autorenew. For pricing information, see the [Azure Database for PostgreSQL reserved capacity offering](https://azure.microsoft.com/pricing/details/postgresql/). </br>
> [!IMPORTANT] > Reserved capacity pricing is available for the Azure Database for PostgreSQL in [Single server](./overview.md#azure-database-for-postgresqlsingle-server) and [Flexible Server](../flexible-server/overview.md) deployment options.
You can buy Azure Database for PostgreSQL reserved capacity in the [Azure portal
* For Enterprise subscriptions, **Add Reserved Instances** must be enabled in the [EA portal](https://ea.azure.com/). Or, if that setting is disabled, you must be an EA Admin on the subscription. * For Cloud Solution Provider (CSP) program, only the admin agents or sales agents can purchase Azure Database for PostgreSQL reserved capacity. </br>
-The details on how enterprise customers and Pay-As-You-Go customers are charged for reservation purchases, see [understand Azure reservation usage for your Enterprise enrollment](../../cost-management-billing/reservations/understand-reserved-instance-usage-ea.md) and [understand Azure reservation usage for your Pay-As-You-Go subscription](../../cost-management-billing/reservations/understand-reserved-instance-usage.md).
+For details on how enterprise customers and Pay-As-You-Go customers are charged for reservation purchases, see [understand Azure reservation usage for your Enterprise enrollment](../../cost-management-billing/reservations/understand-reserved-instance-usage-ea.md) and [understand Azure reservation usage for your Pay-As-You-Go subscription](../../cost-management-billing/reservations/understand-reserved-instance-usage.md).
## Reservation exchanges and refunds
You can exchange a reservation for another reservation of the same type, you can
## Reservation discount
-You may save up to 65% on compute costs with reserved instances. In order to find the discount for your case, please visit the [Reservation blade on the Azure portal](https://aka.ms/reservations) and check the savings per pricing tier and per region. Reserved instances help you manage your workloads, budget, and forecast better with an upfront payment for a one-year or three-year term. You can also exchange or cancel reservations as business needs change.
+You may save up to 65% on compute costs with reserved instances. In order to find the discount for your case, visit the [Reservation blade on the Azure portal](https://aka.ms/reservations) and check the savings per pricing tier and per region. Reserved instances help you manage your workloads, budget, and forecast better with an upfront payment for a one-year or three-year term. You can also exchange or cancel reservations as business needs change.
## Determine the right server size before purchase
-The size of reservation should be based on the total amount of compute used by the existing or soon-to-be-deployed servers within a specific region and using the same performance tier and hardware generation.</br>
+The size of the reservation should be based on the total amount of compute used by the existing or soon-to-be-deployed servers within a specific region that use the same performance tier and hardware generation.</br>
-For example, let's suppose that you are running one general purpose Gen5 – 32 vCore PostgreSQL database, and two memory-optimized Gen5 – 16 vCore PostgreSQL databases. Further, let's supposed that you plan to deploy within the next month an additional general purpose Gen5 – 8 vCore database server, and one memory-optimized Gen5 – 32 vCore database server. Let's suppose that you know that you will need these resources for at least one year. In this case, you should purchase a 40 (32 + 8) vCores, one-year reservation for single database general purpose - Gen5 and a 64 (2x16 + 32) vCore one year reservation for single database memory optimized - Gen5
+For example, let's suppose that you're running one general purpose Gen5 – 32 vCore PostgreSQL database, and two memory-optimized Gen5 – 16 vCore PostgreSQL databases. Further, let's suppose that you plan to deploy an additional general purpose Gen5 – 8 vCore database server, and one memory-optimized Gen5 – 32 vCore database server within the next month. Let's suppose that you know that you need these resources for at least one year. In this case, you should purchase a 40 (32 + 8) vCores, one-year reservation for single database general purpose - Gen5 and a 64 (2x16 + 32) vCore one year reservation for single database memory optimized - Gen5.
## Buy Azure Database for PostgreSQL reserved capacity 1. Sign in to the [Azure portal](https://portal.azure.com/). 2. Select **All services** > **Reservations**.
-3. Select **Add** and then in the Purchase reservations pane, select **Azure Database for PostgreSQL** to purchase a new reservation for your PostgreSQL databases.
+3. Select **Add** and then, in the Purchase reservations pane, select **Azure Database for PostgreSQL** to purchase a new reservation for your PostgreSQL databases.
4. Fill in the required fields. Existing or new databases that match the attributes you select qualify to get the reserved capacity discount. The actual number of your Azure Database for PostgreSQL servers that get the discount depend on the scope and quantity selected. :::image type="content" source="media/concepts-reserved-pricing/postgresql-reserved-price.png" alt-text="Overview of reserved pricing":::
The following table describes required fields.
| Region | The Azure region that’s covered by the Azure Database for PostgreSQL reserved capacity reservation. | Deployment Type | The Azure Database for PostgreSQL resource type that you want to buy the reservation for. | Performance Tier | The service tier for the Azure Database for PostgreSQL servers.
-| Term | One year
-| Quantity | The amount of compute resources being purchased within the Azure Database for PostgreSQL reserved capacity reservation. The quantity is a number of vCores in the selected Azure region and Performance tier that are being reserved and will get the billing discount. For example, if you are running or planning to run an Azure Database for PostgreSQL servers with the total compute capacity of Gen5 16 vCores in the East US region, then you would specify quantity as 16 to maximize the benefit for all servers.
+| Term | This term can be either One year or Three years.
+| Quantity | The amount of compute resources being purchased within the Azure Database for PostgreSQL reserved capacity reservation. Corresponds to the number of vCores in the selected Azure region and performance tier that are being reserved and get the billing discount. For example, if you're running or planning to run Azure Database for PostgreSQL servers with a total compute capacity of Gen5 16 vCores in the East US region, then you would specify the quantity as 16 to maximize the benefit for all servers.
## Reserved instances API support
For more information, see [APIs for Azure reservation automation](../../cost-man
## vCore size flexibility
-vCore size flexibility helps you scale up or down within a performance tier and region, without losing the reserved capacity benefit. If you scale to higher vCores than your reserved capacity, you will be billed for the excess vCores using pay-as-you-go pricing.
+vCore size flexibility helps you scale up or down within a performance tier and region, without losing the reserved capacity benefit. If you scale to higher vCores than your reserved capacity, you're billed for the excess vCores using pay-as-you-go pricing.
## How to view reserved instance purchase details
-You can view your reserved instance purchase details via the [Reservations menu on the left side of the Azure portal](https://aka.ms/reservations).
+You can view your reserved instance purchase details via the [Reservations](https://aka.ms/reservations) blade in the Azure portal.
## Reserved instance expiration
-You'll receive email notifications, first one 30 days prior to reservation expiry and other one at expiration. Once the reservation expires, deployed VMs will continue to run and be billed at a pay-as-you-go rate.
+You receive two email notifications: the first one 30 days before the reservation expires, and the second one at expiration. Once the reservation expires, deployed VMs continue to run and are billed at a pay-as-you-go rate.
## Need help? Contact us
sap Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/troubleshooting.md
This section describes how to troubleshoot issues that you can encounter when pe
### Unable to access keyvault: XXXXX error
-If you see an error similar to the following when running the deployment:
+If you see an error similar to the following error when running the deployment:
```text Unable to access keyvault: XXXXYYYYDEP00userBEB
This error indicates that the specified key vault doesn't exist or that the depl
Depending on the deployment stage, you can resolve this issue in the following ways:
-You can either add the IP of the environment from which you're executing the deployment (recommended) or you can allow public access to the key vault. See [Allow public access to a key vault](/azure/key-vault/general/network-security#allow-public-access-to-a-key-vault) for more information.
+You can either add the IP of the environment from which you're executing the deployment (recommended) or you can allow public access to the key vault. For more information about controlling access to the key vault, see [Allow public access to a key vault](/azure/key-vault/general/network-security#allow-public-access-to-a-key-vault).
The following variables are used to configure the key vault access:
Agent_IP = "10.0.0.5"
public_network_access_enabled = true ```
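
For reference, here's a hedged Azure CLI sketch of allowing a deployment environment's IP through the key vault firewall; `<key-vault-name>` and `<agent-ip-address>` are placeholders:

```bash
# Allow the deployment environment's IP to reach the key vault
az keyvault network-rule add \
  --name <key-vault-name> \
  --ip-address <agent-ip-address>
```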
+### Failed to get existing workspaces error
+
+If you see an error similar to the following error when running the deployment:
+
+```text
+Error: : Error retrieving keys for Storage Account "mgmtweeutfstate###": azure.BearerAuthorizer#WithAuthorization: Failed to refresh the Token for request to
+https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/MGMT-WEEU-SAP_LIBRARY/providers/Microsoft.Storage/storageAccounts/mgmtweeutfstate###/listKeys?api-version=2021-01-01
+: StatusCode=400 -- Original Error: adal: Refresh request failed. Status Code = '400'. Response body: {"error":"invalid_request","error_description":"Identity not found"} Endpoint
+http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&client_id=yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy&resource=https%3A%2F%2Fmanagement.azure.com%2F
+```
+
+This error indicates that the credential used to do the deployment doesn't have access to the storage account. To resolve this issue, assign the 'Storage Account Contributor' role to the deployment credential on the Terraform state storage account, the resource group, or the subscription (if feasible).
+
+You can verify whether the deployment is being performed using a service principal or a managed identity by checking the output of the deployment. If the deployment is using a service principal, the output contains the following section:
+
+```text
+ [set_executing_user_environment_variables]: Identifying the executing user and client
+ [set_azure_cloud_environment]: Identifying the executing cloud environment
+ [set_azure_cloud_environment]: Azure cloud environment: public
+ [set_executing_user_environment_variables]: User type: servicePrincipal
+ [set_executing_user_environment_variables]: client id: yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy
+ [set_executing_user_environment_variables]: Identified login type as 'service principal'
+ [set_executing_user_environment_variables]: Initializing state with SPN named: <SPN Name>
+ [set_executing_user_environment_variables]: exporting environment variables
+ [set_executing_user_environment_variables]: ARM environment variables:
+ ARM_CLIENT_ID: yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy
+ ARM_SUBSCRIPTION_ID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+ ARM_USE_MSI: false
+```
+
+Look for the following line in the output: "ARM_USE_MSI: false"
+
+If the deployment is using a managed identity, the output contains the following section:
+
+```text
+
+ [set_executing_user_environment_variables]: Identifying the executing user and client
+ [set_azure_cloud_environment]: Identifying the executing cloud environment
+ [set_azure_cloud_environment]: Azure cloud environment: public
+ [set_executing_user_environment_variables]: User type: servicePrincipal
+ [set_executing_user_environment_variables]: client id: systemAssignedIdentity
+ [set_executing_user_environment_variables]: logged in using 'servicePrincipal'
+ [set_executing_user_environment_variables]: unset ARM_CLIENT_SECRET
+ [set_executing_user_environment_variables]: ARM environment variables:
+ ARM_CLIENT_ID: zzzzzzzz-zzzz-zzzz-zzzz-zzzzzzzzzzzz
+ ARM_SUBSCRIPTION_ID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+ ARM_USE_MSI: true
+```
+
+Look for the following line in the output: "ARM_USE_MSI: true"
+
+You can assign the 'Storage Account Contributor' role to the deployment credential on the Terraform state storage account, the resource group, or the subscription (if feasible). Use the `ARM_CLIENT_ID` value from the deployment output.
+
+```cloudshell-interactive
+export appId="<ARM_CLIENT_ID>"
+
+az role assignment create --assignee ${appId} \
+ --role "Storage Account Contributor" \
+ --scope /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/MGMT-WEEU-SAP_LIBRARY/providers/Microsoft.Storage/storageAccounts/mgmtweeutfstate###
+```
+
+You might also need to assign the 'Reader' role to the deployment credential on the subscription containing the resource group with the Terraform state file. You can do that with the following command:
+
+```cloudshell-interactive
+export appId="<ARM_CLIENT_ID>"
+
+az role assignment create --assignee ${appId} \
+ --role "Reader" \
+ --scope /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+```
+
+### Private DNS Zone Name 'xxx' wasn't found
+
+If you see an error similar to the following error when running the deployment:
+
+```text
+Private DNS Zone Name: "privatelink.file.core.windows.net" was not found
+
+or
+
+Private DNS Zone Name: "privatelink.blob.core.windows.net" was not found
+
+or
+
+Private DNS Zone Name: "privatelink.vaultcore.azure.net" was not found
+
+```
+
+This error indicates that the private DNS zone listed in the error isn't available. You can resolve this issue by either creating the private DNS zone or providing the configuration for an existing private DNS zone. For more information on how to create a private DNS zone, see [Create a private DNS zone](/azure/dns/private-dns-getstarted-cli#create-a-private-dns-zone).
+
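+If you choose to create the missing zone yourself, here's a hedged Azure CLI sketch that uses the blob zone from the error message as an example; `<resource-group>` is a placeholder:
+
+```bash
+# Create the private DNS zone named in the error message
+az network private-dns zone create \
+  --resource-group <resource-group> \
+  --name privatelink.blob.core.windows.net
+```
+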
+You can specify the details for an existing private DNS zone by using the following variables:
+
+```terraform
+# Name of the resource group that contains the private DNS zone
+management_dns_resourcegroup_name="<resource group name for the Private DNS Zone>"
+
+# ID of the subscription that contains the resource group with the private DNS zone
+management_dns_subscription_id="<subscription id for resource group name for the Private DNS Zone>"
+
+use_custom_dns_a_registration=false
+
+```
+
+Rerun the deployment after you make these changes.
+ ### OverconstrainedAllocationRequest error
-If you see an error similar to the following when running the deployment:
+If you see an error similar to the following error when running the deployment:
```text Virtual Machine Name: "devsap01app01": Code="OverconstrainedAllocationRequest" Message="Allocation failed. VM(s) with the following constraints cannot be allocated, because the condition is too restrictive. Please remove some constraints and try again. Constraints applied are:
If you see an error similar to the following message when running the deployment
ERROR! this task 'ansible.builtin.command' has extra params, which is only allowed in the following modules: set_fact, shell, include_tasks, win_shell, import_tasks, import_role, include, win_command, command, include_role, meta, add_host, script, group_by, raw, include_vars ```
-This error indicates that the task isn't supported by the version of Ansible that is installed. To resolve this issue, upgrade to the latest version of Ansible on the agent virtual machine.
+This error indicates that the version of Ansible installed on the agent doesn't support this task. To resolve this issue, upgrade to the latest version of Ansible on the agent virtual machine.
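+
+As a hedged sketch, if Ansible was installed with pip on the agent virtual machine (rather than from a distribution package), the upgrade could look like this:
+
+```bash
+# Upgrade Ansible for the current user and confirm the new version
+python3 -m pip install --upgrade --user ansible
+ansible --version
+```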
## Software download
If you see an error similar to the following message when running the Azure Pipe
##[error]Bash exited with code '2'. ```
-This error indicates that the configured personal access token doesn't have permissions to access the variable group. Ensure that the personal access token has the **Read & manage** permission for the variable group and that it hasn't expired. The personal access token is configured in the Azure DevOps pipeline variable groups either as 'PAT' in the control plane variable group or as WZ_PAT in the workload zone variable group.
+This error indicates that the configured personal access token doesn't have permissions to access the variable group. Ensure that the personal access token has the **Read & manage** permission for the variable group and that it's still valid. The personal access token is configured in the Azure DevOps pipeline variable groups either as 'PAT' in the control plane variable group or as 'WZ_PAT' in the workload zone variable group.
## Next step
sentinel Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/feature-availability.md
Title: Microsoft Sentinel feature support for Azure clouds
+ Title: Microsoft Sentinel feature support for Azure commercial/other clouds
description: This article describes feature availability in Microsoft Sentinel across different Azure environments.--++ Last updated 07/25/2023
-# Microsoft Sentinel feature support for Azure clouds
+# Microsoft Sentinel feature support for Azure commercial/other clouds
This article describes the features available in Microsoft Sentinel across different Azure environments. Features are listed as GA (generally available), public preview, or shown as not available.
sentinel Reference Systemconfig Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/reference-systemconfig-json.md
Last updated 06/03/2023
The *systemconfig.json* file is used to configure behavior of the data collector. Configuration options are grouped into several sections. This article lists options available and provides an explanation to the options. > [!IMPORTANT]
-> Microsoft Sentinel solution for SAP® applications uses the new *systemconfig.json* file from agent versions deployed on June 22 and later. For previous agent versions, you must still use the *[systemconfig.ini file](reference-systemconfig.md)*.
+> Microsoft Sentinel solution for SAP® applications uses the new *systemconfig.json* file for agent versions released on or after June 22, 2023. For previous agent versions, you must still use the *[systemconfig.ini file](reference-systemconfig.md)*.
## File structure
sentinel Reference Systemconfig https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/reference-systemconfig.md
The *systemconfig.ini* file is used to configure behavior of the data collector. Configuration options are grouped into several sections. This article lists options available and provides an explanation to the options. > [!IMPORTANT]
-> Microsoft Sentinel solution for SAP® applications uses the new *[systemconfig.json file](reference-systemconfig-json.md)* from agent versions deployed on June 22 and later. For previous agent versions, you must still use the *systemconfig.ini* file.
+> Microsoft Sentinel solution for SAP® applications uses the new *[systemconfig.json file](reference-systemconfig-json.md)* for agent versions released on or after June 22, 2023. For previous agent versions, you must still use the *systemconfig.ini* file.
> > If you update the agent version, the configuration file is automatically migrated.
update-manager Pre Post Events Common Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/pre-post-events-common-scenarios.md
You can view the status of the maintenance job from the ARG query mentioned abov
:::image type="content" source="./media/pre-post-events-common-scenarios/view-job-status.png" alt-text="Screenshot that shows how to insert the resource group, maintenance configuration." lightbox="./media/pre-post-events-common-scenarios/view-job-status.png"::: + ## Why was the scheduled run cancelled by the system?
If the user modifies the schedule run time after the pre-event has been triggere
> [!NOTE] > Azure Event Grid adheres to an at-least-once delivery paradigm. This implies that, in exceptional circumstances, there is a chance of the event handler being invoked more than once for a given event. Customers are advised to ensure that their event handler actions are idempotent. In other words, if the event handler is executed multiple times, it should not have any adverse effects. Implementing idempotency ensures the robustness of your application in the face of potential duplicate event invocations. --- ## Next steps - For an overview on [pre and post scenarios](pre-post-scripts-overview.md) - Manage the [pre and post maintenance configuration events](manage-pre-post-events.md)