Updates from: 10/31/2023 02:12:28
Service Microsoft Docs article Related commit history on GitHub Change details
ai-services Model Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/model-versions.md
description: Learn about model versions in Azure OpenAI. Previously updated : 10/13/2023 Last updated : 10/30/2023
keywords:
Azure OpenAI Service is committed to providing the best generative AI models for customers. As part of this commitment, Azure OpenAI Service regularly releases new model versions to incorporate the latest features and improvements from OpenAI. In particular, the GPT-3.5 Turbo and GPT-4 models see regular updates with new features. For example, versions 0613 of GPT-3.5 Turbo and GPT-4 introduced function calling. Function calling is a popular feature that allows the model to create structured outputs that can be used to call external tools.+ ## How model versions work We want to make it easy for customers to stay up to date as models improve. Customers can choose to start with a particular version and to automatically update as new versions are released.
When a customer deploys GPT-3.5-Turbo and GPT-4 on Azure OpenAI Service, the sta
Customers can also deploy a specific version like GPT-4 0314 or GPT-4 0613 and choose an update policy, which can include the following options:
-* Deployments set to **Auto-update to default** automatically update to use the new default version.
-* Deployments set to **Upgrade when expired** automatically update when its current version is retired.
+* Deployments set to **Auto-update to default** automatically update to use the new default version.
+* Deployments set to **Upgrade when expired** automatically update when its current version is retired.
* Deployments that are set to **No Auto Upgrade** stop working when the model is retired.
+### VersionUpgradeOption
+
+You can check which model upgrade options are set for previously deployed models in [Azure OpenAI Studio](https://oai.azure.com). Select **Deployments** > under the deployment name column, select one of the deployment names highlighted in blue > the **Properties** pane contains a value for **Version update policy**.
+
+The corresponding property can also be accessed via [REST](../how-to/working-with-models.md#model-deployment-upgrade-configuration), [Azure PowerShell](/powershell/module/az.cognitiveservices/get-azcognitiveservicesaccountdeployment), and [Azure CLI](/cli/azure/cognitiveservices/account/deployment#az-cognitiveservices-account-deployment-show).
+
+|Option| Read | Update |
+||||
+| [REST](../how-to/working-with-models.md#model-deployment-upgrade-configuration) | Yes. If `versionUpgradeOption` is not returned, it is `null`. | Yes |
+| [Azure PowerShell](/powershell/module/az.cognitiveservices/get-azcognitiveservicesaccountdeployment) | Yes. `VersionUpgradeOption` can be checked for `$null`. | Yes |
+| [Azure CLI](/cli/azure/cognitiveservices/account/deployment#az-cognitiveservices-account-deployment-show) | Yes. It shows `null` if `versionUpgradeOption` is not set.| *No.* It is currently not possible to update the version upgrade option.|
+
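For example, a minimal Azure CLI read might look like the following sketch (the resource group, account, and deployment names are placeholders to substitute with your own; an empty result means the property is `null`):

```azurecli-interactive
az cognitiveservices account deployment show \
    --resource-group <resource-group> \
    --name <azure-openai-resource-name> \
    --deployment-name <deployment-name> \
    --query "properties.versionUpgradeOption" \
    --output tsv
```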
+> [!NOTE]
+> `null` is equivalent to `AutoUpgradeWhenExpired`.
+
+**Azure PowerShell**
+
+Review the Azure PowerShell [getting started guide](/powershell/azure/get-started-azureps) to install Azure PowerShell locally or you can use the [Azure Cloud Shell](/azure/cloud-shell/overview).
+
+The following steps demonstrate checking the `VersionUpgradeOption` property and then updating it:
+
+```powershell
+# Step 1: Get the deployment
+$deployment = Get-AzCognitiveServicesAccountDeployment -ResourceGroupName {ResourceGroupName} -AccountName {AccountName} -Name {DeploymentName}
+
+# Step 2: Show the deployment's VersionUpgradeOption
+$deployment.Properties.VersionUpgradeOption
+
+# VersionUpgradeOption can be null - one way to check is
+$null -eq $deployment.Properties.VersionUpgradeOption
+
+# Step 3: Update the deployment's VersionUpgradeOption
+$deployment.Properties.VersionUpgradeOption = "NoAutoUpgrade"
+New-AzCognitiveServicesAccountDeployment -ResourceGroupName {ResourceGroupName} -AccountName {AccountName} -Name {DeploymentName} -Properties $deployment.Properties -Sku $deployment.Sku
+
+# Repeat steps 1 and 2 to confirm the change.
+# If you're not sure of the deployment name, use this command to show all deployments under an account:
+Get-AzCognitiveServicesAccountDeployment -ResourceGroupName {ResourceGroupName} -AccountName {AccountName}
+```
+ ## How Azure updates OpenAI models Azure works closely with OpenAI to release new model versions. When a new version of a model is released, a customer can immediately test it in new deployments. Azure publishes when new versions of models are released, and notifies customers at least two weeks before a new version becomes the default version of the model. Azure also maintains the previous major version of the model until its retirement date, so customers can switch back to it if desired.
ai-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/models.md
You can see the token context length supported by each model in the [model summa
To learn more about how to interact with GPT-3.5 Turbo and the Chat Completions API, check out our [in-depth how-to](../how-to/chatgpt.md).
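As a quick illustration, a Chat Completions request is a simple REST call. The following sketch assumes a deployment named `gpt-35-turbo`, placeholder resource and key values, and the `2023-05-15` API version:

```bash
curl "https://<your-resource-name>.openai.azure.com/openai/deployments/gpt-35-turbo/chat/completions?api-version=2023-05-15" \
  -H "Content-Type: application/json" \
  -H "api-key: <your-api-key>" \
  -d '{
        "messages": [
          {"role": "system", "content": "You are a helpful assistant."},
          {"role": "user", "content": "Summarize what function calling is in one sentence."}
        ]
      }'
```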
-## Embeddings models
+## Embeddings
> [!IMPORTANT] > We strongly recommend using `text-embedding-ada-002 (Version 2)`. This model/version provides parity with OpenAI's `text-embedding-ada-002`. To learn more about the improvements offered by this model, please refer to [OpenAI's blog post](https://openai.com/blog/new-and-improved-embedding-model). Even if you are currently using Version 1 you should migrate to Version 2 to take advantage of the latest weights/updated token limit. Version 1 and Version 2 are not interchangeable, so document embedding and document search must be done using the same version of the model.
You can also use the Whisper model via Azure AI Speech [batch transcription](../
### GPT-4 models
-GPT-4 and GPT-4-32k are now available to all Azure OpenAI Service customers. Availability varies by region. If you don't see GPT-4 in your region, please check back later.
+GPT-4 and GPT-4-32k models are now available to all Azure OpenAI Service customers. Availability varies by region. If you don't see GPT-4 in your region, please check back later.
These models can only be used with the Chat Completion API. GPT-4 version 0314 is the first version of the model released. Version 0613 is the second version of the model and adds function calling support.
-| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) |
-| | | | | |
-| `gpt-4` <sup>2</sup> (0314) | East US<sup>1</sup>, France Central<sup>1</sup> | N/A<sup>3</sup> | 8,192 | September 2021 |
-| `gpt-4-32k` <sup>2</sup> (0314) | East US<sup>1</sup>, France Central<sup>1</sup> | N/A<sup>3</sup> | 32,768 | September 2021 |
-| `gpt-4` (0613) | Australia East<sup>1</sup>, Canada East, East US<sup>1</sup>, East US 2<sup>1</sup>, France Central<sup>1</sup>, Japan East<sup>1</sup>, Sweden Central, Switzerland North, UK South<sup>1</sup> | N/A<sup>3</sup> | 8,192 | September 2021 |
-| `gpt-4-32k` (0613) | Australia East<sup>1</sup>, Canada East, East US<sup>1</sup>, East US 2<sup>1</sup>, France Central<sup>1</sup>, Japan East<sup>1</sup>, Sweden Central, Switzerland North, UK South<sup>1</sup> | N/A<sup>3</sup> | 32,768 | September 2021 |
+| Model ID | Max Request (tokens) | Training Data (up to) |
+| | :: | :: |
+| `gpt-4` (0314) | 8,192 | Sep 2021 |
+| `gpt-4-32k` (0314) | 32,768 | Sep 2021 |
+| `gpt-4` (0613) | 8,192 | Sep 2021 |
+| `gpt-4-32k` (0613) | 32,768 | Sep 2021 |
-<sup>1</sup> Due to high demand, availability is limited in the region<br>
-<sup>2</sup> Version `0314` of gpt-4 and gpt-4-32k will be retired no earlier than July 5, 2024. See [model updates](../how-to/working-with-models.md#model-updates) for model upgrade behavior.<br>
-<sup>3</sup> Fine-tuning is not supported for GPT-4 models.
+> [!NOTE]
+> Any region where GPT-4 is listed as available will always have access to both the 8K and 32K versions of the model.
+
+### GPT-4 model availability
+
+| Model Availability | gpt-4 (0314) | gpt-4 (0613) |
+||:|:|
+| Available to all subscriptions with Azure OpenAI access | | Canada East <br> Sweden Central <br> Switzerland North |
+| Available to subscriptions with current access to the model version in the region | East US <br> France Central <br> South Central US <br> UK South | Australia East <br> East US <br> East US 2 <br> France Central <br> Japan East <br> UK South |
### GPT-3.5 models
GPT-3.5 Turbo is used with the Chat Completion API. GPT-3.5 Turbo (0301) can als
GPT-3.5 Turbo version 0301 is the first version of the model released. Version 0613 is the second version of the model and adds function calling support.
-| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) |
-| | | - | -- | - |
-| `gpt-35-turbo`<sup>1</sup> (0301) | East US, France Central, South Central US, UK South, West Europe | N/A | 4,096 | Sep 2021 |
-| `gpt-35-turbo` (0613) | Australia East, Canada East, East US, East US 2, France Central, Japan East, North Central US, Sweden Central, Switzerland North, UK South | North Central US, Sweden Central | 4,096 | Sep 2021 |
-| `gpt-35-turbo-16k` (0613) | Australia East, Canada East, East US, East US 2, France Central, Japan East, North Central US, Sweden Central, Switzerland North, UK South | N/A | 16,384 | Sep 2021 |
-| `gpt-35-turbo-instruct` (0914) | East US, Sweden Central | N/A | 4,097 | Sep 2021 |
+> [!NOTE]
+> Version `0301` of `gpt-35-turbo` will be retired no earlier than July 5, 2024. See [model updates](../how-to/working-with-models.md#model-updates) for model upgrade behavior.
+
+### GPT-3.5-Turbo model availability
+
+| Model ID | Model Availability | Max Request (tokens) | Training Data (up to) |
+| | -- |::|:-:|
+| `gpt-35-turbo`<sup>1</sup> (0301) | East US <br> France Central <br> South Central US <br> UK South <br> West Europe | 4096 | Sep 2021 |
+| `gpt-35-turbo` (0613) | Australia East <br> Canada East <br> East US <br> East US 2 <br> France Central <br> Japan East <br> North Central US <br> Sweden Central <br> Switzerland North <br> UK South | 4096 | Sep 2021 |
+| `gpt-35-turbo-16k` (0613) | Australia East <br> Canada East <br> East US <br> East US 2 <br> France Central <br> Japan East <br> North Central US <br> Sweden Central <br> Switzerland North<br> UK South | 16,384 | Sep 2021 |
+| `gpt-35-turbo-instruct` (0914) | East US <br> Sweden Central | 4097 |Sep 2021 |
-<sup>1</sup> Version `0301` of gpt-35-turbo will be retired no earlier than July 5, 2024. See [model updates](../how-to/working-with-models.md#model-updates) for model upgrade behavior.
+<sup>1</sup> This model accepts requests greater than 4096 tokens. We don't recommend exceeding the 4096 input token limit, because newer versions of the model are capped at 4096 tokens. If you encounter issues when exceeding 4096 input tokens with this model, be aware that this configuration isn't officially supported.
### Embeddings models
These models can only be used with Embedding API requests.
> [!NOTE] > We strongly recommend using `text-embedding-ada-002 (Version 2)`. This model/version provides parity with OpenAI's `text-embedding-ada-002`. To learn more about the improvements offered by this model, please refer to [OpenAI's blog post](https://openai.com/blog/new-and-improved-embedding-model). Even if you are currently using Version 1 you should migrate to Version 2 to take advantage of the latest weights/updated token limit. Version 1 and Version 2 are not interchangeable, so document embedding and document search must be done using the same version of the model.
-| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) | Output dimensions |
-| | | | | |
-| text-embedding-ada-002 (version 2) | Australia East, Canada East, East US, East US2, France Central, Japan East, North Central US, South Central US, Switzerland North, UK South, West Europe | N/A |8,191 | Sep 2021 | 1536 |
-| text-embedding-ada-002 (version 1) | East US, South Central US, West Europe | N/A |2,046 | Sep 2021 | 1536 |
+| Model ID | Model Availability | Max Request (tokens) | Training Data (up to) | Output Dimensions |
+||| ::|::|::|
+| `text-embedding-ada-002` (version 2) | Australia East <br> Canada East <br> East US <br> East US2 <br> France Central <br> Japan East <br> North Central US <br> South Central US <br> Switzerland North <br> UK South <br> West Europe |8,191 | Sep 2021 | 1536 |
+| `text-embedding-ada-002` (version 1) | East US <br> South Central US <br> West Europe |2,046 | Sep 2021 | 1536 |
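For illustration, a hedged sketch of an Embeddings API call follows; it assumes a deployment named `text-embedding-ada-002`, placeholder resource and key values, and the `2023-05-15` API version:

```bash
curl "https://<your-resource-name>.openai.azure.com/openai/deployments/text-embedding-ada-002/embeddings?api-version=2023-05-15" \
  -H "Content-Type: application/json" \
  -H "api-key: <your-api-key>" \
  -d '{"input": "Sample text to embed"}'
```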
### DALL-E models (Preview)
-| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (characters) | Training Data (up to) |
-| | | | | |
-| dalle2 | East US | N/A | 1000 | N/A |
+| Model ID | Feature Availability | Max Request (characters) |
+| | | :: |
+| dalle2 | East US | 1000 |
### Fine-tuning models (Preview)
These models can only be used with Embedding API requests.
`gpt-35-turbo-0613` - fine-tuning of this model is limited to a subset of regions, and is not available in every region where the base model is available. | Model ID | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) |
-| | | | | |
-| `babbage-002` | North Central US, Sweden Central | 16,384 | Sep 2021 |
-| `davinci-002` | North Central US, Sweden Central | 16,384 | Sep 2021 |
-| `gpt-35-turbo` (0613) | North Central US, Sweden Central | 4096 | Sep 2021 |
+| | | :: | :: |
+| `babbage-002` | North Central US <br> Sweden Central | 16,384 | Sep 2021 |
+| `davinci-002` | North Central US <br> Sweden Central | 16,384 | Sep 2021 |
+| `gpt-35-turbo` (0613) | North Central US <br> Sweden Central | 4096 | Sep 2021 |
### Whisper models (Preview)
-| Model ID | Base model Regions | Fine-Tuning Regions | Max Request (audio file size) | Training Data (up to) |
-| | | | | |
-| whisper | North Central US, West Europe | N/A | 25 MB | N/A |
+| Model ID | Model Availability | Max Request (audio file size) |
+| | | :: |
+| `whisper` | North Central US <br> West Europe | 25 MB |
## Next steps
ai-services Use Your Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/use-your-data.md
Azure OpenAI on your data provides several search options you can use when you a
* [Keyword search](/azure/search/search-lucene-query-architecture) * [Semantic search](/azure/search/semantic-search-overview)
-* [Vector search](/azure/search/vector-search-overview) using Ada [embedding](./understand-embeddings.md) models, available in [select regions](models.md#embeddings-models-1).
+* [Vector search](/azure/search/vector-search-overview) using Ada [embedding](./understand-embeddings.md) models, available in [select regions](models.md#embeddings-models).
To enable vector search, you will need a `text-embedding-ada-002` deployment in your Azure OpenAI resource. Select your embedding deployment when connecting your data, then select one of the vector search types under **Data management**.
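As a rough sketch, the deployment could be created with the Azure CLI as follows; the resource names are placeholders, and the SKU and capacity values are assumptions to adapt to your environment (older CLI versions may use different scale flags):

```azurecli-interactive
az cognitiveservices account deployment create \
    --resource-group <resource-group> \
    --name <azure-openai-resource-name> \
    --deployment-name text-embedding-ada-002 \
    --model-name text-embedding-ada-002 \
    --model-version "2" \
    --model-format OpenAI \
    --sku-name Standard \
    --sku-capacity 1
```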
ai-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/reference.md
The following parameters can be used inside of the `parameters` field inside of
| `filter` | string | Optional | null | The filter pattern used for [restricting access to sensitive documents](./concepts/use-your-data.md#document-level-access-control) | `embeddingEndpoint` | string | Optional | null | The endpoint URL for an Ada embedding model deployment, generally of the format `https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/embeddings?api-version=2023-05-15`. Use with the `embeddingKey` parameter for [vector search](./concepts/use-your-data.md#search-options) outside of private networks and private endpoints. | | `embeddingKey` | string | Optional | null | The API key for an Ada embedding model deployment. Use with `embeddingEndpoint` for [vector search](./concepts/use-your-data.md#search-options) outside of private networks and private endpoints. |
-| `embeddingDeploymentName` | string | Optional | null | The Ada embedding model deployment name within the same Azure OpenAI resource. Used instead of `embeddingEndpoint` and `embeddingKey` for [vector search](./concepts/use-your-data.md#search-options). Should only be used when both the `embeddingEndpoint` and `embeddingKey` parameters are defined. When this parameter is provided, Azure OpenAI on your data will use an internal call to evaluate the Ada embedding model, rather than calling the Azure OpenAI endpoint. This enables you to use vector search in private networks and private endpoints. Billing remains the same whether this parameter is defined or not. Available in regions where embedding models are [available](./concepts/models.md#embeddings-models-1) starting in API versions `2023-06-01-preview` and later.|
+| `embeddingDeploymentName` | string | Optional | null | The Ada embedding model deployment name within the same Azure OpenAI resource. Used instead of `embeddingEndpoint` and `embeddingKey` for [vector search](./concepts/use-your-data.md#search-options). Should only be used when both the `embeddingEndpoint` and `embeddingKey` parameters are defined. When this parameter is provided, Azure OpenAI on your data will use an internal call to evaluate the Ada embedding model, rather than calling the Azure OpenAI endpoint. This enables you to use vector search in private networks and private endpoints. Billing remains the same whether this parameter is defined or not. Available in regions where embedding models are [available](./concepts/models.md#embeddings-models) starting in API versions `2023-06-01-preview` and later.|
### Start an ingestion job
aks Concepts Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-network.md
Nodes use the [Azure CNI][cni-networking] Kubernetes plugin.
For more information, see [Configure Azure CNI for an AKS cluster][aks-configure-advanced-networking].
-### Azure CNI overlay networking
+### Azure CNI Overlay networking
-[Azure CNI Overlay][azure-cni-overlay] represents an evolution of Azure CNI, addressing scalability and planning challenges arising from the assignment of VNet IPs to pods. It achieves this by assigning private CIDR IPs to pods, which are separate from the VNet and can be reused across multiple clusters. Unlike Kubenet, where the traffic dataplane is handled by the Linux kernel networking stack of the Kubernetes nodes, Azure CNI Overlay delegates this responsibility to Azure networking.
+[Azure CNI Overlay][azure-cni-overlay] represents an evolution of Azure CNI, addressing scalability and planning challenges arising from the assignment of VNet IPs to pods. It achieves this by assigning private CIDR IPs to pods, which are separate from the VNet and can be reused across multiple clusters. Additionally, Azure CNI Overlay can scale beyond the 400 node limit enforced in Kubenet clusters. Azure CNI Overlay is the recommended option for most clusters.
### Azure CNI Powered by Cilium
-In [Azure CNI Powered by Cilium][azure-cni-powered-by-cilium], the data plane for Pods is managed by the Linux kernel of the Kubernetes nodes. Unlike Kubenet, which faces scalability and performance issues with the Linux kernel networking stack, [Cilium][https://cilium.io/] bypasses the Linux kernel networking stack and instead leverages eBPF programs in the Linux Kernel to accelerate packet processing for faster performance.
+[Azure CNI Powered by Cilium][azure-cni-powered-by-cilium] uses [Cilium](https://cilium.io) to provide high-performance networking, observability, and network policy enforcement. It integrates natively with [Azure CNI Overlay][azure-cni-overlay] for scalable IP address management (IPAM).
+
+Additionally, Cilium enforces network policies by default, without requiring a separate network policy engine. Using eBPF programs and a more efficient API object structure, Azure CNI Powered by Cilium can scale beyond [Azure Network Policy Manager's limits of 250 nodes / 20K pods][use-network-policies].
+
+Azure CNI Powered by Cilium is the recommended option for clusters that require network policy enforcement.
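To make these options concrete, here's a hedged sketch of creating a cluster that combines Azure CNI Overlay with the Cilium dataplane; the names and pod CIDR are placeholder assumptions:

```azurecli-interactive
az aks create \
    --resource-group <resource-group> \
    --name <cluster-name> \
    --location <region> \
    --network-plugin azure \
    --network-plugin-mode overlay \
    --network-dataplane cilium \
    --pod-cidr 192.168.0.0/16 \
    --generate-ssh-keys
```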
### Bring your own CNI
aks Keda About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-about.md
# Simplified application autoscaling with Kubernetes Event-driven Autoscaling (KEDA) add-on (Preview)
-Kubernetes Event-driven Autoscaling (KEDA) is a single-purpose and lightweight component that strives to make application autoscaling simple and is a CNCF Incubation project.
+Kubernetes Event-driven Autoscaling (KEDA) is a single-purpose and lightweight component that strives to make application autoscaling simple and is a CNCF graduated project.
It applies event-driven autoscaling to scale your application to meet demand in a sustainable and cost-efficient manner with scale-to-zero.
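As an illustration, the add-on can be enabled on an existing cluster with a single flag. The following is a sketch with placeholder names; because the add-on is in preview, the `aks-preview` CLI extension and feature registration may also be required:

```azurecli-interactive
# Enable the KEDA add-on on an existing AKS cluster
az aks update \
    --resource-group <resource-group> \
    --name <cluster-name> \
    --enable-keda
```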
aks Monitor Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/monitor-aks.md
AKS generates the same kinds of monitoring data as other Azure resources that ar
Source | Description | |:|:|
-| Platform metrics | [Platform metrics](monitor-aks-reference.md#metrics) are automatically collected for AKS clusters at no cost. You can analyze these metrics with [metrics explorer](../azure-monitor/essentials/metrics-getting-started.md) or use them for [metric alerts](../azure-monitor/alerts/alerts-types.md#metric-alerts). |
+| Platform metrics | [Platform metrics](monitor-aks-reference.md#metrics) are automatically collected for AKS clusters at no cost. You can analyze these metrics with [metrics explorer](../azure-monitor/essentials/analyze-metrics.md) or use them for [metric alerts](../azure-monitor/alerts/alerts-types.md#metric-alerts). |
| Prometheus metrics | When you [enable metric scraping](../azure-monitor/containers/prometheus-metrics-enable.md) for your cluster, [Prometheus metrics](../azure-monitor/containers/prometheus-metrics-scrape-default.md) are collected by [Azure Monitor managed service for Prometheus](../azure-monitor/essentials/prometheus-metrics-overview.md) and stored in an [Azure Monitor workspace](../azure-monitor/essentials/azure-monitor-workspace-overview.md). Analyze them with [prebuilt dashboards](../azure-monitor/visualize/grafana-plugin.md#use-out-of-the-box-dashboards) in [Azure Managed Grafana](../managed-grafan). | | Activity logs | [Activity log](monitor-aks-reference.md) is collected automatically for AKS clusters at no cost. These logs track information such as when a cluster is created or has a configuration change. Send the [Activity log to a Log Analytics workspace](../azure-monitor/essentials/activity-log.md#send-to-log-analytics-workspace) to analyze it with your other log data. | | Resource logs | [Control plane logs](monitor-aks-reference.md#resource-logs) for AKS are implemented as resource logs. [Create a diagnostic setting](#resource-logs) to send them to [Log Analytics workspace](../azure-monitor/logs/log-analytics-workspace-overview.md) where you can analyze and alert on them with log queries in [Log Analytics](../azure-monitor/logs/log-analytics-overview.md). |
-| Container insights | Container insights collects various logs and performance data from a cluster including stdout/stderr streams and stores them in a [Log Analytics workspace](../azure-monitor/logs/log-analytics-workspace-overview.md) and [Azure Monitor Metrics](../azure-monitor/essentials/data-platform-metrics.md). Analyze this data with views and workbooks included with Container insights or with [Log Analytics](../azure-monitor/logs/log-analytics-overview.md) and [metrics explorer](../azure-monitor/essentials/metrics-getting-started.md). |
+| Container insights | Container insights collects various logs and performance data from a cluster including stdout/stderr streams and stores them in a [Log Analytics workspace](../azure-monitor/logs/log-analytics-workspace-overview.md) and [Azure Monitor Metrics](../azure-monitor/essentials/data-platform-metrics.md). Analyze this data with views and workbooks included with Container insights or with [Log Analytics](../azure-monitor/logs/log-analytics-overview.md) and [metrics explorer](../azure-monitor/essentials/analyze-metrics.md). |
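For example, the same platform metrics can also be retrieved programmatically. The following sketch assumes the `node_cpu_usage_percentage` metric name and uses placeholder resource identifiers:

```azurecli-interactive
az monitor metrics list \
    --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ContainerService/managedClusters/<cluster-name>" \
    --metric node_cpu_usage_percentage \
    --interval PT5M \
    --output table
```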
## Monitoring overview page in Azure portal
-The **Monitoring** tab on the **Overview** page offers a quick way to get started viewing monitoring data in the Azure portal for each AKS cluster. This includes graphs with common metrics for the cluster separated by node pool. Click on any of these graphs to further analyze the data in [metrics explorer](../azure-monitor/essentials/metrics-getting-started.md).
+The **Monitoring** tab on the **Overview** page offers a quick way to get started viewing monitoring data in the Azure portal for each AKS cluster. This includes graphs with common metrics for the cluster separated by node pool. Click on any of these graphs to further analyze the data in [metrics explorer](../azure-monitor/essentials/analyze-metrics.md).
The **Overview** page also includes links to [Managed Prometheus](#integrations) and [Container insights](#integrations) for the current cluster. If you haven't already enabled these tools, you'll be prompted to do so. You may also see a banner at the top of the screen recommending that you enable additional features to improve monitoring of your cluster.
aks Network Observability Managed Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/network-observability-managed-cli.md
az aks get-credentials --name myAKSCluster --resource-group myResourceGroup
1. Select **Dashboards** from the left navigation menu, open **Kubernetes / Networking** dashboard under **Managed Prometheus** folder.
-1. Check if the Metrics in **Kubernetes / Networking** Grafana dashboard are visible. If metrics aren't shown, change the time range to the last 15 minutes in the top right.
+1. Check if the metrics in the **Kubernetes / Networking** Grafana dashboard are visible. If metrics aren't shown, change the time range to the last 15 minutes in the dropdown box at the top right.
# [**Cilium**](#tab/cilium) > [!NOTE] > The following section requires deployments of Azure managed Prometheus and Grafana.
+>[!WARNING]
+> The file must be named **`prometheus-config`**, with no extension such as .yaml or .txt.
+ 1. Use the following example to create a file named **`prometheus-config`**. Copy the code in the example into the file created. ```yaml
az aks get-credentials --name myAKSCluster --resource-group myResourceGroup
1. Azure Monitor pods should restart themselves. If they don't, perform a rollout restart with the following command:
-```azurecli-interactive
+ ```azurecli-interactive
kubectl rollout restart deploy -n kube-system ama-metrics ```
az aks get-credentials --name myAKSCluster --resource-group myResourceGroup
kubectl port-forward -n kube-system $(kubectl get po -n kube-system -l rsName=ama-metrics -oname | head -n 1) 9090:9090 ```
-1. In **Targets** of prometheus, verify the **cilium-pods** are present.
+1. Open `http://localhost:9090` in your browser, navigate to **Status** > **Targets**, and verify that the **cilium-pods** targets are present and their state is **Up**.
-1. Sign in to Grafana and import dashboards with the following ID [16611-cilium-metrics](https://grafana.com/grafana/dashboards/16611-cilium-metrics/).
+1. Sign in to Azure Managed Grafana and import the dashboard with ID [16611](https://grafana.com/grafana/dashboards/16611-cilium-metrics/). Then select **Dashboards** from the left navigation menu and open the **Kubernetes / Networking** dashboard under the **Managed Prometheus** folder. Metrics should be visible in both of these dashboards.
analysis-services Analysis Services Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-monitor.md
# Monitor server metrics
-Analysis Services provides metrics in Azure Metrics Explorer, a free tool in the portal, to help you monitor the performance and health of your servers. For example, monitor memory and CPU usage, number of client connections, and query resource consumption. Analysis Services uses the same monitoring framework as most other Azure services. To learn more, see [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md).
+Analysis Services provides metrics in Azure Metrics Explorer, a free tool in the portal, to help you monitor the performance and health of your servers. For example, monitor memory and CPU usage, number of client connections, and query resource consumption. Analysis Services uses the same monitoring framework as most other Azure services. To learn more, see [Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md).
To perform more in-depth diagnostics, track performance, and identify trends across multiple service resources in a resource group or subscription, use [Azure Monitor](../azure-monitor/overview.md). Azure Monitor (service) may result in a billable service.
Use this table to determine which metrics are best for your monitoring scenario.
## Next steps [Azure Monitor overview](../azure-monitor/overview.md)
-[Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md)
+[Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md)
[Metrics in Azure Monitor REST API](/rest/api/monitor/metrics)
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md
Title: Migrate to App Service Environment v3 by using the migration feature
description: Overview of the migration feature for migration to App Service Environment v3 Previously updated : 10/06/2023 Last updated : 10/30/2023
As in the IP generation step, you can't scale, modify your App Service Environme
There's no cost to migrate your App Service Environment. You stop being charged for your previous App Service Environment as soon as it shuts down during the migration process, and you begin getting charged for your new App Service Environment v3 as soon as it's deployed. For more information about App Service Environment v3 pricing, see the [pricing details](overview.md#pricing).
-When you migrate to App Service Environment v3 from previous versions, there are scenarios that you should consider that can potentially reduce your monthly cost.
+When you migrate to App Service Environment v3 from previous versions, there are scenarios that you should consider that can potentially reduce your monthly cost. In addition to the following scenarios, consider [reservations](../../cost-management-billing/reservations/reservation-discount-app-service.md#how-reservation-discounts-apply-to-isolated-v2-instances) and [savings plans](../../cost-management-billing/savings-plan/savings-plan-compute-overview.md) to further reduce your costs.
### Scale down your App Service plans
app-service Upgrade To Asev3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/upgrade-to-asev3.md
description: Take the first steps toward upgrading to App Service Environment v3
Previously updated : 07/17/2023 Last updated : 10/30/2023 # Upgrade to App Service Environment v3
This page is your one-stop shop for guidance and resources to help you upgrade s
|**1**|**Pre-flight check**|Determine if your environment meets the prerequisites to automate your upgrade using the migration feature.<br><br>- [Automated upgrade using the migration feature](migrate.md)<br><br>If not, you can upgrade manually.<br><br>- [Manual migration](migration-alternatives.md)| |**2**|**Migrate**|Based on results of your review, either upgrade using the migration feature or follow the manual steps.<br><br>- [Use the automated migration feature](how-to-migrate.md)<br>- [Migrate manually](migration-alternatives.md)| |**3**|**Testing and troubleshooting**|Upgrading using the automated migration feature requires a 3-6 hour service window. Support teams are monitoring upgrades to ensure success. If you have a support plan and you need technical help, create a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).|
-|**4**|**Optimize your App Service plans**|Once your upgrade is complete, you can optimize the App Service plans for additional benefits.<br><br>Review the autoselected Isolated v2 SKU sizes and scale up or scale down your App Service plans as needed.<br><br>- [Scale down your App Service plans](../manage-scale-up.md)<br>- [App Service Environment post-migration scaling guidance](migrate.md#pricing)<br><br>Check out the pricing estimates if needed.<br><br>- [App Service pricing page](https://azure.microsoft.com/pricing/details/app-service/windows/)<br>- [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator)|
+|**4**|**Optimize your App Service plans**|Once your upgrade is complete, you can optimize the App Service plans for additional benefits.<br><br>Review the autoselected Isolated v2 SKU sizes and scale up or scale down your App Service plans as needed.<br><br>- [Scale down your App Service plans](../manage-scale-up.md)<br>- [App Service Environment post-migration scaling guidance](migrate.md#pricing)<br><br>Explore reserved instance pricing, savings plans, and check out the pricing estimates if needed.<br><br>- [App Service pricing page](https://azure.microsoft.com/pricing/details/app-service/windows/)<br>- [How reservation discounts apply to Isolated v2 instances](../../cost-management-billing/reservations/reservation-discount-app-service.md#how-reservation-discounts-apply-to-isolated-v2-instances)<br>- [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator)|
|**5**|**Learn more**|Join the [free live webinar](https://developer.microsoft.com/en-us/reactor/events/20417) with FastTrack Architects.<br><br>Need more help? [Submit a request](https://cxp.azure.com/nominationportal/nominationform/fasttrack) to contact FastTrack.<br><br>[Frequently asked questions](migrate.md#frequently-asked-questions)<br><br>[Community support](https://aka.ms/asev1v2retirement)| ## Additional information
app-service Monitor App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/monitor-app-service.md
The metrics and logs you can collect are discussed in the following sections.
## Analyzing metrics
-You can analyze metrics for *App Service* with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool.
+You can analyze metrics for *App Service* with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md) for details on using this tool.
For a list of platform metrics collected for App Service, see [Monitoring App Service data reference metrics](monitor-app-service-reference.md#metrics)
application-gateway Monitor Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/monitor-application-gateway.md
The metrics and logs you can collect are discussed in the following sections.
<!-- REQUIRED. Please keep headings in this order If you don't support metrics, say so. Some services may be only onboarded to logs -->
-You can analyze metrics for Azure Application Gateway with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool.
+You can analyze metrics for Azure Application Gateway with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md) for details on using this tool.
<!-- Point to the list of metrics available in your monitor-service-reference article. --> For a list of the platform metrics collected for Azure Application Gateway, see [Monitoring Application Gateway data reference metrics](monitor-application-gateway-reference.md#metrics).
azure-app-configuration Monitor App Configuration Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/monitor-app-configuration-reference.md
App Configuration uses the [AACHttpRequest Table](/azure/azure-monitor/refere
|Category |string |The log category of the event, always HttpRequest. |ClientIPAddress | string| IP Address of the client that sent the request. |ClientRequestId| string| Request ID provided by client.
-|CorrelationId| string| GUID for correlated logs.
+|CorrelationId| string| An ID provided by the client to correlate multiple requests.
|DurationMs| int |The duration of the operation in milliseconds.
+|HitCount| int |The number of requests that the record is associated with.
|Method string| HTTP| Http request method (get or post) |RequestId| string| Unique request ID generated by server. |RequestLength| int |Length in bytes of the HTTP request.
azure-app-configuration Monitor App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/monitor-app-configuration.md
When you create a diagnostic setting, you specify which categories of logs to co
## Analyzing metrics
-You can analyze metrics for App Configuration with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool. For App Configuration, the following metrics are collected:
+You can analyze metrics for App Configuration with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md) for details on using this tool. For App Configuration, the following metrics are collected:
* Http Incoming Request Count * Http Incoming Request Duration
Following are sample queries that you can use to help you monitor your App Confi
```Kusto    AACHttpRequest | where TimeGenerated > ago(3d)
- | summarize requestCount= count() by ClientIPAddress
+ | summarize requestCount=sum(HitCount) by ClientIPAddress
| order by requestCount desc ```
Following are sample queries that you can use to help you monitor your App Confi
```Kusto    AACHttpRequest | where TimeGenerated > ago(3d)
- | summarize requestCount=count() by StatusCode
+ | summarize requestCount=sum(HitCount) by StatusCode
| order by requestCount desc | render piechart ```
Following are sample queries that you can use to help you monitor your App Confi
AACHttpRequest | where TimeGenerated > ago(14d) | extend Day = startofday(TimeGenerated)
- | summarize requestcount=count() by Day
+ | summarize requestcount=sum(HitCount) by Day
| order by Day desc ```
The following table lists common and recommended alert rules for App C
| Alert type | Condition | Description  | |:|:|:|
-|Rate Limit on Http Requests | Status Code = 429  | The configuration store has exceeded the [hourly request quota](./faq.yml#are-there-any-limits-on-the-number-of-requests-made-to-app-configuration). Upgrade to a standard store or follow the [best practices](./howto-best-practices.md#reduce-requests-made-to-app-configuration) to optimize your usage. |
+|Request quota usage exceeded | RequestQuotaUsage >= 100 | The configuration store has exceeded the [request quota usage](./faq.yml#are-there-any-limits-on-the-number-of-requests-made-to-app-configuration). Upgrade to a standard tier store or follow the [best practices](./howto-best-practices.md#reduce-requests-made-to-app-configuration) to optimize your usage. |
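A hedged sketch of creating such an alert rule with the Azure CLI follows; the resource IDs and names are placeholders, and the aggregation and time windows are assumptions to adjust as needed:

```azurecli-interactive
az monitor metrics alert create \
    --name "AppConfig-RequestQuotaUsage" \
    --resource-group <resource-group> \
    --scopes "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.AppConfiguration/configurationStores/<store-name>" \
    --condition "max RequestQuotaUsage >= 100" \
    --window-size 5m \
    --evaluation-frequency 5m \
    --description "Request quota usage reached 100 percent"
```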
azure-arc Prepare Extended Security Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prepare-extended-security-updates.md
Title: How to prepare to deliver Extended Security Updates for Windows Server 2012 through Azure Arc description: Learn how to prepare to deliver Extended Security Updates for Windows Server 2012 through Azure Arc. Previously updated : 10/18/2023 Last updated : 10/30/2023
Other Azure services through Azure Arc-enabled servers are available as well, wi
## Prepare delivery of ESUs
-To prepare for this new offer, you need to plan and prepare to onboard your machines to Azure Arc-enabled servers through the installation of the [Azure Connected Machine agent](agent-overview.md) (version 1.34 or higher) and establishing a connection to Azure.
+To prepare for this new offer, you need to plan and prepare to onboard your machines to Azure Arc-enabled servers through the installation of the [Azure Connected Machine agent](agent-overview.md) (version 1.34 or higher) and establishing a connection to Azure. Windows Server 2012 Extended Security Updates supports Windows Server 2012 and R2 Standard and Datacenter editions. Windows Server 2012 Storage is not supported.
We recommend you deploy your machines to Azure Arc in preparation for when the related Azure services deliver supported functionality to manage ESU. Once these machines are onboarded to Azure Arc-enabled servers, you'll have visibility into their ESU coverage and enroll through the Azure portal or using Azure Policy one month before Windows Server 2012 end of support. Billing for this service starts from October 2023, after Windows Server 2012 end of support. ++ > [!NOTE] > In order to purchase ESUs, you must have Software Assurance through Volume Licensing Programs such as an Enterprise Agreement (EA), Enterprise Agreement Subscription (EAS), Enrollment for Education Solutions (EES), or Server and Cloud Enrollment (SCE). Alternatively, if your Windows Server 2012/2012 R2 machines are licensed through SPLA or with a Server Subscription, Software Assurance is not required to purchase ESUs.
azure-functions Monitor Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/monitor-functions.md
The metrics and logs you can collect are discussed in the following sections.
## Analyzing metrics
-You can analyze metrics for *Azure Functions* with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool.
+You can analyze metrics for *Azure Functions* with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md) for details on using this tool.
For a list of the platform metrics collected for Azure Functions, see [Monitoring *Azure Functions* data reference metrics](monitor-functions-reference.md#metrics).
azure-monitor Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-ad-authentication.md
Application Insights now supports [Microsoft Entra authentication](../../active-directory/authentication/overview-authentication.md). By using Microsoft Entra ID, you can ensure that only authenticated telemetry is ingested in your Application Insights resources.
-Using various authentication systems can be cumbersome and risky because it's difficult to manage credentials at scale. You can now choose to [opt out of local authentication](#disable-local-authentication) to ensure only telemetry exclusively authenticated by using [managed identities](../../active-directory/managed-identities-azure-resources/overview.md) and [Microsoft Entra ID](../../active-directory/fundamentals/active-directory-whatis.md) is ingested in your resource. This feature is a step to enhance the security and reliability of the telemetry used to make critical operational ([alerting](../alerts/alerts-overview.md#what-are-azure-monitor-alerts)and [autoscale](../autoscale/autoscale-overview.md#overview-of-autoscale-in-azure)) and business decisions.
+Using various authentication systems can be cumbersome and risky because it's difficult to manage credentials at scale. You can now choose to [opt out of local authentication](#disable-local-authentication) to ensure only telemetry exclusively authenticated by using [managed identities](../../active-directory/managed-identities-azure-resources/overview.md) and [Microsoft Entra ID](../../active-directory/fundamentals/active-directory-whatis.md) is ingested in your resource. This feature is a step to enhance the security and reliability of the telemetry used to make critical operational ([alerting](../alerts/alerts-overview.md#what-are-azure-monitor-alerts) and [autoscaling](../autoscale/autoscale-overview.md#overview-of-autoscale-in-azure)) and business decisions.
> [!NOTE]
-> Note
-> This document covers data ingestion into Application Insights using Microsoft Entra ID. authentication. For information on querying data within Application Insights, see [Query Application Insights using Microsoft Entra authentication](./app-insights-azure-ad-api.md).
+> This document covers data ingestion into Application Insights using Microsoft Entra ID-based authentication. For information on querying data within Application Insights, see [Query Application Insights using Microsoft Entra authentication](./app-insights-azure-ad-api.md).
## Prerequisites
->
-The following prerequisites enable Microsoft Entra authenticated ingestion. You need to:
+The following preliminary steps are required to enable Microsoft Entra authenticated ingestion. You need to:
- Be in the public cloud.-- Have familiarity with:
- - [Managed identity](../../active-directory/managed-identities-azure-resources/overview.md).
- - [Service principal](../../active-directory/develop/howto-create-service-principal-portal.md).
- - [Assigning Azure roles](../../role-based-access-control/role-assignments-portal.md).
+- Be familiar with:
+ - [Managed identity](../../active-directory/managed-identities-azure-resources/overview.md).
+ - [Service principal](../../active-directory/develop/howto-create-service-principal-portal.md).
+ - [Assigning Azure roles](../../role-based-access-control/role-assignments-portal.md).
- Have an Owner role to the resource group to grant access by using [Azure built-in roles](../../role-based-access-control/built-in-roles.md). - Understand the [unsupported scenarios](#unsupported-scenarios).
The following prerequisites enable Microsoft Entra authenticated ingestion. You
The following SDKs and features are unsupported for use with Microsoft Entra authenticated ingestion: -- [Application Insights Java 2.x SDK](deprecated-java-2x.md#monitor-dependencies-caught-exceptions-and-method-execution-times-in-java-web-apps).<br>
+- [Application Insights Java 2.x SDK](deprecated-java-2x.md#monitor-dependencies-caught-exceptions-and-method-execution-times-in-java-web-apps).<br />
Microsoft Entra authentication is only available for Application Insights Java Agent greater than or equal to 3.2.0. - [ApplicationInsights JavaScript web SDK](javascript.md). - [Application Insights OpenCensus Python SDK](/previous-versions/azure/azure-monitor/app/opencensus-python) with Python version 3.4 and 3.5. - [Certificate/secret-based Microsoft Entra ID](../../active-directory/authentication/active-directory-certificate-based-authentication-get-started.md) isn't recommended for production. Use managed identities instead.-- On-by-default codeless monitoring (for languages) for Azure App Service, Azure Virtual Machines/Azure Virtual Machine Scale Sets, and Azure Functions.
+- On-by-default [autoinstrumentation/codeless monitoring](codeless-overview.md) (for languages) for Azure App Service, Azure Virtual Machines/Azure Virtual Machine Scale Sets, and Azure Functions.
- [Availability tests](availability-overview.md). - [Profiler](profiler-overview.md).
Application Insights .NET SDK supports the credential classes provided by [Azure
- We recommend `DefaultAzureCredential` for local development. - We recommend `ManagedIdentityCredential` for system-assigned and user-assigned managed identities.
- - For system-assigned, use the default constructor without parameters.
- - For user-assigned, provide the client ID to the constructor.
+ - For system-assigned, use the default constructor without parameters.
+ - For user-assigned, provide the client ID to the constructor.
- We recommend `ClientSecretCredential` for service principals.
- - Provide the tenant ID, client ID, and client secret to the constructor.
+ - Provide the tenant ID, client ID, and client secret to the constructor.
The following example shows how to manually create and configure `TelemetryConfiguration` by using .NET:
appInsights.defaultClient.config.aadTokenCredential = credential;
1. Add the JSON configuration to the *ApplicationInsights.json* configuration file depending on the authentication you're using. We recommend using managed identities. > [!NOTE]
-> For more information about migrating from the 2.X SDK to the 3.X Java agent, see [Upgrading from Application Insights Java 2.x SDK](java-standalone-upgrade-from-2x.md).
+> For more information about migrating from the `2.X` SDK to the `3.X` Java agent, see [Upgrading from Application Insights Java 2.x SDK](java-standalone-upgrade-from-2x.md).
#### System-assigned managed identity
The following example shows how to configure the Java agent to use user-assigned
} } ```+ :::image type="content" source="media/azure-ad-authentication/user-assigned-managed-identity.png" alt-text="Screenshot that shows user-assigned managed identity." lightbox="media/azure-ad-authentication/user-assigned-managed-identity.png"::: #### Client secret
The following example shows how to configure the Java agent to use a service pri
} } ```+ :::image type="content" source="media/azure-ad-authentication/client-secret-tenant-id.png" alt-text="Screenshot that shows the client secret with the tenant ID and the client ID." lightbox="media/azure-ad-authentication/client-secret-tenant-id.png"::: :::image type="content" source="media/azure-ad-authentication/client-secret-cs.png" alt-text="Screenshot that shows the Client secrets section with the client secret." lightbox="media/azure-ad-authentication/client-secret-cs.png":::
The following example shows how to configure the Java agent to use a service pri
The `APPLICATIONINSIGHTS_AUTHENTICATION_STRING` environment variable lets Application Insights authenticate to Microsoft Entra ID and send telemetry.
- - For system-assigned identity:
+- For system-assigned identity:
- | App setting | Value |
- | -- | |
- | APPLICATIONINSIGHTS_AUTHENTICATION_STRING | `Authorization=AAD` |
+| App setting | Value |
+| -- | |
+| APPLICATIONINSIGHTS_AUTHENTICATION_STRING | `Authorization=AAD` |
- - For user-assigned identity:
+- For user-assigned identity:
- | App setting | Value |
- | - | -- |
- | APPLICATIONINSIGHTS_AUTHENTICATION_STRING | `Authorization=AAD;ClientId={Client id of the User-Assigned Identity}` |
+| App setting | Value |
+| - | -- |
+| APPLICATIONINSIGHTS_AUTHENTICATION_STRING | `Authorization=AAD;ClientId={Client id of the User-Assigned Identity}` |
Set the `APPLICATIONINSIGHTS_AUTHENTICATION_STRING` environment variable using this string.
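For example, on App Service the setting could be applied with a CLI sketch like the following (resource group and app names are placeholders; for a user-assigned identity, append the `ClientId={...}` segment shown in the table above):

```azurecli-interactive
az webapp config appsettings set \
    --resource-group <resource-group> \
    --name <app-name> \
    --settings APPLICATIONINSIGHTS_AUTHENTICATION_STRING="Authorization=AAD"
```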
is included starting with beta version [opencensus-ext-azure 1.1b0](https://pypi
Construct the appropriate [credentials](/python/api/overview/azure/identity-readme#credentials) and pass them into the constructor of the Azure Monitor exporter. Make sure your connection string is set up with the instrumentation key and ingestion endpoint of your resource.
-The `OpenCensus`` Azure Monitor exporters support these authentication types. We recommend using managed identities in production environments.
+The `OpenCensus` Azure Monitor exporters support these authentication types. We recommend using managed identities in production environments.
#### System-assigned managed identity
tracer = Tracer(
) ... ```+ ## Disable local authentication
When developing a custom client to obtain an access token from Microsoft Entra I
If you're using sovereign clouds, you can find the audience information in the connection string as well. The connection string follows this structure:
-_InstrumentationKey={profile.InstrumentationKey};IngestionEndpoint={ingestionEndpoint};LiveEndpoint={liveDiagnosticsEndpoint};AADAudience={aadAudience}_
+*InstrumentationKey={profile.InstrumentationKey};IngestionEndpoint={ingestionEndpoint};LiveEndpoint={liveDiagnosticsEndpoint};AADAudience={aadAudience}*
The audience parameter, AADAudience, may vary depending on your specific environment.
Next, you should review the Application Insights resource's access control. The
The Application Insights .NET SDK emits error logs by using the event source. To learn more about collecting event source logs, see [Troubleshooting no data - collect logs with PerfView](asp-net-troubleshoot-no-data.md#PerfView). If the SDK fails to get a token, the exception message is logged as
-`Failed to get AAD Token. Error message: `.
+`Failed to get AAD Token. Error message:`.
### [Node.js](#tab/nodejs)
If the following WARN message is seen in the log file `WARN c.m.a.TelemetryChann
If you're using Fiddler, you might see the response header `HTTP/1.1 403 Forbidden - provided credentials do not grant the access to ingest the telemetry into the component`. The root cause might be one of the following reasons:+ - You've created the resource with a system-assigned managed identity or associated a user-assigned identity with it. However, you might have forgotten to add the Monitoring Metrics Publisher role to the resource (if using SAMI) or the user-assigned identity (if using UAMI). - You've provided the right credentials to get the access tokens, but the credentials don't belong to the right Application Insights resource. Make sure you see your resource (VM or app service) or user-assigned identity with Monitoring Metrics Publisher roles in your Application Insights resource.
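If the role assignment is missing, a hedged sketch of granting it with the Azure CLI follows (the principal ID and resource IDs are placeholders):

```azurecli-interactive
az role assignment create \
    --assignee <managed-identity-principal-id> \
    --role "Monitoring Metrics Publisher" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/microsoft.insights/components/<application-insights-resource-name>"
```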
You're probably missing a credential or your credential is set to `None`, but yo
This error usually occurs when the provided credentials don't grant access to ingest telemetry for the Application Insights resource. Make sure your Application Insights resource has the correct role assignments. + ## Next steps
-* [Monitor your telemetry in the portal](overview-dashboard.md)
-* [Diagnose with Live Metrics Stream](live-stream.md)
-* [Query Application Insights using Microsoft Entra authentication](./app-insights-azure-ad-api.md)
+- [Monitor your telemetry in the portal](overview-dashboard.md)
+- [Diagnose with Live Metrics Stream](live-stream.md)
+- [Query Application Insights using Microsoft Entra authentication](./app-insights-azure-ad-api.md)
azure-monitor Get Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/get-metric.md
SeverityLevel.Error);
* [Application Insights API for custom events and metrics](api-custom-events-metrics.md) * [Learn more](./worker-service.md) about monitoring worker service applications. * Use [log-based and pre-aggregated metrics](./pre-aggregated-metrics-log-metrics.md).
-* Get started with [metrics explorer](../essentials/metrics-getting-started.md).
+* Analyze metrics with [metrics explorer](../essentials/analyze-metrics.md).
* Learn how to enable Application Insights for [ASP.NET Core applications](asp-net-core.md). * Learn how to enable Application Insights for [ASP.NET applications](asp-net.md).
azure-monitor Javascript Feature Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-feature-extensions.md
Telemetry data generated from the click events are stored as `customEvents` in t
### `name` The `name` column of the `customEvent` is populated based on the following rules:
- 1. The `id` provided in the `data-*-id`, which means it must start with `data` and end with `id`, is used as the `customEvent` name. For example, if the clicked HTML element has the attribute `"data-sample-id"="button1"`, then `"button1"` is the `customEvent` name.
- 1. If no such attribute exists and if the `useDefaultContentNameOrId` is set to `true` in the configuration, the clicked element's HTML attribute `id` or content name of the element is used as the `customEvent` name. If both `id` and the content name are present, precedence is given to `id`.
+ 1. If [`customDataPrefix`](#customdataprefix) isn't declared in the advanced configuration, the `id` provided in the `data-id` is used as the `customEvent` name.
+ 1. If [`customDataPrefix`](#customdataprefix) is declared, the `id` provided in the `data-*-id`, which means it must start with `data` and end with `id`, is used as the `customEvent` name. For example, if the clicked HTML element has the attribute `"data-sample-id"="button1"`, then `"button1"` is the `customEvent` name.
+ 1. If the `data-id` or `data-*-id` attribute doesn't exist and if [`useDefaultContentNameOrId`](#icustomdatatags) is set to `true`, the clicked element's HTML attribute `id` or content name of the element is used as the `customEvent` name. If both `id` and the content name are present, precedence is given to `id`.
1. If `useDefaultContentNameOrId` is `false`, the `customEvent` name is `"not_specified"`. We recommend setting `useDefaultContentNameOrId` to `true` for generating meaningful data.
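For readers who find these rules easier to follow in code, the following is a minimal sketch of a Click Analytics setup in which rule 1 applies. It assumes the `@microsoft/applicationinsights-web` and `@microsoft/applicationinsights-clickanalytics-js` packages; the connection string and the `data-id` value are placeholders.

```typescript
import { ApplicationInsights } from "@microsoft/applicationinsights-web";
import { ClickAnalyticsPlugin } from "@microsoft/applicationinsights-clickanalytics-js";

const clickPlugin = new ClickAnalyticsPlugin();
const clickPluginConfig = {
  autoCapture: true,
  dataTags: {
    // Fall back to the element's id or content name when no data-id attribute exists (rule 3).
    useDefaultContentNameOrId: true,
  },
};

const appInsights = new ApplicationInsights({
  config: {
    connectionString: "<your-connection-string>", // placeholder
    extensions: [clickPlugin],
    extensionConfig: {
      [clickPlugin.identifier]: clickPluginConfig,
    },
  },
});
appInsights.loadAppInsights();

// With no customDataPrefix declared, a click on
//   <button data-id="button1">Buy</button>
// is expected to produce a customEvent named "button1" (rule 1).
```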
+### `contentName`
+
+If you have the [`contentName` callback function](#ivaluecallback) in advanced configuration defined, the `contentName` column of the `customEvent` is populated based on the following rules:
+
+- For a clicked HTML `<a>` element, the plugin attempts to collect the value of its innerText (text) attribute. If the plugin can't find this attribute, it attempts to collect the value of its innerHtml attribute.
+- For a clicked HTML `<img>` or `<area>` element, the plugin collects the value of its `alt` attribute.
+- For all other clicked HTML elements, `contentName` is populated based on the following rules, which are listed in order of precedence:
+
+ 1. The value of the `value` attribute for the element
+ 1. The value of the `name` attribute for the element
+ 1. The value of the `alt` attribute for the element
+ 1. The value of the innerText attribute for the element
+ 1. The value of the `id` attribute for the element
+ ### `parentId` key To populate the `parentId` key within `customDimensions` of the `customEvent` table in the logs, declare the tag `parentDataTag` or define the `data-parentid` attribute.
For examples showing which value is fetched as the `parentId` for different conf
### `customDataPrefix`
-The `customDataPrefix` provides the user the ability to configure a data attribute prefix to help identify where heart is located within the individual's codebase. The prefix should always be lowercase and start with `data-`. For example:
+The [`customDataPrefix` option in advanced configuration](#icustomdatatags) provides the user the ability to configure a data attribute prefix to help identify where heart is located within the individual's codebase. The prefix must always be lowercase and start with `data-`. For example:
- `data-heart-` - `data-team-name-` - `data-example-`
-n HTML, the `data-*` global attributes are called custom data attributes that allow proprietary information to be exchanged between the HTML and its DOM representation by scripts. Older browsers like Internet Explorer and Safari drop attributes they don't understand, unless they start with `data-`.
+In HTML, the `data-*` global attributes are called custom data attributes that allow proprietary information to be exchanged between the HTML and its DOM representation by scripts. Older browsers like Internet Explorer and Safari drop attributes they don't understand, unless they start with `data-`.
You can replace the asterisk (`*`) in `data-*` with any name following the [production rule of XML names](https://www.w3.org/TR/REC-xml/#NT-Name) with the following restrictions. - The name must not start with "xml," whatever case is used for the letters.
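As a hedged illustration of the prefix rules, the sketch below changes only the `dataTags` section of the plugin configuration shown earlier; the `data-heart-` prefix and the attribute values are hypothetical.

```typescript
// Only the dataTags section changes; the surrounding plugin setup is the same as in the earlier sketch.
const clickPluginConfig = {
  autoCapture: true,
  dataTags: {
    useDefaultContentNameOrId: true,
    // Must be lowercase and start with "data-".
    customDataPrefix: "data-heart-",
  },
};

// With this prefix declared, an element such as
//   <button data-heart-id="checkout">Check out</button>
// is expected to produce a customEvent named "checkout",
// because the attribute starts with "data" and ends with "id".
```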
azure-monitor Javascript Framework Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-framework-extensions.md
It measures time from the [`ComponentDidMount`](https://react.dev/reference/reac
##### Explore your data
-Use [Metrics Explorer](../essentials/metrics-getting-started.md) to plot a chart for the custom metric name `React Component Engaged Time (seconds)` and [split](../essentials/metrics-getting-started.md#apply-dimension-filters-and-splitting) this custom metric by `Component Name`.
+Use [Azure Monitor metrics explorer](../essentials/analyze-metrics.md) to plot a chart for the custom metric name `React Component Engaged Time (seconds)` and [split](../essentials/analyze-metrics.md#use-dimension-filters-and-splitting) this custom metric by `Component Name`.
:::image type="content" source="./media/javascript-react-plugin/chart.png" lightbox="./media/javascript-react-plugin/chart.png" alt-text="Screenshot that shows a chart that displays the custom metric React Component Engaged Time (seconds) split by Component Name":::
azure-monitor Opentelemetry Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-configuration.md
For more information about Java, see the [Java supplemental documentation](java-
```sh npm install @opentelemetry/api npm install @opentelemetry/exporter-trace-otlp-http
- npm install @opentelemetry/@opentelemetry/sdk-trace-base
+ npm install @opentelemetry/sdk-trace-base
npm install @opentelemetry/sdk-trace-node ```
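The packages above wire together along the usual OpenTelemetry JavaScript pattern. The following is a generic sketch, not necessarily the exact configuration this article builds toward; the OTLP endpoint URL is a placeholder.

```typescript
import { trace } from "@opentelemetry/api";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";
import { BatchSpanProcessor } from "@opentelemetry/sdk-trace-base";
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";

// Placeholder endpoint; point this at your own OTLP-compatible collector.
const exporter = new OTLPTraceExporter({ url: "http://localhost:4318/v1/traces" });

const provider = new NodeTracerProvider();
provider.addSpanProcessor(new BatchSpanProcessor(exporter));
provider.register();

// Spans created through the OpenTelemetry API now flow to the OTLP endpoint.
const tracer = trace.getTracer("sample-app");
tracer.startActiveSpan("demo-operation", (span) => {
  span.end();
});
```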
azure-monitor Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/data-sources.md
Most Azure services will send [platform metrics](essentials/data-platform-metric
| Destination | Description | Reference | |:|:|:|
-| Azure Monitor Metrics | Platform metrics will write to the Azure Monitor metrics database with no configuration. Access platform metrics from Metrics Explorer. | [Getting started with Azure Metrics Explorer](essentials/metrics-getting-started.md)<br>[Supported metrics with Azure Monitor](essentials/metrics-supported.md) |
+| Azure Monitor Metrics | Platform metrics will write to the Azure Monitor metrics database with no configuration. Access platform metrics from Metrics Explorer. | [Analyze metrics with Azure Monitor metrics explorer](essentials/analyze-metrics.md) <br>[Supported metrics with Azure Monitor](essentials/metrics-supported.md) |
| Azure Monitor Logs | Copy platform metrics to Logs for trending and other analysis using Log Analytics. | [Azure diagnostics direct to Log Analytics](essentials/resource-logs.md#send-to-log-analytics-workspace) | | Azure Monitor Change Analysis | Change Analysis detects various types of changes, from the infrastructure layer through application deployment. | [Use Change Analysis in Azure Monitor](./change/change-analysis.md) | | Event Hubs | Stream metrics to other locations using Event Hubs. |[Stream Azure monitoring data to an event hub for consumption by an external tool](essentials/stream-monitoring-data-event-hubs.md) |
azure-monitor Analyze Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/analyze-metrics.md
+
+ Title: Analyze metrics with Azure Monitor metrics explorer
+description: Learn how to analyze metrics with Azure Monitor metrics explorer by creating metrics charts, setting chart dimensions, time ranges, aggregation, filters, splitting, and sharing.
++++ Last updated : 10/23/2023+++
+# Analyze metrics with Azure Monitor metrics explorer
+
+In Azure Monitor, [metrics](data-platform-metrics.md) are a series of measured values and counts that are collected and stored over time. Metrics can be standard (also called *platform*) or custom. The Azure platform provides standard metrics. These metrics reflect the health and usage statistics of your Azure resources.
+
+In addition to standard metrics, your application emits extra _custom_ performance indicators or business-related metrics. Custom metrics can be emitted by any application or Azure resource and collected by using [Azure Monitor Insights](../insights/insights-overview.md), agents running on virtual machines, or [OpenTelemetry](../app/opentelemetry-enable.md).
+
+Azure Monitor metrics explorer is a component of the Azure portal that helps you plot charts, visually correlate trends, and investigate spikes and dips in metrics values. You can use metrics explorer to investigate the health and utilization of your resources.
+
+Watch the following video for an overview of creating and working with metrics charts in Azure Monitor metrics explorer.
+
+> [!VIDEO https://www.microsoft.com/videoplayer/embed/RE4qO59]
+
+## Create a metric chart
+
+You can open metrics explorer from the **Azure Monitor overview** page, or from the **Monitoring** section of any resource. In the Azure portal, select **Metrics**.
++
+If you open metrics explorer from Azure Monitor, the **Select a scope** page opens. Set the **Subscription**, **Resource**, and region **Location** fields to the resource to explore. If you open metrics explorer for a specific resource, the scope is prepopulated with information about that resource.
+
+Here's a summary of configuration tasks for creating a chart to analyze metrics:
+
+- [Select your resource and metric](#set-the-resource-scope) to see the chart. You can choose to work with one or multiple resources and view a single or multiple metrics.
+
+- [Configure the time settings](#configure-the-time-range) that are relevant for your investigation. You can set the time granularity to allow for pan and zoom on your chart, and configure aggregations to show values like the maximum and minimum.
+
+- [Use dimension filters and splitting](#use-dimension-filters-and-splitting) to analyze which segments of the metric contribute to the overall metric value and identify possible outliers in the data.
+
+- Work with advanced settings to customize your chart. [Lock the y-axis range](#lock-the-y-axis-range) to identify small data variations that might have significant consequences. [Correlate metrics to logs](#correlate-metrics-to-logs) to diagnose the cause of anomalies in your chart.
+
+- [Configure alerts](../alerts/alerts-metric-overview.md) and [receive notifications](#set-up-alert-rules) when the metric value exceeds or drops below a threshold.
+
+- [Share your chart](#share-your-charts) or pin it to dashboards.
+
+## Set the resource scope
+
+The resource **scope picker** lets you scope your chart to view metrics for a single resource or for multiple resources. To view metrics across multiple resources, the resources must be within the same subscription and region location.
+
+> [!NOTE]
+> You must have _Monitoring Reader_ permission at the subscription level to visualize metrics across multiple resources, resource groups, or a subscription. For more information, see [Assign Azure roles in the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+
+### Select a single resource
+
+1. Choose **Select a scope**.
+
+ :::image source="./media/analyze-metrics/scope-picker.png" alt-text="Screenshot that shows how to open the resource scope picker for metrics explorer.":::
+
+1. Use the scope picker to select the resources whose metrics you want to see. If you open metrics explorer for a specific resource, the scope should be populated.
+
+ For some resources, you can view only one resource's metrics at a time. On the **Resource types** menu, these resources are shown in the **All resource types** section.
+
+ :::image source="./media/analyze-metrics/single-resource-scope.png" alt-text="Screenshot that shows available resources in the scope picker." lightbox="./media/analyze-metrics/single-resource-scope.png":::
+
+1. Select a resource. The picker updates to show all subscriptions and resource groups that contain the selected resource.
+
+ :::image source="./media/analyze-metrics/available-single-resource.png" alt-text="Screenshot that shows a single resource." lightbox="./media/analyze-metrics/available-single-resource.png":::
+
+ > [!TIP]
+ > If you want the capability to view the metrics for multiple resources at the same time, or to view metrics across a subscription or resource group, select **Upvote**.
+
+1. When you're satisfied with your selection, select **Apply**.
+
+### Select multiple resources
+
+You can see which metrics can be queried across multiple resources at the top of the **Resource types** menu in the scope picker.
++
+1. To visualize metrics over multiple resources, start by selecting multiple resources within the resource scope picker.
+
+ :::image source="./media/analyze-metrics/select-multiple-resources.png" alt-text="Screenshot that shows how to select multiple resources in the resource scope picker.":::
+
+ The resources you select must be within the same resource type, location, and subscription. Resources that don't meet these criteria aren't selectable.
+
+1. Select **Apply**.
+
+### Select a resource group or subscription
+
+For types that are compatible with multiple resources, you can query for metrics across a subscription or multiple resource groups.
+
+1. Start by selecting a subscription or one or more resource groups.
+
+ :::image source="./media/analyze-metrics/query-across-multiple-resource-groups.png" alt-text="Screenshot that shows how to query across multiple resource groups.":::
+
+1. Select a resource type and location.
+
+ :::image source="./media/analyze-metrics/select-resource-group.png" alt-text="Screenshot that shows how to select resource groups in the resource scope picker.":::
+
+1. Expand the selected scopes to verify the resources your selections apply to.
+
+ :::image source="./media/analyze-metrics/verify-selected-resources.png" alt-text="Screenshot that shows the selected resources within the groups.":::
+
+1. Select **Apply**.
+
+## Configure the time range
+
+The **time picker** lets you configure the time range for your metric chart to view data that's relevant to your monitoring scenario. By default, the chart shows the most recent 24 hours of metrics data.
+
+> [!NOTE]
+> [Most metrics in Azure are stored for 93 days](../essentials/data-platform-metrics.md#retention-of-metrics). You can query no more than 30 days of data on any single chart. You can [pan](#pan-across-metrics-data) the chart to view the full retention. The 30-day limitation doesn't apply to [log-based metrics](../app/pre-aggregated-metrics-log-metrics.md#log-based-metrics).
+
+Use the time picker to change the **Time range** for your data, such as the last 12 hours or the last 30 days.
++
+In addition to changing the time range with the time picker, you can pan and zoom by using the controls in the chart area.
+
+### Pan across metrics data
+
+To pan, select the left and right arrows at the edge of the chart. The arrow control moves the selected time range back and forward by one half of the chart's time span. For example, if you're viewing the past 24 hours, selecting the left arrow shifts the time range to span from a day and a half ago to 12 hours ago.
++
+### Zoom into metrics data
+
+You can configure the _time granularity_ of the chart data to support zoom in and zoom out for the time range. Use the **time brush** to investigate an interesting area of the chart like a spike or a dip in the data. Select an area on the chart and the chart zooms in to show more detail for the selected area based on your granularity settings. If the time grain is set to **Automatic**, zooming selects a smaller time grain. The new time range applies to all charts in metrics explorer.
++
+## View multiple metric lines and charts
+
+You can create charts that plot multiple metric lines or show multiple metric charts at the same time. This functionality allows you to:
+
+- Correlate related metrics on the same graph to see how one value relates to another.
+- Display metrics that use different units of measure in close proximity.
+- Visually aggregate and compare metrics from multiple resources.
+
+Suppose you have five storage accounts and you want to know how much space they consume together. You can create a stacked area chart that shows the individual values and the sum of all the values at points in time.
+
+After you create a chart, select **Add metric** to add another metric to the same chart.
++
+### Add multiple charts
+
+Typically, your charts shouldn't mix metrics that use different units of measure. For example, avoid mixing one metric that uses milliseconds with another that uses kilobytes. Also avoid mixing metrics whose scales differ significantly. In these cases, consider using multiple charts instead.
+
+- To create another chart that uses a different metric, select **New chart**.
+
+- To reorder or delete multiple charts, select **More options** (...), and then select the **Move up**, **Move down**, or **Delete** action.
+
+ :::image source="./media/analyze-metrics/multiple-charts.png" alt-text="Screenshot that shows multiple charts." lightbox="./media/analyze-metrics/multiple-charts.png":::
+
+### Use different line colors
+
+Chart lines are automatically assigned a color from a default palette. To change the color of a chart line, select the colored bar in the legend that corresponds to the line on the chart. Use the **color picker** to select the line color.
++
+Customized colors are preserved when you pin the chart to a dashboard. The following section shows how to pin a chart.
+
+## Configure aggregation
+
+When you add a metric to a chart, metrics explorer applies a default aggregation. The default makes sense in basic scenarios, but you can use a different aggregation to gain more insights about the metric.
+
+Before you use different aggregations on a chart, you should understand how metrics explorer handles them. Metrics are a series of measurements (or "metric values") that are captured over a time period. When you plot a chart, the values of the selected metric are separately aggregated over the _time granularity_.
+
+You select the size of the time grain by using the time picker in metrics explorer. If you don't explicitly select the time grain, metrics explorer uses the currently selected time range by default. After metrics explorer determines the time grain, the metric values that it captures during each time grain are aggregated on the chart, one data point per time grain.
+
+Suppose a chart shows the *Server response time* metric. It uses the average aggregation over the time span of the last 24 hours.
++
+In this scenario, if you set the time granularity to 30 minutes, metrics explorer draws the chart from 48 aggregated data points. That is, it uses two data points per hour for 24 hours. The line chart connects 48 dots in the chart plot area. Each data point represents the average of all captured response times for server requests that occurred during each of the relevant 30-minute time periods. If you switch the time granularity to 15 minutes, you get 96 aggregated data points. That is, you get four data points per hour for 24 hours.
+
+Metrics explorer has five aggregation types:
+
+- **Sum**: The sum of all values captured during the aggregation interval. The sum aggregation is sometimes called the *total* aggregation.
+- **Count**: The number of measurements captured during the aggregation interval.
+
+ When the metric is always captured with the value of 1, the count aggregation is equal to the sum aggregation. This scenario is common when the metric tracks the count of distinct events and each measurement represents one event. For example, code that counts incoming requests emits a metric record every time a new request arrives.
+
+- **Average**: The average of the metric values captured during the aggregation interval.
+- **Min**: The smallest value captured during the aggregation interval.
+- **Max**: The largest value captured during the aggregation interval.
++
+Metrics explorer hides the aggregations that are irrelevant and can't be used.
+
+For more information about how metric aggregation works, see [Azure Monitor metrics aggregation and display explained](metrics-aggregation-explained.md).
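The portal applies these aggregations for you, but the same time grain and aggregation choices can be expressed programmatically. The following sketch assumes the `@azure/monitor-query` and `@azure/identity` packages, which this article doesn't cover; the resource ID and metric name are placeholders. Requesting 24 hours of data at a 30-minute granularity should return roughly the 48 aggregated points described above.

```typescript
import { DefaultAzureCredential } from "@azure/identity";
import { AggregationType, MetricsQueryClient } from "@azure/monitor-query";

const client = new MetricsQueryClient(new DefaultAzureCredential());

// Placeholder resource ID; substitute the resource you're exploring.
const resourceId =
  "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>";

async function main(): Promise<void> {
  const result = await client.queryResource(resourceId, ["Transactions"], {
    timespan: { duration: "P1D" }, // last 24 hours
    granularity: "PT30M", // one aggregated data point per 30 minutes
    aggregations: [AggregationType.Average], // same choice as the Average aggregation in the portal
  });

  for (const metric of result.metrics) {
    for (const series of metric.timeseries) {
      for (const point of series.data ?? []) {
        console.log(point.timeStamp, point.average);
      }
    }
  }
}

main().catch(console.error);
```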
+
+## Use dimension filters and splitting
+
+Filtering and splitting are powerful diagnostic tools for metrics that have dimensions. You can implement these options to analyze which segments of the metric contribute to the overall metric value and identify possible outliers in the metric data. These features show how various metric segments or dimensions affect the overall value of the metric.
+
+**Filtering** lets you choose which dimension values are included in the chart. For example, you might want to show only successful requests when you chart the *server response time* metric. To do so, you apply the filter on the *success of request* dimension.
+
+**Splitting** controls whether the chart displays separate lines for each value of a dimension or aggregates the values into a single line. Splitting allows you to visualize how different segments of the metric compare with each other. You can see one line for an average CPU usage across all server instances, or you can see separate lines for each server.
+
+> [!TIP]
+> To hide segments that are irrelevant for your scenario and to make your charts easier to read, use both filtering and splitting on the same dimension.
+
+### Add filters
+
+You can apply filters to charts whose metrics have dimensions. Consider a *Transaction count* metric that has a *Response type* dimension. This dimension indicates whether the response from transactions succeeded or failed. If you filter on this dimension, metrics explorer displays a chart line for only successful or only failed transactions.
+
+1. Above the chart, select **Add filter** to open the **filter picker**.
+
+1. Select a dimension from the **Property** dropdown list.
+
+ :::image type="content" source="./media/analyze-metrics/filter-property.png" alt-text="Screenshot that shows the dropdown list for filter properties in metrics explorer." lightbox="./media/analyze-metrics/filter-property.png":::
+
+1. Select the operator that you want to apply against the dimension (or _property_). The default operator is equals (`=`).
+
+ :::image type="content" source="./media/analyze-metrics/filter-operator.png" alt-text="Screenshot that shows the operator that you can use with the filter.":::
+
+1. Select which dimension values you want to apply to the filter when you're plotting the chart. This example shows filtering out the successful storage transactions.
+
+ :::image type="content" source="./media/analyze-metrics/filter-values.png" alt-text="Screenshot that shows the dropdown list for filter values in metrics explorer.":::
+
+1. After you select the filter values, click outside the **filter picker** to complete the action. The chart shows how many storage transactions have failed.
+
+ :::image type="content" source="./media/analyze-metrics/filtered-chart.png" alt-text="Screenshot that shows the successful filtered storage transactions in the updated chart in metrics explorer." lightbox="./media/analyze-metrics/filtered-chart.png":::
+
+1. Repeat these steps to apply multiple filters to the same charts.
+
+### Apply metric splitting
+
+You can split a metric by dimension to visualize how different segments of the metric compare. Splitting can also help you identify the outlying segments of a dimension.
+
+1. Above the chart, select **Apply splitting** to open the **segment picker**.
+
+1. Choose the dimensions to use to segment your chart.
+
+ :::image type="content" source="./media/analyze-metrics/apply-splitting.png" alt-text="Screenshot that shows the selected dimension on which to segment the chart for splitting.":::
+
+ The chart shows multiple lines with one line for each dimension segment.
+
+ :::image type="content" source="./media/analyze-metrics/segment-dimension.png" alt-text="Screenshot that shows multiple lines, one for each segment of dimension." lightbox="./media/analyze-metrics/segment-dimension.png":::
+
+1. Choose a limit on the number of values to display after you split by the selected dimension. The default limit is 10, as shown in the preceding chart. The range of the limit is 1 to 50.
+
+ :::image type="content" source="./media/analyze-metrics/segment-dimension-limit.png" alt-text="Screenshot that shows the split limit, which restricts the number of values after splitting." lightbox="./media/analyze-metrics/segment-dimension-limit.png":::
+
+1. Choose the sort order on segments: **Descending** (default) or **Ascending**.
+
+ :::image type="content" source="./media/analyze-metrics/segment-dimension-sort.png" alt-text="Screenshot that shows the sort order on split values." lightbox="./media/analyze-metrics/segment-dimension-sort.png":::
+
+1. Segment by multiple segments by selecting multiple dimensions from the **Values** dropdown list. The legend shows a comma-separated list of dimension values for each segment.
+
+ :::image type="content" source="./media/analyze-metrics/segment-dimension-multiple.png" alt-text="Screenshot that shows multiple segments selected, and the corresponding chart." lightbox="./media/analyze-metrics/segment-dimension-multiple.png":::
+
+1. Click outside the segment picker to complete the action and update the chart.
+
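Outside the portal, both filtering and splitting map to the metric filter expression: filtering pins a dimension to a specific value, while splitting requests every value with `eq '*'`. The sketch below reuses the hypothetical `@azure/monitor-query` setup from the aggregation section; the dimension name and values come from the storage *Transactions* example.

```typescript
import { DefaultAzureCredential } from "@azure/identity";
import { AggregationType, MetricsQueryClient } from "@azure/monitor-query";

const client = new MetricsQueryClient(new DefaultAzureCredential());
const resourceId = "<storage-account-resource-id>"; // placeholder

async function compareSegments(): Promise<void> {
  // Filtering: keep only one ResponseType value, which plots as a single line.
  const filtered = await client.queryResource(resourceId, ["Transactions"], {
    timespan: { duration: "P1D" },
    aggregations: [AggregationType.Count],
    filter: "ResponseType eq 'Success'",
  });

  // Splitting: ask for every ResponseType value, which plots as one line per segment.
  const split = await client.queryResource(resourceId, ["Transactions"], {
    timespan: { duration: "P1D" },
    aggregations: [AggregationType.Count],
    filter: "ResponseType eq '*'",
  });

  console.log(filtered.metrics[0]?.timeseries.length); // expected: 1
  console.log(split.metrics[0]?.timeseries.length); // expected: one per ResponseType value
}

compareSegments().catch(console.error);
```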
+### Split metrics for multiple resources
+
+When you plot a metric for multiple resources, you can choose **Apply splitting** to split by resource ID or resource group. The split allows you to compare a single metric across multiple resources or resource groups. The following chart shows the percentage CPU across nine virtual machines. When you split by resource ID, you see how percentage CPU differs by virtual machine.
++
+For more examples that use filtering and splitting, see [Metric chart examples](../essentials/metric-chart-samples.md).
+
+## Lock the y-axis range
+
+Locking the range of the value (y) axis becomes important in charts that show small fluctuations of large values. Consider how a drop in the volume of successful requests from 99.99 percent to 99.5 percent might represent a significant reduction in the quality of service. Noticing a small fluctuation in a numeric value would be difficult or even impossible if you're using the default chart settings. In this case, you could lock the lowest boundary of the chart to 99 percent to make a small drop more apparent.
+
+Another example is a fluctuation in the available memory. In this scenario, the value technically never reaches 0. Fixing the range to a higher value might make drops in available memory easier to spot.
+
+1. To control the y-axis range, browse to the advanced chart settings by selecting **More options** (...) > **Chart settings**.
+
+ :::image source="./media/analyze-metrics/select-chart-settings.png" alt-text="Screenshot that shows the menu option for chart settings." lightbox="./media/analyze-metrics/select-chart-settings.png":::
+
+1. Modify the values in the **Y-axis range** section, or select **Auto** to revert to the default values.
+
+ :::image type="content" source="./media/analyze-metrics/chart-settings.png" alt-text="Screenshot that shows the Y-axis range section." lightbox="./media/analyze-metrics/chart-settings.png":::
+
+If you lock the boundaries of the y-axis for a chart that tracks count, sum, minimum, or maximum aggregations over a period of time, specify a fixed time granularity. Don't rely on the automatic defaults.
+
+You choose a fixed time granularity because chart values change when the time granularity is automatically modified after a user resizes a browser window or changes screen resolution. The resulting change in time granularity affects the appearance of the chart, invalidating the selection of the y-axis range.
+
+## Set up alert rules
+
+You can use your visualization criteria to create a metric-based alert rule. The new alert rule includes your chart's target resource, metric, splitting, and filter dimensions. You can modify these settings by using the **Create an alert rule** pane.
+
+1. To create an alert rule, select **New alert rule** in the upper-right corner of the chart.
+
+ :::image source="./media/analyze-metrics/new-alert.png" alt-text="Screenshot that shows the button for creating a new alert rule." lightbox="./media/analyze-metrics/new-alert.png":::
+
+1. Select the **Condition** tab. The **Signal name** entry defaults to the metric from your chart. You can choose a different metric.
+
+1. Enter a number for **Threshold value**. The threshold value is the value that triggers the alert. The **Preview** chart shows the threshold value as a horizontal line over the metric values. When you're ready, select the **Details** tab.
+
+ :::image source="./media/analyze-metrics/alert-rule-condition.png" alt-text="Screenshot that shows the Condition tab on the pane for creating an alert rule." lightbox="./media/analyze-metrics/alert-rule-condition.png":::
+
+1. Enter **Name** and **Description** values for the alert rule.
+
+1. Select a **Severity** level for the alert rule. Severities include **Critical**, **Error**, **Warning**, **Informational**, and **Verbose**.
+
+1. Select **Review + create** to review the alert rule.
+
+ :::image source="./media/analyze-metrics/alert-rule-details.png" alt-text="Screenshot that shows the Details tab on the pane for creating an alert rule." lightbox="./media/analyze-metrics/alert-rule-details.png":::
+
+1. Select **Create** to create the alert rule.
+
+For more information, see [Create, view, and manage metric alerts](../alerts/alerts-metric.md).
+
+## Correlate metrics to logs
+
+In metrics explorer, the **Drill into Logs** feature helps you diagnose the root cause of anomalies in your metric chart. Drilling into logs allows you to correlate spikes in your metric chart to the following types of logs and queries:
+
+- **Activity log**: Provides insight into the operations on each Azure resource in the subscription from the outside (the management plane) and updates on Azure Service Health events. Use the activity log to determine the what, who, and when for any write operations (`PUT`, `POST`, or `DELETE`) taken on the resources in your subscription. There's a single activity log for each Azure subscription.
+- **Diagnostic log**: Provides insight into operations that you performed within an Azure resource (the data plane). Examples include getting a secret from a key vault or making a request to a database. The content of resource logs varies by the Azure service and resource type. You must enable logs for the resource.
+- **Recommended log**: Provides scenario-based queries that you can use to investigate anomalies in metrics explorer.
+
+Currently, **Drill into Logs** is available for select resource providers. Resource providers that offer the complete **Drill into Logs** experience include Azure Application Insights, Autoscale, Azure App Service, and Azure Storage.
+
+1. To diagnose a spike in failed requests, select **Drill into Logs**.
+
+ :::image source="./media/analyze-metrics/drill-into-log-ai.png" alt-text="Screenshot that shows a spike in failures on an Application Insights metrics pane." lightbox="./media/analyze-metrics/drill-into-log-ai.png":::
+
+1. In the dropdown list, select **Failures**.
+
+ :::image source="./media/analyze-metrics/drill-into-logs-dropdown.png" alt-text="Screenshot that shows the dropdown menu for drilling into logs." lightbox="./media/analyze-metrics/drill-into-logs-dropdown.png":::
+
+1. On the custom failure pane, check for failed operations, top exception types, and failed dependencies.
+
+ :::image source="./media/analyze-metrics/ai-failure-blade.png" alt-text="Screenshot of the Application Insights failure pane." lightbox="./media/analyze-metrics/ai-failure-blade.png":::
+
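If you prefer to run the correlation yourself, the log data behind a workspace-based resource can also be queried with a Kusto query. The following is a loose sketch that assumes the `@azure/monitor-query` `LogsQueryClient` and a workspace-based Application Insights resource; the workspace ID, table, and column names are assumptions rather than something this article prescribes.

```typescript
import { DefaultAzureCredential } from "@azure/identity";
import { LogsQueryClient, LogsQueryResultStatus } from "@azure/monitor-query";

const logsClient = new LogsQueryClient(new DefaultAzureCredential());

async function drillIntoFailures(): Promise<void> {
  // Placeholder workspace ID; use the Log Analytics workspace that backs your resource.
  const workspaceId = "<workspace-guid>";

  const result = await logsClient.queryWorkspace(
    workspaceId,
    "AppRequests | where Success == false | summarize FailedCount = count() by ResultCode",
    { duration: "P1D" } // align the query window with the spike you saw on the chart
  );

  if (result.status === LogsQueryResultStatus.Success) {
    for (const table of result.tables) {
      console.log(table.rows);
    }
  }
}

drillIntoFailures().catch(console.error);
```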
+## Share your charts
+
+After you configure a chart, you can add it to a dashboard or workbook. By adding a chart to a dashboard or workbook, you can make it accessible to your team. You can also gain insights by viewing it in the context of other monitoring information.
+
+- To pin a configured chart to a dashboard, in the upper-right corner of the chart, select **Save to dashboard** > **Pin to dashboard**.
+
+- To save a configured chart to a workbook, in the upper-right corner of the chart, select **Save to dashboard** > **Save to workbook**.
++
+The Azure Monitor metrics explorer **Share** menu includes several options for sharing your metric chart.
+
+- Use the **Download to Excel** option to immediately download your chart.
+
+- Choose the **Copy link** option to add a link to your chart to the clipboard. You receive a notification when the link is copied successfully.
+
+- In the **Send to Workbook** window, send your chart to a new or existing workbook.
+
+- In the **Pin to Grafana** window, pin your chart to a new or existing Grafana dashboard.
++
+## Frequently asked questions
+
+This section provides answers to common questions.
+
+### Why are metrics from the guest OS of my Azure virtual machine not showing up in metrics explorer?
+
+[Platform metrics](./monitor-azure-resource.md#monitoring-data) are collected automatically for Azure resources. You must perform some configuration, though, to collect metrics from the guest OS of a virtual machine. For a Windows VM, install the diagnostic extension and configure the Azure Monitor sink as described in [Install and configure Azure Diagnostics extension for Windows (WAD)](../agents/diagnostics-extension-windows-install.md). For Linux, install the Telegraf agent as described in [Collect custom metrics for a Linux VM with the InfluxData Telegraf agent](./collect-custom-metrics-linux-telegraf.md).
+
+## Next steps
+
+- [Troubleshoot metrics explorer](metrics-troubleshoot.md)
+- [Review available metrics for Azure services](./metrics-supported.md)
+- [Explore examples of configured charts](../essentials/metric-chart-samples.md)
+- [Create custom KPI dashboards](../app/tutorial-app-dashboards.md)
azure-monitor App Insights Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/app-insights-metrics.md
Application Insights log-based metrics let you analyze the health of your monito
* [Log-based metrics](../app/pre-aggregated-metrics-log-metrics.md#log-based-metrics) behind the scene are translated into [Kusto queries](/azure/kusto/query/) from stored events. * [Standard metrics](../app/pre-aggregated-metrics-log-metrics.md#pre-aggregated-metrics) are stored as pre-aggregated time series.
-Since *standard metrics* are pre-aggregated during collection, they have better performance at query time. This makes them a better choice for dashboarding and in real-time alerting. The *log-based metrics* have more dimensions, which makes them the superior option for data analysis and ad-hoc diagnostics. Use the [namespace selector](./metrics-getting-started.md#create-your-first-metric-chart) to switch between log-based and standard metrics in [metrics explorer](./metrics-getting-started.md).
+Since *standard metrics* are pre-aggregated during collection, they have better performance at query time. This makes them a better choice for dashboarding and in real-time alerting. The *log-based metrics* have more dimensions, which makes them the superior option for data analysis and ad-hoc diagnostics. Use the [namespace selector](./metrics-custom-overview.md#namespace) to switch between log-based and standard metrics in [metrics explorer](./analyze-metrics.md).
## Interpret and use queries from this article This article lists metrics with supported aggregations and dimensions. The details about log-based metrics include the underlying Kusto query statements. For convenience, each query uses defaults for time granularity, chart type, and sometimes splitting dimension which simplifies using the query in Log Analytics without any need for modification.
-When you plot the same metric in [metrics explorer](./metrics-getting-started.md), there are no defaults - the query is dynamically adjusted based on your chart settings:
+When you plot the same metric in [metrics explorer](./analyze-metrics.md), there are no defaults - the query is dynamically adjusted based on your chart settings:
- The selected **Time range** is translated into an additional *where timestamp...* clause to pick only the events from the selected time range. For example, for a chart showing data for the most recent 24 hours, the query includes *| where timestamp > ago(24h)*.
azure-monitor Data Platform Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-platform-metrics.md
Use [Metrics Explorer](metrics-charts.md) to interactively analyze the data in y
![Screenshot that shows an example graph in Metrics Explorer that displays server requests, server response time, and failed requests.](media/data-platform-metrics/metrics-explorer.png)
-For more information, see [Getting started with Azure Monitor Metrics Explorer](./metrics-getting-started.md).
+For more information, see [Analyze metrics with Azure Monitor metrics explorer](./analyze-metrics.md).
## Data structure
One of the challenges to metric data is that it often has limited information to
Metric dimensions are name/value pairs that carry more data to describe the metric value. For example, a metric called _Available disk space_ might have a dimension called _Drive_ with values _C:_ and _D:_. That dimension would allow viewing available disk space across all drives or for each drive individually.
-See [Apply dimension filters and splitting](metrics-getting-started.md?#apply-dimension-filters-and-splitting) for details on viewing metric dimensions in metrics explorer.
+See [Apply dimension filters and splitting](analyze-metrics.md?#use-dimension-filters-and-splitting) for details on viewing metric dimensions in metrics explorer.
### Nondimensional metric The following table shows sample data from a nondimensional metric, network throughput. It can only answer a basic question like "What was my network throughput at a given time?"
azure-monitor Metrics Aggregation Explained https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-aggregation-explained.md
Let's define a few terms clearly first:
## Summary of process
-Metrics are a series of values stored with a time-stamp. In Azure, most metrics are stored in the Azure Metrics time-series database. When you plot a chart, the values of the selected metrics are retrieved from the database and then separately aggregated based on the chosen time granularity (also known as time grain). You select the size of the time granularity using the [Metrics Explorer time picker panel](../essentials/metrics-getting-started.md#select-a-time-range). If you don't make an explicit selection, the time granularity is automatically selected based on the currently selected time range. Once selected, the metric values that were captured during each time granularity interval are aggregated and placed onto the chart - one datapoint per interval.
+Metrics are a series of values stored with a time-stamp. In Azure, most metrics are stored in the Azure Metrics time-series database. When you plot a chart, the values of the selected metrics are retrieved from the database and then separately aggregated based on the chosen time granularity (also known as time grain). You select the size of the time granularity using the [metrics explorer time picker](../essentials/analyze-metrics.md#configure-the-time-range). If you don't make an explicit selection, the time granularity is automatically selected based on the currently selected time range. Once selected, the metric values that were captured during each time granularity interval are aggregated and placed onto the chart - one datapoint per interval.
## Aggregation types
You can also see that the NULLs give a better calculation of average than if zer
## Next steps -- [Getting started with metrics explorer](../essentials/metrics-getting-started.md)-- [Advanced Metrics explorer](../essentials/metrics-charts.md)
+- [Analyze metrics with Azure Monitor metrics explorer](../essentials/analyze-metrics.md)
azure-monitor Metrics Charts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-charts.md
- Title: Advanced features of Metrics Explorer in Azure Monitor
-description: Learn how to use Metrics Explorer to investigate the health and usage of resources.
---- Previously updated : 07/20/2023----
-# Advanced features of Metrics Explorer in Azure Monitor
-
-In Azure Monitor, [metrics](data-platform-metrics.md) are a series of measured values and counts that are collected and stored over time. Metrics can be standard (also called *platform*) or custom.
-
-The Azure platform provides standard metrics. These metrics reflect the health and usage statistics of your Azure resources.
-
-This article describes advanced features of Metrics Explorer in Azure Monitor. It assumes that you're familiar with basic features of Metrics Explorer. If you're a new user and want to learn how to create your first metric chart, see [Get started with Metrics Explorer](./metrics-getting-started.md).
-
-## Resource scope picker
-
-Use the resource scope picker to view metrics across single resources and multiple resources.
-
-### Select a single resource
-
-1. In the Azure portal, select **Metrics** from the **Monitor** menu or from the **Monitoring** section of a resource's menu.
-
-1. Choose **Select a scope**.
-
- :::image source="./media/metrics-charts/scope-picker.png" alt-text="Screenshot that shows the button that opens the resource scope picker." lightbox="./media/metrics-charts/scope-picker.png":::
-
-1. Use the scope picker to select the resources whose metrics you want to see. If you opened Metrics Explorer from a resource's menu, the scope should be populated.
-
- For some resources, you can view only one resource's metrics at a time. On the **Resource types** menu, these resources are in the **All resource types** section.
-
- :::image source="./media/metrics-charts/single-resource-scope.png" alt-text="Screenshot that shows available resources." lightbox="./media/metrics-charts/single-resource-scope.png":::
-
-1. Select a resource. All subscriptions and resource groups that contain that resource appear.
-
- :::image source="./media/metrics-charts/available-single-resource.png" alt-text="Screenshot that shows a single resource." lightbox="./media/metrics-charts/available-single-resource.png":::
-
- If you want the capability to view the metrics for multiple resources at the same time, or to view metrics across a subscription or resource group, select **Upvote**.
-
-1. When you're satisfied with your selection, select **Apply**.
-
-### Select multiple resources
-
-Some resource types can query for metrics over multiple resources. The resources must be within the same subscription and location. Find these resource types at the top of the **Resource types** menu. For more information, see [Select multiple resources](./metrics-dynamic-scope.md#select-multiple-resources).
--
-For types that are compatible with multiple resources, you can query for metrics across a subscription or multiple resource groups. For more information, see [Select a resource group or subscription](./metrics-dynamic-scope.md#select-a-resource-group-or-subscription).
-
-## Multiple metric lines and charts
-
-In Metrics Explorer, you can create charts that plot multiple metric lines or show multiple metric charts at the same time. This functionality allows you to:
--- Correlate related metrics on the same graph to see how one value relates to another.-- Display metrics that use different units of measure in close proximity.-- Visually aggregate and compare metrics from multiple resources.-
-For example, imagine that you have five storage accounts, and you want to know how much space they consume together. You can create a stacked area chart that shows the individual values and the sum of all the values at points in time.
-
-### Multiple metrics on the same chart
-
-To view multiple metrics on the same chart, first [create a new chart](./metrics-getting-started.md#create-your-first-metric-chart). Then select **Add metric**. Repeat this step to add another metric on the same chart.
--
-Typically, your charts shouldn't mix metrics that use different units of measure. For example, avoid mixing one metric that uses milliseconds with another that uses kilobytes. Also avoid mixing metrics whose scales differ significantly. In these cases, consider using multiple charts instead.
-
-### Multiple charts
-
-To create another chart that uses a different metric, select **New chart**.
-
-To reorder or delete multiple charts, select the ellipsis (**...**) button to open the chart menu. Then select **Move up**, **Move down**, or **Delete**.
--
-## Time range controls
-
-In addition to changing the time range by using the [time picker panel](metrics-getting-started.md#select-a-time-range), you can pan and zoom by using the controls in the chart area.
-
-### Pan
-
-To pan, select the left and right arrows at the edge of the chart. The arrow control moves the selected time range back and forward by one half of the chart's time span. For example, if you're viewing the past 24 hours, selecting the left arrow causes the time range to shift to span a day and a half to 12 hours ago.
-
-Most metrics support 93 days of retention but let you view only 30 days at a time. By using the pan controls, you look at the past 30 days and then easily go back 15 days at a time to view the rest of the retention period.
--
-### Zoom
-
-You can select and drag on the chart to zoom in to a section of a chart. Zooming updates the chart's time range to span your selection. If the time grain is set to **Automatic**, zooming selects a smaller time grain. The new time range applies to all charts in Metrics Explorer.
--
-## Aggregation
-
-When you add a metric to a chart, Metrics Explorer applies a default aggregation. The default makes sense in basic scenarios, but you can use a different aggregation to gain more insights about the metric.
-
-Before you use different aggregations on a chart, you should understand how Metrics Explorer handles them. Metrics are a series of measurements (or "metric values") that are captured over a time period. When you plot a chart, the values of the selected metric are separately aggregated over the *time granularity*.
-
-You select the size of the time grain by using the time picker panel in Metrics Explorer. If you don't explicitly select the time grain, Metrics Explorer uses the currently selected time range by default. After Metrics Explorer determines the time grain, the metric values that it captures during each time grain are aggregated on the chart, one data point per time grain.
-
-For example, suppose a chart shows the *Server response time* metric. It uses the average aggregation over the time span of the last 24 hours.
--
-In this example:
--- If you set the time granularity to 30 minutes, Metrics Explorer draws the chart from 48 aggregated data points. That is, it uses two data points per hour for 24 hours. The line chart connects 48 dots in the chart plot area. Each data point represents the average of all captured response times for server requests that occurred during each of the relevant 30-minute time periods.-- If you switch the time granularity to 15 minutes, you get 96 aggregated data points. That is, you get four data points per hour for 24 hours.-
-Metrics Explorer has five aggregation types:
--- **Sum**: The sum of all values captured during the aggregation interval. The sum aggregation is sometimes called the *total* aggregation.-- **Count**: The number of measurements captured during the aggregation interval.-
- When the metric is always captured with the value of 1, the count aggregation is equal to the sum aggregation. This scenario is common when the metric tracks the count of distinct events and each measurement represents one event. The code emits a metric record every time a new request arrives.
-- **Average**: The average of the metric values captured during the aggregation interval.-- **Min**: The smallest value captured during the aggregation interval.-- **Max**: The largest value captured during the aggregation interval.--
-Metrics Explorer hides the aggregations that are irrelevant and can't be used.
-
-For a deeper discussion of how metric aggregation works, see [Azure Monitor metrics aggregation and display explained](metrics-aggregation-explained.md).
-
-## Filters
-
-You can apply filters to charts whose metrics have dimensions. For example, imagine a *Transaction count* metric that has a *Response type* dimension. This dimension indicates whether the response from transactions succeeded or failed. If you filter on this dimension, Metrics Explorer displays a chart line for only successful or only failed transactions.
-
-### Add a filter
-
-1. Above the chart, select **Add filter**.
-
-1. Select a dimension from the **Property** dropdown list.
-
- :::image type="content" source="./media/metrics-charts/filter-property.png" alt-text="Screenshot that shows the dropdown list for filter properties." lightbox="./media/metrics-charts/filter-property.png":::
-
-1. Select the operator that you want to apply against the dimension (property). The default operator is equals (**=**).
-
- :::image type="content" source="./media/metrics-charts/filter-operator.png" alt-text="Screenshot that shows the operator that you can use with the filter." lightbox="./media/metrics-charts/filter-operator.png":::
-
-1. Select which dimension values you want to apply to the filter when you're plotting the chart. This example shows filtering out the successful storage transactions.
-
- :::image type="content" source="./media/metrics-charts/filter-values.png" alt-text="Screenshot that shows the dropdown list for filter values." lightbox="./media/metrics-charts/filter-values.png":::
-
-1. After you select the filter values, click away from the filter selector to close it. The chart shows how many storage transactions have failed.
-
- :::image type="content" source="./media/metrics-charts/filtered-chart.png" alt-text="Screenshot that shows the successful filtered storage transactions." lightbox="./media/metrics-charts/filtered-chart.png":::
-
-1. Repeat these steps to apply multiple filters to the same charts.
-
-## Metric splitting
-
-You can split a metric by dimension to visualize how different segments of the metric compare. Splitting can also help you identify the outlying segments of a dimension.
-
-### Apply splitting
-
-1. Above the chart, select **Apply splitting**.
-
-1. Choose dimensions on which to segment your chart.
-
- :::image type="content" source="./media/metrics-charts/apply-splitting.png" alt-text="Screenshot that shows the selected dimension on which to segment the chart." lightbox="./media/metrics-charts/apply-splitting.png":::
-
- The chart shows multiple lines, one for each dimension segment.
-
- :::image type="content" source="./media/metrics-charts/segment-dimension.png" alt-text="Screenshot that shows multiple lines, one for each segment of dimension." lightbox="./media/metrics-charts/segment-dimension.png":::
-
-1. Choose a limit on the number of values to be displayed after you split by the selected dimension. The default limit is 10, as shown in the preceding chart. The range of the limit is 1 to 50.
-
- :::image type="content" source="./media/metrics-charts/segment-dimension-limit.png" alt-text="Screenshot that shows the split limit, which restricts the number of values after splitting." lightbox="./media/metrics-charts/segment-dimension-limit.png":::
-
-1. Choose the sort order on segments: **Descending** (default) or **Ascending**.
-
- :::image type="content" source="./media/metrics-charts/segment-dimension-sort.png" alt-text="Screenshot that shows the sort order on split values." lightbox="./media/metrics-charts/segment-dimension-sort.png":::
-
-1. Segment by multiple segments by selecting multiple dimensions from the **Values** dropdown list. The legend shows a comma-separated list of dimension values for each segment.
-
- :::image type="content" source="./media/metrics-charts/segment-dimension-multiple.png" alt-text="Screenshot that shows multiple segments selected, and the corresponding chart." lightbox="./media/metrics-charts/segment-dimension-multiple.png":::
-
-1. Click away from the grouping selector to close it.
-
-> [!TIP]
-> To hide segments that are irrelevant for your scenario and to make your charts easier to read, use both filtering and splitting on the same dimension.
-
-## Locking the range of the y-axis
-
-Locking the range of the value (y) axis becomes important in charts that show small fluctuations of large values.
-
-For example, a drop in the volume of successful requests from 99.99 percent to 99.5 percent might represent a significant reduction in the quality of service. Noticing a small fluctuation in a numeric value would be difficult or even impossible if you're using the default chart settings. In this case, you could lock the lowest boundary of the chart to 99 percent to make a small drop more apparent.
-
-Another example is a fluctuation in the available memory. In this scenario, the value technically never reaches 0. Fixing the range to a higher value might make drops in available memory easier to spot.
-
-To control the y-axis range:
-
-1. Open the chart menu by selecting the ellipsis (**...**). Then select **Chart settings** to access advanced chart settings.
-
- :::image source="./media/metrics-charts/select-chart-settings.png" alt-text="Screenshot that shows the menu option for chart settings." lightbox="./media/metrics-charts/select-chart-settings.png":::
-
-1. Modify the values in the **Y-axis range** section, or select **Auto** to revert to the default values.
-
- :::image type="content" source="./media/metrics-charts/chart-settings.png" alt-text="Screenshot that shows the Y-axis range section." lightbox="./media/metrics-charts/chart-settings.png":::
-
-If you lock the boundaries of the y-axis for a chart that tracks count, sum, minimum, or maximum aggregations over a period of time, specify a fixed time granularity. Don't rely on the automatic defaults.
-
-You choose a fixed time granularity because chart values change when the time granularity is automatically modified after a user resizes a browser window or changes screen resolution. The resulting change in time granularity affects the appearance of the chart, invalidating the selection of the y-axis range.
-
-## Line colors
-
-Chart lines are automatically assigned a color from a default palette.
-
-To change the color of a chart line, select the colored bar in the legend that corresponds to the line on the chart. Use the color picker to select the line color.
--
-Customized colors are preserved when you pin the chart to a dashboard. The following section shows how to pin a chart.
-
-## Saving to dashboards or workbooks
-
-After you configure a chart, you can add it to a dashboard or workbook. By adding a chart to a dashboard or workbook, you can make it accessible to your team. You can also gain insights by viewing it in the context of other monitoring information.
--- To pin a configured chart to a dashboard, in the upper-right corner of the chart, select **Save to dashboard** > **Pin to dashboard**.-- To save a configured chart to a workbook, in the upper-right corner of the chart, select **Save to dashboard** > **Save to workbook**.--
-## Alert rules
-
-You can use your visualization criteria to create a metric-based alert rule. The new alert rule includes your chart's target resource, metric, splitting, and filter dimensions. You can modify these settings by using the **Create an alert rule** pane.
-
-To create an alert rule:
-
-1. Select **New alert rule** in the upper-right corner of the chart.
-
- :::image source="./media/metrics-charts/new-alert.png" alt-text="Screenshot that shows the button for creating a new alert rule." lightbox="./media/metrics-charts/new-alert.png":::
-
-1. Select the **Condition** tab. The **Signal name** entry defaults to the metric from your chart. You can choose a different metric.
-
-1. Enter a number for **Threshold value**. The threshold value is the value that triggers the alert. The **Preview** chart shows the threshold value as a horizontal line over the metric values. When you're ready, select the **Details** tab.
-
- :::image source="./media/metrics-charts/alert-rule-condition.png" alt-text="Screenshot that shows the Condition tab on the pane for creating an alert rule." lightbox="./media/metrics-charts/alert-rule-condition.png":::
-
-1. Enter **Name** and **Description** values for the alert rule.
-
-1. Select a **Severity** level for the alert rule. Severities include **Critical**, **Error Warning**, **Informational**, and **Verbose**.
-
-1. Select **Review + create** to review the alert rule.
-
- :::image source="./media/metrics-charts/alert-rule-details.png" alt-text="Screenshot that shows the Details tab on the pane for creating an alert rule." lightbox="./media/metrics-charts/alert-rule-details.png":::
-
-1. Select **Create** to create the alert rule.
-
-For more information, see [Create, view, and manage metric alerts](../alerts/alerts-metric.md).
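If you prefer scripting to the portal steps above, a comparable metric alert rule can be created with the Azure CLI. This is only a sketch: the alert name, resource group, VM resource ID, and threshold are example values, not taken from this article.

```azurecli
# Create a metric alert that fires when average CPU stays above 80 percent
# over a 5-minute window, evaluated every minute. Adjust names and IDs.
az monitor metrics alert create \
  --name "HighCpuAlert" \
  --resource-group myResourceGroup \
  --scopes "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM" \
  --condition "avg Percentage CPU > 80" \
  --window-size 5m \
  --evaluation-frequency 1m \
  --severity 2 \
  --description "Alert when average CPU is above 80%"
```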
-
-## Correlating metrics to logs
-
-In Metrics Explorer, **Drill into Logs** helps you diagnose the root cause of anomalies in your metric chart. Drilling into logs allows you to correlate spikes in your metric chart to the following types of logs and queries:
-
-| Term | Definition |
-||-|
-| Activity log | Provides insight into the operations on each Azure resource in the subscription from the outside (the management plane), in addition to updates on Azure Service Health events. Use the activity log to determine the what, who, and when for any write operations (`PUT`, `POST`, or `DELETE`) taken on the resources in your subscription. There's a single activity log for each Azure subscription. |
-| Diagnostic log | Provides insight into operations that you performed within an Azure resource (the data plane). Examples include getting a secret from a key vault or making a request to a database. The content of resource logs varies by the Azure service and resource type. You must enable logs for the resource. |
-| Recommended log | Provides scenario-based queries that you can use to investigate anomalies in Metrics Explorer. |
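The activity log described in this table can also be pulled from the Azure CLI when you want to correlate a metric spike with recent management operations outside the portal. A minimal sketch, with a placeholder resource group name:

```azurecli
# List activity log entries for a resource group over the last 24 hours,
# keeping only the operation name, caller, and timestamp.
az monitor activity-log list \
  --resource-group myResourceGroup \
  --offset 24h \
  --query "[].{operation:operationName.value, caller:caller, time:eventTimestamp}" \
  --output table
```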
-
-Currently, **Drill into Logs** is available for select resource providers. The following resource providers offer the complete **Drill into Logs** experience:
-
-- Application Insights
-- Autoscale
-- Azure App Service
-- Azure Storage
-
-To diagnose a spike in failed requests:
-
-1. Select **Drill into Logs**.
-
- :::image source="./media/metrics-charts/drill-into-log-ai.png" alt-text="Screenshot that shows a spike in failures on an Application Insights metrics pane." lightbox="./media/metrics-charts/drill-into-log-ai.png":::
-
-1. In the dropdown list, select **Failures**.
-
- :::image source="./media/metrics-charts/drill-into-logs-dropdown.png" alt-text="Screenshot that shows the dropdown menu for drilling into logs." lightbox="./media/metrics-charts/drill-into-logs-dropdown.png":::
-
-1. On the custom failure pane, check for failed operations, top exception types, and failed dependencies.
-
- :::image source="./media/metrics-charts/ai-failure-blade.png" alt-text="Screenshot of the Application Insights failure pane." lightbox="./media/metrics-charts/ai-failure-blade.png":::
-
-## Next steps
-
-To create actionable dashboards by using metrics, see [Create custom KPI dashboards](../app/overview-dashboard.md#create-custom-kpi-dashboards-using-application-insights).
azure-monitor Metrics Custom Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-custom-overview.md
After custom metrics are submitted to Azure Monitor, you can browse through them
1. Select the metrics namespace for your custom metric.
1. Select the custom metric.
-For more information on viewing metrics in the Azure portal, see [Getting started with Azure Metrics Explorer](./metrics-getting-started.md).
+For more information on viewing metrics in the Azure portal, see [Analyze metrics with Azure Monitor metrics explorer](./analyze-metrics.md).
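You can also confirm that a custom metric arrived by querying it from the Azure CLI instead of the portal. The namespace and metric names below are placeholders for your own custom metric, so treat this as a sketch.

```azurecli
# List the metric namespaces available on a resource, including custom namespaces.
az monitor metrics list-namespaces --resource <resource-id>

# Retrieve values for a custom metric from its custom namespace.
az monitor metrics list \
  --resource <resource-id> \
  --metric "QueueDepth" \
  --namespace "queueprocessing" \
  --aggregation Average \
  --interval PT1M
```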
## Supported regions
azure-monitor Metrics Dynamic Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-dynamic-scope.md
- Title: View multiple resources in the Azure metrics explorer
-description: Learn how to visualize multiple resources by using the Azure metrics explorer.
--- Previously updated : 12/14/2020--
-# View multiple resources in the Azure metrics explorer
-
-The resource scope picker allows you to view metrics across multiple resources that are within the same subscription and region. This article explains how to view multiple resources by using the Azure metrics explorer feature of Azure Monitor.
-
-## Select a resource
-
-Select **Metrics** from the **Azure Monitor** menu or from the **Monitoring** section of a resource's menu. Then choose **Select a scope** to open the scope picker.
-
-Use the scope picker to select the resources whose metrics you want to see. The scope should be populated if you opened the metrics explorer from a resource's menu.
-
-![Screenshot showing how to open the resource scope picker.](./media/metrics-dynamic-scope/019.png)
-
-## Select multiple resources
-
-Some resource types can query for metrics over multiple resources. The metrics must be within the same subscription and location. Find these resource types at the top of the **Resource types** menu.
-
-![Screenshot that shows a menu of resources that are compatible with multiple resources.](./media/metrics-dynamic-scope/020.png)
-
-> [!WARNING]
-> You must have Monitoring Reader permission at the subscription level to visualize metrics across multiple resources, resource groups, or a subscription. For more information, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
-
-To visualize metrics over multiple resources, start by selecting multiple resources within the resource scope picker.
-
-![Screenshot that shows how to select multiple resources.](./media/metrics-dynamic-scope/021.png)
-
-> [!NOTE]
-> The resources you select must be within the same resource type, location, and subscription. Resources that don't fit these criteria aren't selectable.
-
-When you finish, choose **Apply** to save your selections.
-
-## Select a resource group or subscription
-
-> [!WARNING]
-> You must have Monitoring Reader permission at the subscription level to visualize metrics across multiple resources, resource groups, or a subscription.
-
-For types that are compatible with multiple resources, you can query for metrics across a subscription or multiple resource groups. Start by selecting a subscription or one or more resource groups:
-
-![Screenshot that shows how to query across multiple resource groups.](./media/metrics-dynamic-scope/022.png)
-
-Select a resource type and location.
-
-![Screenshot that shows the selected resource groups.](./media/metrics-dynamic-scope/023.png)
-
-You can expand the selected scopes to verify the resources your selections apply to.
-
-![Screenshot that shows the selected resources within the groups.](./media/metrics-dynamic-scope/024.png)
-
-When you finish selecting scopes, select **Apply**.
-
-## Split and filter by resource group or resources
-
-After plotting your resources, you can use splitting and filtering to gain more insight into your data.
-
-Splitting allows you to visualize how different segments of the metric compare with each other. For instance, when you plot a metric for multiple resources, you can choose **Apply splitting** to split by resource ID or resource group. The split allows you to compare a single metric across multiple resources or resource groups.
-
-For example, the following chart shows the percentage CPU across nine VMs. When you split by resource ID, you see how percentage CPU differs by VM.
-
-![Screenshot that shows how to use splitting to see the percentage CPU across VMs.](./media/metrics-dynamic-scope/026.png)
-
-Along with splitting, you can use filtering to display only the resource groups that you want to see. For instance, to view the percentage CPU for VMs for a certain resource group, you can select **Add filter** to filter by resource group.
-
-In this example, we filter by TailspinToysDemo. Here, the filter removes metrics associated with resources in TailspinToys.
-
-![Screenshot that shows how to filter by resource group.](./media/metrics-dynamic-scope/027.png)
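Outside the portal, there's no single CLI call that charts one metric across many resources, but you can approximate the comparison by querying the same metric for each resource. A minimal sketch, assuming Bash and placeholder VM resource IDs:

```azurecli
# Query the same metric for several VMs so the results can be compared side by side.
# The resource IDs are placeholders; substitute your own.
for vmId in \
  "/subscriptions/<sub>/resourceGroups/TailspinToysDemo/providers/Microsoft.Compute/virtualMachines/vm1" \
  "/subscriptions/<sub>/resourceGroups/TailspinToysDemo/providers/Microsoft.Compute/virtualMachines/vm2"
do
  az monitor metrics list \
    --resource "$vmId" \
    --metric "Percentage CPU" \
    --aggregation Average \
    --interval PT15M \
    --output table
done
```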
-
-## Pin multiple-resource charts
-
-Multiple-resource charts that visualize metrics across resource groups and subscriptions require the user to have *Monitoring Reader* permission at the subscription level. Ensure that all users of the dashboards to which you pin multiple-resource charts have sufficient permissions. For more information, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
-
-To pin your multiple-resource chart to a dashboard, see [Saving to dashboards or workbooks](../essentials/metrics-charts.md#saving-to-dashboards-or-workbooks).
-
-## Next steps
-
-* [Troubleshoot the metrics explorer](../essentials/metrics-troubleshoot.md)
-* [See a list of available metrics for Azure services](./metrics-supported.md)
-* [See examples of configured charts](../essentials/metric-chart-samples.md)
azure-monitor Metrics Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-getting-started.md
- Title: Get started with Azure Monitor metrics explorer
-description: Learn how to create your first metric chart with Azure Monitor metrics explorer.
---- Previously updated : 07/20/2023---
-# Get started with metrics explorer
-
-Azure Monitor metrics explorer is a component of the Azure portal that you can use to plot charts, visually correlate trends, and investigate spikes and dips in metrics' values. Use metrics explorer to investigate the health and utilization of your resources.
--
-## Create a metrics chart
-
-Create a metric chart using the following steps:
-
-- Open the Metrics explorer
-
-- [Pick a resource and a metric](#create-your-first-metric-chart) and you see a basic chart.
-
-- [Select a time range](#select-a-time-range) that's relevant for your investigation.
-
-- [Apply dimension filters and splitting](#apply-dimension-filters-and-splitting). The filters and splitting allow you to analyze which segments of the metric contribute to the overall metric value and identify possible outliers.
-
-- Use [advanced settings](#advanced-chart-settings) to customize the chart before you pin it to dashboards. [Configure alerts](../alerts/alerts-metric-overview.md) to receive notifications when the metric value exceeds or drops below a threshold.
-- Add more metrics to the chart. You can also [view multiple resources in the same view](./metrics-dynamic-scope.md).
-
-## Create your first metric chart
-
-1. Open the metrics explorer from the monitor overview page, or from the monitoring section of any resource.
-
- :::image type="content" source="./media/metrics-getting-started/metrics-menu.png" alt-text="A screenshot showing the monitoring page menu.":::
-
-1. Select the **Select a scope** button to open the resource scope picker. You can use the picker to select the resource you want to see metrics for. When you opened the metrics explorer from the resource's menu, the scope is populated for you.
-
- To learn how to view metrics across multiple resources, see [View multiple resources in Azure Monitor metrics explorer](./metrics-dynamic-scope.md).
-
- > ![Screenshot that shows selecting a resource.](./media/metrics-getting-started/scope-picker.png)
-
-1. For some resources, you must pick a namespace. The namespace is a way to organize metrics so that you can easily find them. For example, storage accounts have separate namespaces for storing metrics for files, tables, blobs, and queues. Most resource types have only one namespace.
-
-1. Select a metric from a list of available metrics.
-
-1. Select the metric aggregation. Available aggregations include minimum, maximum, the average value of the metric, or a count of the number of samples. For more information on aggregations, see [Advanced features of Metrics Explorer](../essentials/metrics-charts.md#aggregation).
-
- [ ![Screenshot that shows selecting a metric.](./media/metrics-getting-started/metrics-dropdown.png) ](./media/metrics-getting-started/metrics-dropdown.png#lightbox)
---
-> [!TIP]
-> Select **Add metric** and repeat these steps to see multiple metrics plotted in the same chart. For multiple charts in one view, select **Add chart**.
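The same metric and aggregation choices can be reproduced from the Azure CLI when you want the raw numbers behind the chart. A sketch with a placeholder resource ID:

```azurecli
# Retrieve a metric with several aggregations at a 5-minute grain.
az monitor metrics list \
  --resource <resource-id> \
  --metric "Percentage CPU" \
  --aggregation Average Minimum Maximum \
  --interval PT5M \
  --output table
```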
-
-## Select a time range
-
-> [!NOTE]
-> [Most metrics in Azure are stored for 93 days](../essentials/data-platform-metrics.md#retention-of-metrics). You can query no more than 30 days' worth of data on any single chart. You can [pan](metrics-charts.md#pan) the chart to view the full retention. The 30-day limitation doesn't apply to [log-based metrics](../app/pre-aggregated-metrics-log-metrics.md#log-based-metrics).
-
-By default, the chart shows the most recent 24 hours of metrics data. Use the **time picker** panel to change the time range, zoom in, or zoom out on your chart.
-
-[ ![Screenshot that shows changing the time range panel.](./media/metrics-getting-started/time.png) ](./media/metrics-getting-started/time.png#lightbox)
-
-> [!TIP]
-> Use the **time brush** to investigate an interesting area of the chart like a spike or a dip. Select an area on the chart and the chart zooms in to show more detail for the selected area.
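When you need the same window from a script, the time range and granularity map to explicit CLI parameters. The dates below are examples only:

```azurecli
# Query a fixed 24-hour window at a one-hour grain instead of relying on defaults.
az monitor metrics list \
  --resource <resource-id> \
  --metric "Percentage CPU" \
  --start-time 2023-10-29T00:00:00Z \
  --end-time 2023-10-30T00:00:00Z \
  --interval PT1H \
  --output table
```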
--
-## Apply dimension filters and splitting
-
-[Filtering](../essentials/metrics-charts.md#filters) and [splitting](../essentials/metrics-charts.md#apply-splitting) are powerful diagnostic tools for metrics that have dimensions. These features show how various metric segments or dimensions affect the overall value of the metric. You can use them to identify possible outliers. For example:
-
-- **Filtering** lets you choose which dimension values are included in the chart. For example, you might want to show successful requests when you chart the *server response time* metric. You apply the filter on the *success of request* dimension.
-- **Splitting** controls whether the chart displays separate lines for each value of a dimension or aggregates the values into a single line. For example, you can see one line for an average CPU usage across all server instances, or you can see separate lines for each server. The following image shows splitting a Virtual Machine Scale Set to see each virtual machine separately.
-
-For examples that have filtering and splitting applied, see [Metric chart examples](../essentials/metric-chart-samples.md). The article shows the steps that were used to configure the charts.
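The `--filter` parameter of the Azure CLI offers a rough command-line counterpart: a specific dimension value restricts the series (filtering), and `'*'` returns one series per dimension value (splitting). The dimension name here is an assumption; use `az monitor metrics list-definitions` to see which dimensions your metric supports.

```azurecli
# Splitting: one time series per API name on a storage account's Transactions metric.
az monitor metrics list \
  --resource <storage-account-id> \
  --metric "Transactions" \
  --filter "ApiName eq '*'"

# Filtering: only the GetBlob API.
az monitor metrics list \
  --resource <storage-account-id> \
  --metric "Transactions" \
  --filter "ApiName eq 'GetBlob'"
```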
-
-## Share your metric chart
-
-Share your metric chart in any of the following ways:
-
-+ Download to Excel. Select **Share** > **Download to Excel**. Your download starts immediately.
-+ Share a link. Select **Share** > **Copy link**. You should get a notification that the link was copied successfully.
-+ Send to workbook. Select **Share** > **Send to Workbook**. In the **Send to Workbook** window, you can send the metric chart to a new or existing workbook.
-+ Pin to Grafana. Select **Share** > **Pin to Grafana**. In the **Pin to Grafana** window, you can send the metric chart to a new or existing Grafana dashboard.
--
-## Advanced chart settings
-
-You can customize the chart style and title, and modify advanced chart settings. When you're finished with customization, pin the chart to a dashboard or save it to a workbook. You can also configure metrics alerts. See [Advanced features of Metrics Explorer](../essentials/metrics-charts.md) to learn about these and other advanced features of Azure Monitor metrics explorer.
-
-## Next steps
-
-* [Learn about advanced features of metrics explorer](../essentials/metrics-charts.md)
-* [Viewing multiple resources in metrics explorer](./metrics-dynamic-scope.md)
-* [Troubleshooting metrics explorer](metrics-troubleshoot.md)
-* [See a list of available metrics for Azure services](./metrics-supported.md)
-* [See examples of configured charts](../essentials/metric-chart-samples.md)
azure-monitor Cross Workspace Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/cross-workspace-queries.md
For either implicit or explicit cross-workspace queries, you need to specify the
- Workspace ID - GUID string
- Azure Resource ID - string with format /subscriptions/\<subscriptionId\>/resourceGroups/\<resourceGroup\>/providers/microsoft.operationalinsights/workspaces/\<workspaceName\>
+> [!NOTE]
+> We strongly recommend identifying a workspace by its unique Workspace ID or Azure Resource ID because they remove ambiguity and are more performant.
+ ## Implicit cross workspace queries For implicit syntax, specify the workspaces that you want to include in your query scope. The API performs a single query over each application provided in your list. The syntax for a cross-workspace POST is:
azure-monitor Query Packs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/query-packs.md
Last updated 06/22/2022
# Query packs in Azure Monitor Logs Query packs act as containers for log queries in Azure Monitor. They let you save log queries and share them across workspaces and other contexts in Log Analytics.
-## View query packs
-You can view and manage query packs in the Azure portal from the **Log Analytics query packs** menu. Select a query pack to view and edit its permissions. This article describes how to create a query pack by using the API.
-
-[![Screenshot that shows query packs.](media/query-packs/view-query-pack.png)](media/query-packs/view-query-pack.png#lightbox)
- ## Permissions
-You can set the permissions on a query pack when you view it in the Azure portal. Users require the following permissions to use query packs:
+You can set the permissions on a query pack when you view it in the Azure portal. You need the following permissions to use query packs:
- **Reader**: Users can see and run all queries in the query pack. - **Contributor**: Users can modify existing queries and add new queries to the query pack.
You can set the permissions on a query pack when you view it in the Azure portal
> [!IMPORTANT] > When a user needs to modify or add queries, always grant the user the Contributor permission on the `DefaultQueryPack`. Otherwise, the user won't be able to save any queries to the subscription, including in other query packs.
+## View query packs
+You can view and manage query packs in the Azure portal from the **Log Analytics query packs** menu. Select a query pack to view and edit its permissions. This article describes how to create a query pack by using the API.
+
+[![Screenshot that shows query packs.](media/query-packs/view-query-pack.png)](media/query-packs/view-query-pack.png#lightbox)
+ ## Default query pack
-A query pack, called `DefaultQueryPack`, is automatically created in each subscription in a resource group called `LogAnalyticsDefaultResources` when the first query is saved. You can create queries in this query pack or create other query packs depending on your requirements.
+Azure Monitor automatically creates a query pack called `DefaultQueryPack` in each subscription in a resource group called `LogAnalyticsDefaultResources` when you save your first query. You can save queries to this query pack or create other query packs depending on your requirements.
## Use multiple query packs
-The single default query pack will be sufficient for most users to save and reuse queries. But there are reasons that you might want to create multiple query packs for users in your organization. For example, you might want to load different sets of queries in different Log Analytics sessions and provide different permissions for different collections of queries.
-When you create a new query pack by using the API, you can add tags that classify queries according to your business requirements. For example, you could tag a query pack to relate it to a particular department in your organization or to severity of issues that the included queries are meant to address. By using tags, you can create different sets of queries intended for different sets of users and different situations.
+The default query pack is sufficient for most users to save and reuse queries. You might want to create multiple query packs for users in your organization if, for example, you want to load different sets of queries in different Log Analytics sessions and provide different permissions for different collections of queries.
-## Query pack definition
-Each query pack is defined in a JSON file that includes the definition for one or more queries. Each query is represented by a block.
+When you [create a new query pack](#create-a-query-pack), you can add tags that classify queries based on your business needs. For example, you could tag a query pack to relate it to a particular department in your organization or to severity of issues that the included queries are meant to address. By using tags, you can create different sets of queries intended for different sets of users and different situations.
-```json
-{
- "properties":
- {
- "displayName": "Query name that will be displayed in the UI",
- "description": "Query description that will be displayed in the UI",
- "body": "<<query text, standard KQL code>>",
- "related": {
- "categories": [
- "workloads"
- ],
- "resourceTypes": [
- "microsoft.insights/components"
- ],
- "solutions": [
- "logmanagement"
- ]
- },
- "tags": {
- "Tag1": [
- "Value1",
- "Value2"
- ]
- },
- }
-}
-```
+To add query packs to your Log Analytics workspace:
-## Query properties
-Each query in the query pack has the following properties:
+1. Open Log Analytics and select **Queries** in the upper-right corner.
+1. In the upper-left corner of the **Queries** dialog, next to **Query packs**, select **0 selected**.
+1. Select the query packs that you want to add to the workspace.
-| Property | Description |
-|:|:|
-| displayName | Display name listed in Log Analytics for each query. |
-| description | Description of the query displayed in Log Analytics for each query. |
-| body | Query written in Kusto Query Language. |
-| related | Related categories, resource types, and solutions for the query. Used for grouping and filtering in Log Analytics by the user to help locate their query. Each query can have up to 10 of each type. Retrieve allowed values from https://api.loganalytics.io/v1/metadata?select=resourceTypes, solutions, and categories. |
-| tags | Other tags used by the user for sorting and filtering in Log Analytics. Each tag will be added to Category, Resource Type, and Solution when you [group and filter queries](queries.md#find-and-filter-queries). |
+
+> [!IMPORTANT]
+> You can add up to five query packs to a Log Analytics workspace.
## Create a query pack You can create a query pack by using the REST API or from the **Log Analytics query packs** pane in the Azure portal. To open the **Log Analytics query packs** pane in the portal, select **All services** > **Other**.
The payload of the request is the JSON that defines one or more queries and the
} ```
+Each query in the query pack has the following properties:
+
+| Property | Description |
+|:|:|
+| `displayName` | Display name listed in Log Analytics for each query. |
+| `description` | Description of the query displayed in Log Analytics for each query. |
+| `body` | Query written in Kusto Query Language. |
+| `related` | Related categories, resource types, and solutions for the query. Used for grouping and filtering in Log Analytics by the user to help locate their query. Each query can have up to 10 of each type. Retrieve allowed values from https://api.loganalytics.io/v1/metadata?select=resourceTypes, solutions, and categories. |
+| `tags` | Other tags used by the user for sorting and filtering in Log Analytics. Each tag will be added to Category, Resource Type, and Solution when you [group and filter queries](queries.md#find-and-filter-queries). |
+ ### Create a request Use the following request to create a new query pack by using the REST API. The request should use bearer token authorization. The content type should be `application/json`.
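If you'd rather drive the REST call from the Azure CLI than a raw HTTP client, `az rest` obtains the bearer token for you. This is only a sketch: the API version and the `querypack.json` file name are assumptions, so check the current queryPacks API reference before relying on them.

```azurecli
# Create (or update) a query pack from a JSON file that contains the payload
# described above (the query definitions plus a location).
az rest --method put \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/queryPacks/<query-pack-name>?api-version=2019-09-01" \
  --body @querypack.json
```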
azure-monitor Monitor Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/monitor-azure-monitor.md
The metrics and logs you can collect are discussed in the following sections.
## Analyzing metrics
-You can analyze metrics for *Azure Monitor* with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](./essentials/metrics-getting-started.md) for details on using this tool.
+You can analyze metrics for *Azure Monitor* with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Analyze metrics with Azure Monitor metrics explorer](./essentials/analyze-metrics.md) for details on using this tool.
For a list of the platform metrics collected for Azure Monitor into itself, see [Azure Monitor monitoring data reference](azure-monitor-monitoring-reference.md#metrics).
azure-monitor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/overview.md
The Azure portal contains built in tools that allow you to analyze monitoring da
|Tool |Description | |||
-|[Metrics explorer](essentials/metrics-getting-started.md)|Use the Azure Monitor metrics explorer user interface in the Azure portal to investigate the health and utilization of your resources. Metrics explorer helps you plot charts, visually correlate trends, and investigate spikes and dips in metric values. Metrics explorer contains features for applying dimensions and filtering, and for customizing charts. These features help you analyze exactly the data you need in a visually intuitive way.|
+|[Metrics explorer](essentials/analyze-metrics.md)|Use the Azure Monitor metrics explorer user interface in the Azure portal to investigate the health and utilization of your resources. Metrics explorer helps you plot charts, visually correlate trends, and investigate spikes and dips in metric values. Metrics explorer contains features for applying dimensions and filtering, and for customizing charts. These features help you analyze exactly the data you need in a visually intuitive way.|
|[Log Analytics](logs/log-analytics-overview.md)|The Log Analytics user interface in the Azure portal helps you query the log data collected by Azure Monitor so that you can quickly retrieve, consolidate, and analyze collected data. After creating test queries, you can then directly analyze the data with Azure Monitor tools, or you can save the queries for use with visualizations or alert rules. Log Analytics workspaces are based on Azure Data Explorer, using a powerful analysis engine and the rich Kusto query language (KQL).Azure Monitor Logs uses a version of the Kusto Query Language suitable for simple log queries, and advanced functionality such as aggregations, joins, and smart analytics. You can [get started with KQL](logs/get-started-queries.md) quickly and easily. NOTE: The term "Log Analytics" is sometimes used to mean both the Azure Monitor Logs data platform store and the UI that accesses that store. Previous to 2019, the term "Log Analytics" did refer to both. It's still common to find content using that framing in various blogs and documentation on the internet. | |[Change Analysis](change/change-analysis.md)| Change Analysis is a subscription-level Azure resource provider that checks resource changes in the subscription and provides data for diagnostic tools to help users understand what changes might have caused issues. The Change Analysis user interface in the Azure portal gives you insight into the cause of live site issues, outages, or component failures. Change Analysis uses the [Azure Resource Graph](../governance/resource-graph/overview.md) to detect various types of changes, from the infrastructure layer through application deployment.|
azure-monitor Monitor Virtual Machine Analyze https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-analyze.md
Access the single machine analysis experience from the **Monitoring** section of
| Option | Description | |:|:|
-| Overview page | Select the **Monitoring** tab to display alerts, [platform metrics](../essentials/data-platform-metrics.md), and other monitoring information for the virtual machine host. You can see the number of active alerts on the tab. In the **Monitoring** tab, you get a quick view of:<br><br>**Alerts:** the alerts fired in the last 24 hours, with some important statistics about those alerts. If you do not have any alerts set up for this VM, there is a link to help you quickly create new alerts for your VM.<br><br>**Key metrics:** the trend over different time periods for important metrics, such as CPU, network, and disk. Because these are host metrics though, counters from the guest operating system such as memory aren't included. Select a graph to work with this data in [metrics explorer](../essentials/metrics-getting-started.md) where you can perform different aggregations, and add more counters for analysis. |
+| Overview page | Select the **Monitoring** tab to display alerts, [platform metrics](../essentials/data-platform-metrics.md), and other monitoring information for the virtual machine host. You can see the number of active alerts on the tab. In the **Monitoring** tab, you get a quick view of:<br><br>**Alerts:** the alerts fired in the last 24 hours, with some important statistics about those alerts. If you do not have any alerts set up for this VM, there is a link to help you quickly create new alerts for your VM.<br><br>**Key metrics:** the trend over different time periods for important metrics, such as CPU, network, and disk. Because these are host metrics though, counters from the guest operating system such as memory aren't included. Select a graph to work with this data in [metrics explorer](../essentials/analyze-metrics.md) where you can perform different aggregations, and add more counters for analysis. |
| Activity log | See [activity log](../essentials/activity-log.md#view-the-activity-log) entries filtered for the current virtual machine. Use this log to view the recent activity of the machine, such as any configuration changes and when it was stopped and started. | Insights | Displays VM insights views if If the VM is enabled for [VM insights](../vm/vminsights-overview.md).<br><br>Select the **Performance** tab to view trends of critical performance counters over different periods of time. When you open VM insights from the virtual machine menu, you also have a table with detailed metrics for each disk. For details on how to use the Map view for a single machine, see [Chart performance with VM insights](vminsights-performance.md#view-performance-directly-from-an-azure-vm).<br><br>If *processes and dependencies* is enabled for the VM, select the **Map** tab to view the running processes on the machine, dependencies on other machines, and external processes. For details on how to use the Map view for a single machine, see [Use the Map feature of VM insights to understand application components](vminsights-maps.md#view-a-map-from-a-vm).<br><br>If the VM is not enabled for VM insights, it offers the option to enable VM insights. | | Alerts | View [alerts](../alerts/alerts-overview.md) for the current virtual machine. These alerts only use the machine as the target resource, so there might be other alerts associated with it. You might need to use the **Alerts** option in the Azure Monitor menu to view alerts for all resources. For details, see [Monitor virtual machines with Azure Monitor - Alerts](monitor-virtual-machine-alerts.md). |
Access the multiple machine analysis experience from the **Monitor** menu in the
|:|:| | Activity log | See [activity log](../essentials/activity-log.md#view-the-activity-log) entries filtered for all resources. Create a filter for a **Resource Type** of virtual machines or Virtual Machine Scale Sets to view events for all your machines. | | Alerts | View [alerts](../alerts/alerts-overview.md) for all resources. This includes alerts related to all virtual machines in the workspace. Create a filter for a **Resource Type** of virtual machines or Virtual Machine Scale Sets to view alerts for all your machines. |
-| Metrics | Open [metrics explorer](../essentials/metrics-getting-started.md) with no scope selected. This feature is particularly useful when you want to compare trends across multiple machines. Select a subscription or a resource group to quickly add a group of machines to analyze together. |
+| Metrics | Open [metrics explorer](../essentials/analyze-metrics.md) with no scope selected. This feature is particularly useful when you want to compare trends across multiple machines. Select a subscription or a resource group to quickly add a group of machines to analyze together. |
| Logs | Open [Log Analytics](../logs/log-analytics-overview.md) with the [scope](../logs/scope.md) set to the workspace. You can select from a variety of existing queries to drill into log and performance data for all machines. Or you can create a custom query to perform additional analysis. | | Workbooks | Open the workbook gallery with the VM insights workbooks for multiple machines. For a list of the VM insights workbooks designed for multiple machines, see [VM insights workbooks](vminsights-workbooks.md#vm-insights-workbooks). |
This level of detail can be confusing if you're new to Azure Monitor. The follow
## Analyze metric data with metrics explorer
-By using metrics explorer, you can plot charts, visually correlate trends, and investigate spikes and dips in metrics' values. For details on how to use this tool, see [Getting started with Azure Metrics Explorer](../essentials/metrics-getting-started.md).
+By using metrics explorer, you can plot charts, visually correlate trends, and investigate spikes and dips in metrics' values. For details on how to use this tool, see [Analyze metrics with Azure Monitor metrics explorer](../essentials/analyze-metrics.md).
The following namespaces are used by virtual machines.
azure-resource-manager Virtual Machines Move Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-limitations/virtual-machines-move-limitations.md
Title: Special cases to move Azure VMs to new subscription or resource group description: Use Azure Resource Manager to move virtual machines to a new resource group or subscription. Previously updated : 10/25/2023 Last updated : 10/30/2023 # Handling special cases when moving virtual machines to resource group or subscription
-This article describes special cases that require extra steps when moving a virtual machine to a new resource group or Azure subscription. If your virtual machine doesn't match any of these scenarios, you can move the virtual machine with the standard steps described in [Move resources to a new resource group or subscription](../move-resource-group-and-subscription.md).
+This article describes special cases that require extra steps when moving a virtual machine to a new resource group or Azure subscription. If your virtual machine uses disk encryption, a Marketplace plan, or Azure Backup, you must use one of the workarounds described in this article. For all other scenarios, move the virtual machine with the standard operations for [Azure portal](../move-resource-group-and-subscription.md#use-the-portal), [Azure CLI](../move-resource-group-and-subscription.md#use-azure-cli), or [Azure PowerShell](../move-resource-group-and-subscription.md#use-azure-powershell). For Azure CLI, use the [az resource move](/cli/azure/resource#az-resource-move) command. For Azure PowerShell, use the [Move-AzResource](/powershell/module/az.resources/move-azresource) command.
If you want to move a virtual machine to a new region, see [Tutorial: Move Azure VMs across regions](../../../resource-mover/tutorial-move-region-virtual-machines.md).
azure-signalr Howto Enable Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/howto-enable-geo-replication.md
With the new geo-replication feature, Contoso can now establish a replica in Can
![Diagram of using one Azure SignalR instance with replica to handle traffic from two countries.](./media/howto-enable-geo-replication/signalr-replica.png "Replica Example") ## Create a SignalR replica-
+# [Portal](#tab/Portal)
To create a replica, navigate to the SignalR **Replicas** blade in the Azure portal and select **Add**. The replica is automatically enabled upon creation.

![Screenshot of creating replica for Azure SignalR on Portal.](./media/howto-enable-geo-replication/signalr-replica-create.png "Replica create")
-> [!NOTE]
-> * Geo-replication is a feature available in premium tier.
-> * A replica is considered a separate resource when it comes to billing. See [Pricing and resource unit](#pricing-and-resource-unit) for more details.
After creation, you can view or edit your replica in the portal by selecting the replica name.

![Screenshot of overview blade of Azure SignalR replica resource. ](./media/howto-enable-geo-replication/signalr-replica-overview.png "Replica Overview")
+# [Bicep](#tab/Bicep)
+
+Use Visual Studio Code or your favorite editor to create a file with the following content and name it main.bicep:
+
+```bicep
+@description('The name for your SignalR service')
+param primaryName string = 'contoso'
+
+@description('The region in which to create your SignalR service')
+param primaryLocation string = 'eastus'
+
+@description('Unit count of your SignalR service')
+param primaryCapacity int = 1
+
+resource primary 'Microsoft.SignalRService/signalr@2023-08-01-preview' = {
+ name: primaryName
+ location: primaryLocation
+ sku: {
+ capacity: primaryCapacity
+ name: 'Premium_P1'
+ }
+ properties: {
+ }
+}
+
+@description('The name for your SignalR replica')
+param replicaName string = 'contoso-westus'
+
+@description('The region in which to create the SignalR replica')
+param replicaLocation string = 'westus'
+
+@description('Unit count of the SignalR replica')
+param replicaCapacity int = 1
+
+@description('Whether to enable region endpoint for the replica')
+param regionEndpointEnabled string = 'Enabled'
+
+resource replica 'Microsoft.SignalRService/signalr/replicas@2023-08-01-preview' = {
+ parent: primary
+ name: replicaName
+ location: replicaLocation
+ sku: {
+ capacity: replicaCapacity
+ name: 'Premium_P1'
+ }
+ properties: {
+ regionEndpointEnabled: regionEndpointEnabled
+ }
+}
+```
+
+Deploy the Bicep file using Azure CLI:
+
+```azurecli
+az group create --name MyResourceGroup --location eastus
+az deployment group create --resource-group MyResourceGroup --template-file main.bicep
+```
+
+-
## Pricing and resource unit Each replica has its **own** `unit` and `autoscale settings`.
azure-signalr Signalr Howto Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-diagnostic-logs.md
The metrics and logs you can collect are discussed in the following sections.
## Analyzing metrics
-You can analyze metrics for Azure SignalR with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool.
+You can analyze metrics for Azure SignalR with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md) for details on using this tool.
For a list of the platform metrics collected for Azure SignalR, see [Metrics](concept-metrics.md).
azure-web-pubsub Howto Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-azure-monitor.md
The metrics and logs you can collect are discussed in the following sections.
## Analyzing metrics
-You can analyze metrics for Azure Web PubSub with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool.
+You can analyze metrics for Azure Web PubSub with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md) for details on using this tool.
For a list of the platform metrics collected for Azure Web PubSub, see [Metrics](concept-metrics.md).
cloud-shell Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/get-started.md
description: Learn how to start using Azure Cloud Shell. ms.contributor: jahelmic Previously updated : 10/23/2023 Last updated : 10/30/2023 tags: azure-resource-manager Title: Get started with Azure Cloud Shell
To see all resource providers, and the registration status for your subscription
1. Select the **Subscription** used to create the storage account and file share. 1. Select **Create storage**.
+ > [!NOTE]
+ > By following these steps, Cloud Shell creates a standard storage account and allocates 5 GB of
+ > storage for the file share. You can also create a storage account manually and specify the
+ > storage account and file share to use. If you use a Premium storage account, Cloud Shell
+ > allocates 100 GB of storage for the file share.
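If you want to create the storage manually instead of letting Cloud Shell do it, the following Azure CLI sketch shows one way to do that. All of the names are example values, and the commands must run from a machine that already has the Azure CLI installed.

```azurecli
# Create a resource group, a standard storage account, and a 5-GB file share
# that you can point Cloud Shell at during first-run setup.
az group create --name cloudshell-rg --location eastus

az storage account create \
  --name mycloudshellsa123 \
  --resource-group cloudshell-rg \
  --location eastus \
  --sku Standard_LRS

az storage share-rm create \
  --storage-account mycloudshellsa123 \
  --resource-group cloudshell-rg \
  --name cloudshellshare \
  --quota 5
```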
+ ### Select your shell environment Cloud Shell allows you to select either **Bash** or **PowerShell** for your command-line experience.
cloud-shell Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/troubleshooting.md
This article covers troubleshooting Cloud Shell common scenarios.
### Disabling Cloud Shell in a locked down network environment -- **Details**: Administrators may wish to disable access to Cloud Shell for their users. Cloud Shell
+- **Details**: Administrators might want to disable access to Cloud Shell for their users. Cloud Shell
depends on access to the `ux.console.azure.com` domain, which can be denied, stopping any access to Cloud Shell's entry points including `portal.azure.com`, `shell.azure.com`, Visual Studio Code Azure Account extension, and `learn.microsoft.com`. In the US Government cloud, the entry point is `ux.console.azure.us`; there's no corresponding `shell.azure.us`. - **Resolution**: Restrict access to `ux.console.azure.com` or `ux.console.azure.us` via network
- settings to your environment. The Cloud Shell icon will still exist in the Azure portal, but can't
- connect to the service.
+ settings to your environment. Even though the Cloud Shell icon still exists in the Azure portal,
+ you can't connect to the service.
### Storage Dialog - Error: 403 RequestDisallowedByPolicy
This article covers troubleshooting Cloud Shell common scenarios.
- **Details**: Cloud Shell requires the ability to establish a websocket connection to Cloud Shell infrastructure. - **Resolution**: Confirm that your network settings to allow sending HTTPS and websocket requests
- to domains at `*.console.azure.com`.
+ to domains at `*.console.azure.com` and `*.servicebus.windows.net`.
### Set your Cloud Shell connection to support using TLS 1.2
This article covers troubleshooting Cloud Shell common scenarios.
> [!NOTE] > Azure VMs must have a Public facing IP address. -- **Details**: Due to the default Windows Firewall settings for WinRM the user may see the following
+- **Details**: Due to the default Windows Firewall settings for WinRM the user might see the following
error: > Ensure the WinRM service is running. Remote Desktop into the VM for the first time and ensure
sessions produces a "Tenant User Over Quota" error. If you have a legitimate nee
your anticipated usage. Cloud Shell is provided as a free service for managing your Azure environment. It's not a general
-purpose computing platform. Excessive automated usage may be considered in breach to the Azure Terms
+purpose computing platform. Excessive automated usage can be considered in breach of the Azure Terms
of Service and could lead to Cloud Shell access being blocked. ### System state and persistence
Cloud Shell supports the latest versions of following browsers:
- Windows: <kbd>Ctrl</kbd>+<kbd>c</kbd> to copy is supported but use <kbd>Shift</kbd>+<kbd>Insert</kbd> to paste.
- - FireFox/IE may not support clipboard permissions properly.
+ - FireFox might not support clipboard permissions properly.
- macOS: <kbd>Cmd</kbd>+<kbd>c</kbd> to copy and <kbd>Cmd</kbd>+<kbd>v</kbd> to paste. - Linux: <kbd>CTRL</kbd>+<kbd>c</kbd> to copy and <kbd>CTRL</kbd>+<kbd>Shift</kbd>+<kbd>v</kbd> to paste.
cloud-shell Vnet Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/vnet-deployment.md
+
+description: This article provides step-by-step instructions to deploy Azure Cloud Shell in a private virtual network.
+ms.contributor: jahelmic
Last updated : 10/10/2023++
+ Title: Deploy Azure Cloud Shell in a virtual network with quickstart templates
++
+# Deploy Cloud Shell in a virtual network by using quickstart templates
+
+Before you run quickstart templates to deploy Azure Cloud Shell in a virtual network (VNet), there
+are several prerequisites to complete. You must have the **Owner** role assignment on the
+subscription. To view and assign roles, see [List Owners of a Subscription][10].
+
+This article walks you through the following steps to configure and deploy Cloud Shell in a virtual
+network:
+
+1. Register resource providers.
+1. Collect the required information.
+1. Create the virtual networks by using the **Azure Cloud Shell - VNet** Azure Resource Manager
+ template (ARM template).
+1. Create the virtual network storage account by using the **Azure Cloud Shell - VNet storage** ARM
+ template.
+1. Configure and use Cloud Shell in a virtual network.
+
+## 1. Register resource providers
+
+Cloud Shell needs access to certain Azure resources. You make that access available through
+resource providers. The following resource providers must be registered in your subscription:
+
+- **Microsoft.CloudShell**
+- **Microsoft.ContainerInstance**
+- **Microsoft.Relay**
+
+Depending on when your tenant was created, some of these providers might already be registered.
+
+To see all resource providers and the registration status for your subscription:
+
+1. Sign in to the [Azure portal][11].
+1. On the Azure portal menu, search for **Subscriptions**. Select it from the available options.
+1. Select the subscription that you want to view.
+1. On the left menu, under **Settings**, select **Resource providers**.
+1. In the search box, enter `cloudshell` to search for the resource provider.
+1. Select the **Microsoft.CloudShell** resource provider from the provider list.
+1. Select **Register** to change the status from **unregistered** to **registered**.
+1. Repeat the previous steps for the **Microsoft.ContainerInstance** and **Microsoft.Relay**
+ resource providers.
+
+[![Screenshot of selecting resource providers in the Azure portal.][98]][98a]
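As an alternative to the portal steps above, you can register the same providers from the Azure CLI. This sketch assumes you're already signed in to the target subscription.

```azurecli
# Register the resource providers that Cloud Shell needs, then check one provider's state.
az provider register --namespace Microsoft.CloudShell
az provider register --namespace Microsoft.ContainerInstance
az provider register --namespace Microsoft.Relay

az provider show --namespace Microsoft.CloudShell --query registrationState --output tsv
```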
+
+## 2. Collect the required information
+
+You need to collect several pieces of information before you can deploy Cloud Shell.
+
+You can use the default Cloud Shell instance to gather the required information and create the
+necessary resources. You should create dedicated resources for the Cloud Shell virtual network
+deployment. All resources must be in the same Azure region and in the same resource group.
+
+Fill in the following values:
+
+- **Subscription**: The name of your subscription that contains the resource group for the Cloud
+ Shell virtual network deployment.
+- **Resource Group**: The name of the resource group for the Cloud Shell virtual network deployment.
+- **Region**: The location of the resource group.
+- **Virtual Network**: The name of the Cloud Shell virtual network.
+- **Azure Container Instance OID**: The object ID of the Azure Container Instance Service principal in your tenant.
+- **Azure Relay Namespace**: The name that you want to assign to the Azure Relay resource that the
+ template creates.
+
+### Create a resource group
+
+You can create the resource group by using the Azure portal, the Azure CLI, or Azure PowerShell. For
+more information, see the following articles:
+
+- [Manage Azure resource groups by using the Azure portal][02]
+- [Manage Azure resource groups by using Azure CLI][01]
+- [Manage Azure resource groups by using Azure PowerShell][03]
+
+### Create a virtual network
+
+You can create the virtual network by using the Azure portal, the Azure CLI, or Azure PowerShell.
+For more information, see the following articles:
+
+- [Use the Azure portal to create a virtual network][05]
+- [Use Azure PowerShell to create a virtual network][06]
+- [Use Azure CLI to create a virtual network][04]
+
+> [!NOTE]
+> When you're setting the container subnet address prefix for the Cloud Shell subnet, it's important
+> to consider the number of Cloud Shell sessions that you need to run concurrently. If the number of
+> Cloud Shell sessions exceeds the available IP addresses in the container subnet, users of those
+> sessions can't connect to Cloud Shell. Increase the container subnet range to accommodate your
+> specific needs. For more information, see the "Change subnet settings" section of
+> [Add, change, or delete a virtual network subnet][07].
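If you prefer the CLI to the linked quickstarts, the following sketch creates a virtual network for the Cloud Shell deployment. The names and address space are assumptions that mirror the examples later in this article; the quickstart template adds the subnets afterward.

```azurecli
# Create the virtual network that the Cloud Shell quickstart template will add subnets to.
az network vnet create \
  --resource-group rg-cloudshell-eastus \
  --name vnet-cloudshell-eastus \
  --location eastus \
  --address-prefixes 10.0.0.0/16 10.1.0.0/16
```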
+
+### Get the Azure container instance ID
+
+The Azure container instance ID is a unique value for every tenant. You use this identifier in
+the [quickstart templates][07] to configure a virtual network for Cloud Shell.
+
+1. Sign in to the [Azure portal][11]. From the home page, select **Microsoft Entra ID**. If the icon
+ isn't displayed, enter `Microsoft Entra ID` in the top search bar.
+1. On the left menu, select **Overview**. Then enter `azure container instance service` in the
+ search bar.
+
+ [![Screenshot of searching for Azure Container Instance Service.][95]][95a]
+
+1. In the results, under **Enterprise applications**, select **Azure Container Instance Service**.
+1. On the **Overview** page for **Azure Container Instance Service**, find the **Object ID** value
+ that's listed as a property.
+
+ You use this ID in the quickstart template for the virtual network.
+
+ [![Screenshot of Azure Container Instance Service details.][96]][96a]
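You can also look up this object ID from the Azure CLI instead of the portal. This is a sketch that assumes the enterprise application is named **Azure Container Instance Service** in your tenant.

```azurecli
# Return the object ID of the Azure Container Instance Service enterprise application.
az ad sp list \
  --display-name "Azure Container Instance Service" \
  --query "[].id" \
  --output tsv
```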
+
+## 3. Create the virtual network by using the ARM template
+
+Use the [Azure Cloud Shell - VNet][08] template to create Cloud Shell resources in a virtual
+network. The template creates three subnets under the virtual network that you created earlier. You
+might choose to change the supplied names of the subnets or use the defaults.
+
+The virtual network, along with the subnets, requires valid IP address assignments. You need at
+least one IP address for the Relay subnet and enough IP addresses in the container subnet to support
+the number of concurrent sessions that you expect to use.
+
+The ARM template requires specific information about the resources that you created earlier, along
+with naming information for new resources. This information is filled out along with the prefilled
+information in the form.
+
+Information that you need for the template includes:
+
+- **Subscription**: The name of your subscription that contains the resource group for the Cloud
+ Shell virtual network.
+- **Resource Group**: The name of an existing or newly created resource group.
+- **Region**: The location of the resource group.
+- **Virtual Network**: The name of the Cloud Shell virtual network.
+- **Network Security Group**: The name that you want to assign to the network security group (NSG)
+ that the template creates.
+- **Azure Container Instance OID**: The object ID of the Azure Container Instance Service principal in your tenant.
+
+Fill out the form with the following information:
+
+| Project details | Value |
+| | -- |
+| **Subscription** | Defaults to the current subscription context.<br>The example in this article uses `Contoso (carolb)`. |
+| **Resource group** | Enter the name of the resource group from the prerequisite information.<br>The example in this article uses `rg-cloudshell-eastus`. |
+
+| Instance details | Value |
+| - | - |
+| **Region** | Prefilled with your default region.<br>The example in this article uses `East US`. |
+| **Existing VNET Name** | Fill in the value from the prerequisite information that you gathered.<br>The example in this article uses `vnet-cloudshell-eastus`. |
+| **Relay Namespace Name** | Create a name that you want to assign to the Relay resource that the template creates.<br>The example in this article uses `arn-cloudshell-eastus`. |
+| **Nsg Name** | Enter the name of the NSG. The deployment creates this NSG and assigns an access rule to it. |
+| **Azure Container Instance OID** | Fill in the value from the prerequisite information that you gathered.<br>The example in this article uses `8fe7fd25-33fe-4f89-ade3-0e705fcf4370`. |
+| **Container Subnet Name** | Defaults to `cloudshellsubnet`. Enter the name of the subnet for your container. |
+| **Container Subnet Address Prefix** | The example in this article uses `10.1.0.0/16`, which provides 65,536 IP addresses for Cloud Shell instances. |
+| **Relay Subnet Name** | Defaults to `relaysubnet`. Enter the name of the subnet that contains your relay. |
+| **Relay Subnet Address Prefix** | The example in this article uses `10.0.2.0/24`. |
+| **Storage Subnet Name** | Defaults to `storagesubnet`. Enter the name of the subnet that contains your storage. |
+| **Storage Subnet Address Prefix** | The example in this article uses `10.0.3.0/24`. |
+| **Private Endpoint Name** | Defaults to `cloudshellRelayEndpoint`. Enter the name for the private endpoint that connects the Relay namespace to your virtual network. |
+| **Tag Name** | Defaults to `{"Environment":"cloudshell"}`. Leave unchanged or add more tags. |
+| **Location** | Defaults to `[resourceGroup().location]`. Leave unchanged. |
+
+After the form is complete, select **Review + Create** and deploy the network ARM template to your
+subscription.
+
+## 4. Create the virtual network storage by using the ARM template
+
+Use the [Azure Cloud Shell - VNet storage][09] template to create Cloud Shell resources in a virtual
+network. The template creates the storage account and assigns it to the private virtual network.
+
+The ARM template requires specific information about the resources that you created earlier, along
+with naming information for new resources.
+
+Information that you need for the template includes:
+
+- **Subscription**: The name of the subscription that contains the resource group for the Cloud
+ Shell virtual network.
+- **Resource Group**: The name of an existing or newly created resource group.
+- **Region**: The location of the resource group.
+- **Existing virtual network name**: The name of the virtual network that you created earlier.
+- **Existing Storage Subnet Name**: The name of the storage subnet that you created by using the
+ network quickstart template.
+- **Existing Container Subnet Name**: The name of the container subnet that you created by using the
+ network quickstart template.
+
+Fill out the form with the following information:
+
+| Project details | Value |
+| | -- |
+| **Subscription** | Defaults to the current subscription context.<br>The example in this article uses `Contoso (carolb)`. |
+| **Resource group** | Enter the name of the resource group from the prerequisite information.<br>The example in this article uses `rg-cloudshell-eastus`. |
+
+| Instance details | Value |
+| | |
+| **Region** | Prefilled with your default region.<br>The example in this article uses `East US`. |
+| **Existing VNET Name** | The example in this article uses `vnet-cloudshell-eastus`. |
+| **Existing Storage Subnet Name** | Fill in the name of the resource that the network template creates. |
+| **Existing Container Subnet Name** | Fill in the name of the resource that the network template creates. |
+| **Storage Account Name** | Create a name for the new storage account.<br>The example in this article uses `myvnetstorage1138`. |
+| **File Share Name** | Defaults to `acsshare`. Enter the name of the file share that you want to create. |
+| **Resource Tags** | Defaults to `{"Environment":"cloudshell"}`. Leave unchanged or add more tags. |
+| **Location** | Defaults to `[resourceGroup().location]`. Leave unchanged. |
+
+After the form is complete, select **Review + Create** and deploy the network ARM template to your
+subscription.
+
+## 5. Configure Cloud Shell to use a virtual network
+
+After you deploy your private Cloud Shell instance, each Cloud Shell user must change their
+configuration to use the new private instance.
+
+If you used the default Cloud Shell instance before you deployed the private instance, you must
+reset your user settings:
+
+1. Open Cloud Shell.
+1. Select **Cloud Shell settings** from the menu bar (gear icon).
+1. Select **Reset user settings**, and then select **Reset**.
+
+Resetting the user settings triggers the first-time user experience the next time you start Cloud
+Shell.
+
+[![Screenshot of the Cloud Shell storage dialog.][97]][97a]
+
+1. Choose your preferred shell experience (Bash or PowerShell).
+1. Select **Show advanced settings**.
+1. Select the **Show VNET isolation settings** checkbox.
+1. Choose the subscription that contains your private Cloud Shell instance.
+1. Choose the region that contains your private Cloud Shell instance.
+1. For **Resource group**, select the resource group that contains your private Cloud Shell
+ instance.
+
+ If you select the correct resource group, **Virtual network**, **Network profile**, and **Relay
+ namespace** are automatically populated with the correct values.
+1. For **File share**, enter the name of the file share that you created by using the storage
+ template.
+1. Select **Create storage**.
+
+## Next steps
+
+You must complete the Cloud Shell configuration steps for each user who needs to use the new private
+Cloud Shell instance.
+
+<!-- link references -->
+[01]: /azure/azure-resource-manager/management/manage-resource-groups-cli
+[02]: /azure/azure-resource-manager/management/manage-resource-groups-portal
+[03]: /azure/azure-resource-manager/management/manage-resource-groups-powershell
+[04]: /azure/virtual-network/quick-create-cli
+[05]: /azure/virtual-network/quick-create-portal
+[06]: /azure/virtual-network/quick-create-powershell
+[07]: /azure/virtual-network/virtual-network-manage-subnet?tabs=azure-portal#change-subnet-settings
+[08]: https://aka.ms/cloudshell/docs/vnet/template
+[09]: https://azure.microsoft.com/resources/templates/cloud-shell-vnet-storage/
+[10]: /azure/role-based-access-control/role-assignments-list-portal#list-owners-of-a-subscription
+[95]: media/quickstart-deploy-vnet/container-service-search.png
+[95a]: media/quickstart-deploy-vnet/container-service-search.png#lightbox
+[96]: media/quickstart-deploy-vnet/container-service-details.png
+[96a]: media/quickstart-deploy-vnet/container-service-details.png#lightbox
+[97]: media/quickstart-deploy-vnet/setup-cloud-shell-storage.png
+[97a]: media/quickstart-deploy-vnet/setup-cloud-shell-storage.png#lightbox
+[98]: media/quickstart-deploy-vnet/resource-provider.png
+[98a]: media/quickstart-deploy-vnet/resource-provider.png#lightbox
cloud-shell Vnet Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/vnet-overview.md
+
+description: This article describes a scenario for using Azure Cloud Shell in a private virtual network.
+ms.contributor: jahelmic
Last updated : 06/21/2023+
+ Title: Use Cloud Shell in an Azure virtual network
++
+# Use Cloud Shell in an Azure virtual network
+
+By default, Azure Cloud Shell sessions run in a container in a Microsoft network that's separate
+from your resources. Commands that run inside the container can't access resources in a private
+virtual network. For example, you can't use Secure Shell (SSH) to connect from Cloud Shell to a
+virtual machine that has only a private IP address, or use `kubectl` to connect to a Kubernetes
+cluster that has locked down access.
+
+To provide access to your private resources, you can deploy Cloud Shell into an Azure virtual
+network that you control. This technique is called _virtual network isolation_.
+
+## Benefits of virtual network isolation with Cloud Shell
+
+Deploying Cloud Shell in a private virtual network offers these benefits:
+
+- The resources that you want to manage don't need to have public IP addresses.
+- You can use command-line tools, SSH, and PowerShell remoting from the Cloud Shell container to
+ manage your resources.
+- The storage account that Cloud Shell uses doesn't have to be publicly accessible.
+
+## Things to consider before deploying Azure Cloud Shell in a virtual network
+
+- Starting Cloud Shell in a virtual network is typically slower than a standard Cloud Shell session.
+- Virtual network isolation requires you to use [Azure Relay][01], which is a paid service. In the
+ Cloud Shell scenario, one hybrid connection is used for each administrator while they're using
+ Cloud Shell. The connection is automatically closed when the Cloud Shell session ends.
+
+## Architecture
+
+The following diagram shows the resource architecture that you must build to enable this scenario.
+
+![Illustration of a Cloud Shell isolated virtual network architecture.][03]
+
+- **Customer client network**: Client users can be located anywhere on the internet to securely
+ access and authenticate to the Azure portal and use Cloud Shell to manage resources contained in
+ the customer's subscription. For stricter security, you can allow users to open Cloud Shell only
+ from the virtual network contained in your subscription.
+- **Microsoft network**: Customers connect to the Azure portal on Microsoft's network to
+ authenticate and open Cloud Shell.
+- **Customer virtual network**: This is the network that contains the subnets to support virtual
+ network isolation. Resources such as virtual machines and services are directly accessible from
+ Cloud Shell without the need to assign a public IP address.
+- **Azure Relay**: [Azure Relay][01] allows two endpoints that aren't directly reachable to
+ communicate. In this case, it's used to allow the administrator's browser to communicate with the
+ container in the private network.
+- **File share**: Cloud Shell requires a storage account that's accessible from the virtual network.
+ The storage account provides the file share used by Cloud Shell users.
+
+## Related links
+
+For more information, see the [pricing][02] guide.
+
+<!-- link references -->
+[01]: ../azure-relay/relay-what-is-it.md
+[02]: https://azure.microsoft.com/pricing/details/service-bus/
+[03]: media/private-vnet/data-diagram.png
cloud-shell Vnet Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/vnet-troubleshooting.md
Cloud Shell. For best results, and to be supportable, following the deployment i
## Verify you have set the correct permissions To configure Azure Cloud Shell in a virtual network, you must have the **Owner** role assignment on
-the subscription. To view and assign roles, see [List Owners of a Subscription][01]
+the subscription. To view and assign roles, see [List Owners of a Subscription][01].
Unless otherwise noted, all the troubleshooting steps start in the **Subscriptions** section of the Azure portal.
communication-services Rooms Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/rooms-metrics.md
# Rooms metrics overview
-Azure Communication Services currently provides metrics for all Communication Services primitives. You can use [Azure Metrics Explorer](../../../azure-monitor\essentials\metrics-getting-started.md) to:
+Azure Communication Services currently provides metrics for all Communication Services primitives. You can use [Azure Monitor metrics explorer](../../../azure-monitor/essentials/analyze-metrics.md) to:
- Plot your own charts. - Investigate abnormalities in your metric values.
communication-services Sms Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/sms-metrics.md
# SMS metrics overview
-Azure Communication Services currently provides metrics for all Communication Services primitives. You can use [Azure Metrics Explorer](../../../azure-monitor\essentials\metrics-getting-started.md) to:
+Azure Communication Services currently provides metrics for all Communication Services primitives. You can use [Azure Monitor metrics explorer](../../../azure-monitor/essentials/analyze-metrics.md) to:
- Plot your own charts. - Investigate abnormalities in your metric values.
communication-services Turn Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/turn-metrics.md
# TURN metrics overview
-Azure Communication Services currently provides metrics for all Communication Services primitives. [Azure Metrics Explorer](../../../azure-monitor\essentials\metrics-getting-started.md) can be used to:
+Azure Communication Services currently provides metrics for all Communication Services primitives. [Azure Monitor metrics explorer](../../../azure-monitor/essentials/analyze-metrics.md) can be used to:
- Plot your own charts. - Investigate abnormalities in your metric values.
communications-gateway Monitor Azure Communications Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/monitor-azure-communications-gateway.md
Azure Communications Gateway collects metrics. See [Monitoring Azure Communicati
## Analyzing, filtering and splitting metrics in Azure Monitor
-You can analyze metrics for Azure Communications Gateway, along with metrics from other Azure services, by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool.
+You can analyze metrics for Azure Communications Gateway, along with metrics from other Azure services, by opening **Metrics** from the **Azure Monitor** menu. See [Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md) for details on using this tool.
All Azure Communications Gateway metrics support the **Region** dimension, allowing you to filter any metric by the Service Locations defined in your Azure Communications Gateway resource.
container-apps Dapr Functions Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-functions-extension.md
Title: Deploy the Dapr extension for Azure Functions in Azure Container Apps
+ Title: Deploy the Dapr extension for Azure Functions in Azure Container Apps (preview)
description: Learn how to use and deploy the Azure Functions with Dapr extension in your Dapr-enabled container apps.
Previously updated : 10/13/2023 Last updated : 10/30/2023 # Customer Intent: I'm a developer who wants to use the Dapr extension for Azure Functions in my Dapr-enabled container app
-# Deploy the Dapr extension for Azure Functions in Azure Container Apps
+# Deploy the Dapr extension for Azure Functions in Azure Container Apps (preview)
The [Dapr extension for Azure Functions](../azure-functions/functions-bindings-dapr.md) allows you to easily interact with the Dapr APIs from an Azure Function using triggers and bindings. In this guide, you learn how to:
The [Dapr extension for Azure Functions](../azure-functions/functions-bindings-d
- One function that creates an Order and saves it to storage via Dapr statestore - Verify the interaction between the two apps
-> [!NOTE]
-> The Dapr extension for Azure Functions is currently in preview.
- ## Prerequisites - [An Azure account with an active subscription.](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
container-apps Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/metrics.md
From this view, you can pin one or more charts to your dashboard or select a cha
The Azure Monitor metrics explorer lets you create charts from metric data to help you analyze your container app's resource and network usage over time. You can pin charts to a dashboard or in a shared workbook.
-1. Open the metrics explorer in the Azure portal by selecting **Metrics** from the sidebar menu on your container app's page. To learn more about metrics explorer, go to [Getting started with metrics explorer](../azure-monitor/essentials/metrics-getting-started.md).
+1. Open the metrics explorer in the Azure portal by selecting **Metrics** from the sidebar menu on your container app's page. To learn more about metrics explorer, see [Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md).
1. Create a chart by selecting **Metric**. You can modify the chart by changing aggregation, adding more metrics, changing time ranges and intervals, adding filters, and applying splitting. :::image type="content" source="media/observability/metrics-main-page.png" alt-text="Screenshot of the metrics explorer from the container app resource page.":::
container-instances Monitor Azure Container Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/monitor-azure-container-instances.md
The metrics and logs you can collect are discussed in the following sections.
## Analyzing metrics
-You can analyze metrics for *Azure Container Instances* with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool.
+You can analyze metrics for *Azure Container Instances* with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md) for details on using this tool.
For a list of the platform metrics collected for Azure Container Instances, see [Monitoring Azure Container Instances data reference metrics](monitor-azure-container-instances-reference.md#metrics).
container-registry Monitor Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/monitor-service.md
The metrics and logs you can collect are discussed in the following sections.
## Analyzing metrics
-You can analyze metrics for an Azure container registry with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool.
+You can analyze metrics for an Azure container registry with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md) for details on using this tool.
> [!TIP] > You can also go to the metrics explorer by navigating to your registry in the portal. In the menu, select **Metrics** under **Monitoring**.
cosmos-db Emulator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/emulator.md
In some cases, you may wish to manually import the TLS/SSL certificate from the e
## Next step > [!div class="nextstepaction"]
-> [Get started using the Azure Comsos DB emulator for development](how-to-develop-emulator.md)
+> [Get started using the Azure Cosmos DB emulator for development](how-to-develop-emulator.md)
cosmos-db Monitor Resource Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitor-resource-logs.md
[!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin, Table](includes/appliesto-nosql-mongodb-cassandra-gremlin-table.md)]
-Diagnostic settings in Azure are used to collect resource logs. Resources emit Azure resource Logs and provide rich, frequent data about the operation of that resource. These logs are captured per request and they're also referred to as "data plane logs". Some examples of the data plane operations include delete, insert, and readFeed. The content of these logs varies by resource type.
+Diagnostic settings in Azure are used to collect resource logs. Resources emit Azure resource logs and provide rich, frequent data about the operation of that resource. These logs are captured per request and they're also referred to as "data plane logs." Some examples of the data plane operations include delete, insert, and readFeed. The content of these logs varies by resource type.
Platform metrics and the Activity logs are collected automatically, whereas you must create a diagnostic setting to collect resource logs or forward them outside of Azure Monitor. You can turn on diagnostic setting for Azure Cosmos DB accounts and send resource logs to the following sources:
Here, we walk through the process of creating diagnostic settings for your accou
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Navigate to your Azure Cosmos DB account. Open the **Diagnostic settings** pane under the **Monitoring section**, and then select **Add diagnostic setting** option.
+1. Navigate to your Azure Cosmos DB account. Open the **Diagnostic settings** pane in the **Monitoring** section, and then select the **Add diagnostic setting** option.
 :::image type="content" source="media/monitor/diagnostics-settings-selection.png" lightbox="media/monitor/diagnostics-settings-selection.png" alt-text="Screenshot of the diagnostics selection page.":::
+ > [!IMPORTANT]
+ > You might see a prompt to "enable full-text query \[...\] for more detailed logging" if the **full-text query** feature is not enabled in your account. You can safely ignore this warning if you do not wish to enable this feature. For more information, see [enable full-text query](monitor-resource-logs.md#enable-full-text-query-for-logging-query-text).
+ 1. In the **Diagnostic settings** pane, fill out the form with your preferred categories. Here's a list of log categories. | Category | API | Definition | Key Properties |
Here, we walk through the process of creating diagnostic settings for your accou
| **CassandraRequests** | API for Apache Cassandra | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for Cassandra. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText` | | **GremlinRequests** | API for Apache Gremlin | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for Gremlin. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText`, `retriedDueToRateLimiting` | | **QueryRuntimeStatistics** | API for NoSQL | This table details query operations executed against an API for NoSQL account. By default, the query text and its parameters are obfuscated to avoid logging personal data with full text query logging available by request. | `databasename`, `partitionkeyrangeid`, `querytext` |
- | **PartitionKeyStatistics** | All APIs | Logs the statistics of logical partition keys by representing the estimated storage size (KB) of the partition keys. This table is useful when troubleshooting storage skews. This PartitionKeyStatistics log is only emitted if the following conditions are true: 1. At least 1% of the documents in the physical partition have same logical partition key. 2. Out of all the keys in the physical partition, the PartitionKeyStatistics log captures the top three keys with largest storage size. </li></ul> If the previous conditions aren't met, the partition key statistics data isn't available. It's okay if the above conditions aren't met for your account, which typically indicates you have no logical partition storage skew. **Note**: The estimated size of the partition keys is calculated using a sampling approach that assumes the documents in the physical partition are roughly the same size. If the document sizes aren't uniform in the physical partition, the estimated partition key size may not be accurate. | `subscriptionId`, `regionName`, `partitionKey`, `sizeKB` |
+ | **PartitionKeyStatistics** | All APIs | Logs the statistics of logical partition keys by representing the estimated storage size (KB) of the partition keys. This table is useful when troubleshooting storage skews. This PartitionKeyStatistics log is only emitted if the following conditions are true: 1. At least 1% of the documents in the physical partition have the same logical partition key. 2. Out of all the keys in the physical partition, the PartitionKeyStatistics log captures the top three keys with the largest storage size. If the previous conditions aren't met, the partition key statistics data isn't available. It's okay if the above conditions aren't met for your account, which typically indicates you have no logical partition storage skew. **Note**: The estimated size of the partition keys is calculated using a sampling approach that assumes the documents in the physical partition are roughly the same size. If the document sizes aren't uniform in the physical partition, the estimated partition key size might not be accurate. | `subscriptionId`, `regionName`, `partitionKey`, `sizeKB` |
| **PartitionKeyRUConsumption** | API for NoSQL or API for Apache Gremlin | Logs the aggregated per-second RU/s consumption of partition keys. This table is useful for troubleshooting hot partitions. Currently, Azure Cosmos DB reports partition keys for API for NoSQL accounts only and for point read/write, query, and stored procedure operations. | `subscriptionId`, `regionName`, `partitionKey`, `requestCharge`, `partitionKeyRangeId` | | **ControlPlaneRequests** | All APIs | Logs details on control plane operations, which include, creating an account, adding or removing a region, updating account replication settings etc. | `operationName`, `httpstatusCode`, `httpMethod`, `region` | | **TableApiRequests** | API for Table | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for Table. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText` |
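+
+As a rough sketch of what the portal builds for you, a diagnostic setting is ultimately a `Microsoft.Insights/diagnosticSettings` resource whose payload lists the selected categories and the destination. The example below assumes a Log Analytics workspace destination and uses category names from the table above; the workspace ID is a placeholder.
+
+```json
+{
+    "properties": {
+        "workspaceId": "<Log Analytics workspace resource ID>",
+        "logs": [
+            { "category": "DataPlaneRequests", "enabled": true },
+            { "category": "QueryRuntimeStatistics", "enabled": true },
+            { "category": "PartitionKeyStatistics", "enabled": true }
+        ]
+    }
+}
+```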
cosmos-db Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitor.md
The following sections discuss the metrics and logs you can collect.
## Analyzing metrics
-Azure Cosmos DB provides a custom experience for working with metrics. You can analyze metrics for Azure Cosmos DB with metrics from other Azure services using Metrics explorer by opening **Metrics** from the **Azure Monitor** menu. For more information about this tool, see [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md).
+Azure Cosmos DB provides a custom experience for working with metrics. You can analyze metrics for Azure Cosmos DB with metrics from other Azure services using Metrics explorer by opening **Metrics** from the **Azure Monitor** menu. For more information about this tool, see [Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md).
You can also monitor [server-side latency](monitor-server-side-latency.md), [request unit usage](monitor-request-unit-usage.md), and [normalized request unit usage](monitor-normalized-request-units.md) for your Azure Cosmos DB resources.
data-factory Connector Microsoft Fabric Lakehouse Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-microsoft-fabric-lakehouse-files.md
+
+ Title: Copy data in Microsoft Fabric Lakehouse Files (Preview)
+
+description: Learn how to copy data to and from Microsoft Fabric Lakehouse Files (Preview) using Azure Data Factory or Azure Synapse Analytics pipelines.
++++++ Last updated : 09/28/2023++
+# Copy data in Microsoft Fabric Lakehouse Files (Preview) using Azure Data Factory or Azure Synapse Analytics
++
+This article outlines how to use Copy Activity to copy data from and to Microsoft Fabric Lakehouse Files (Preview). To learn more, read the introductory article for [Azure Data Factory](introduction.md) or [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md).
+
+> [!IMPORTANT]
+> This connector is currently in preview. You can try it out and give us feedback. If you want to take a dependency on preview connectors in your solution, please contact [Azure support](https://azure.microsoft.com/support/).
+
+## Supported capabilities
+
+This Microsoft Fabric Lakehouse Files connector is supported for the following capabilities:
+
+| Supported capabilities|IR | Managed private endpoint|
+|| --| --|
+|[Copy activity](copy-activity-overview.md) (source/sink)|&#9312; &#9313;|✓ |
+
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+## Get started
++
+## Create a Microsoft Fabric Lakehouse linked service using UI
+
+Use the following steps to create a Microsoft Fabric Lakehouse linked service in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then select New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
+
+2. Search for Microsoft Fabric Lakehouse and select the connector.
+
+ :::image type="content" source="media/connector-microsoft-fabric-lakehouse/microsoft-fabric-lakehouse-connector.png" alt-text="Screenshot showing select Microsoft Fabric Lakehouse connector.":::
+
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-microsoft-fabric-lakehouse/configure-microsoft-fabric-lakehouse-linked-service.png" alt-text="Screenshot of configuration for Microsoft Fabric Lakehouse linked service.":::
+
+## Connector configuration details
+
+The following sections provide details about properties that are used to define Data Factory entities specific to Microsoft Fabric Lakehouse.
+
+## Linked service properties
+
+The Microsoft Fabric Lakehouse connector supports the following authentication types. See the corresponding sections for details:
+
+- [Service principal authentication](#service-principal-authentication)
+
+### Service principal authentication
+
+To use service principal authentication, follow these steps.
+
+1. Register an application with the Microsoft Identity platform. To learn how, see [Quickstart: Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md). Make note of these values, which you use to define the linked service:
+
+ - Application ID
+ - Application key
+ - Tenant ID
+
+2. Grant the service principal at least the **Contributor** role in the Microsoft Fabric workspace. Follow these steps:
+ 1. Go to your Microsoft Fabric workspace and select **Manage access** on the top bar. Then select **Add people or groups**.
+
+ :::image type="content" source="media/connector-microsoft-fabric-lakehouse/fabric-workspace-manage-access.png" alt-text="Screenshot shows selecting Fabric workspace Manage access.":::
+
+ :::image type="content" source="media/connector-microsoft-fabric-lakehouse/manage-access-pane.png" alt-text=" Screenshot shows Fabric workspace Manage access pane.":::
+
+ 1. In the **Add people** pane, enter your service principal name, and select your service principal from the drop-down list.
+
+ 1. Specify the role as **Contributor** or higher (Admin, Member), then select **Add**.
+
+ :::image type="content" source="media/connector-microsoft-fabric-lakehouse/select-workspace-role.png" alt-text="Screenshot shows adding Fabric workspace role.":::
+
+ 1. Your service principal is displayed on **Manage access** pane.
+
+These properties are supported for the linked service:
+
+| Property | Description | Required |
+|: |: |: |
+| type | The type property must be set to **Lakehouse**. |Yes |
+| workspaceId | The Microsoft Fabric workspace ID. | Yes |
+| artifactId | The Microsoft Fabric Lakehouse object ID. | Yes |
+| tenant | Specify the tenant information (domain name or tenant ID) under which your application resides. Retrieve it by hovering the mouse in the upper-right corner of the Azure portal. | Yes |
+| servicePrincipalId | Specify the application's client ID. | Yes |
+| servicePrincipalCredentialType | The credential type to use for service principal authentication. Allowed values are **ServicePrincipalKey** and **ServicePrincipalCert**. | Yes |
+| servicePrincipalCredential | The service principal credential. <br/> When you use **ServicePrincipalKey** as the credential type, specify the application's key. Mark this field as **SecureString** to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). <br/> When you use **ServicePrincipalCert** as the credential, reference a certificate in Azure Key Vault, and ensure the certificate content type is **PKCS #12**.| Yes |
+| azureCloudType | For service principal authentication, specify the type of Azure cloud environment to which your Azure Active Directory application is registered. <br/> Allowed values are **AzurePublic**, **AzureChina**, **AzureUsGovernment**, and **AzureGermany**. By default, the data factory or Synapse pipeline's cloud environment is used. | No |
+| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use the Azure integration runtime or a self-hosted integration runtime if your data store is in a private network. If not specified, the default Azure integration runtime is used. |No |
+
+**Example: using service principal key authentication**
+
+You can also store service principal key in Azure Key Vault.
+
+```json
+{
+ "name": "MicrosoftFabricLakehouseLinkedService",
+ "properties": {
+ "type": "Lakehouse",
+ "typeProperties": {
+ "workspaceId": "<Microsoft Fabric workspace ID>",
+ "artifactId": "<Microsoft Fabric Lakehouse object ID>",
+ "tenant": "<tenant info, e.g. microsoft.onmicrosoft.com>",
+ "servicePrincipalId": "<service principal id>",
+ "servicePrincipalCredentialType": "ServicePrincipalKey",
+ "servicePrincipalCredential": {
+ "type": "SecureString",
+ "value": "<service principal key>"
+ }
+ },
+ "connectVia": {
+ "referenceName": "<name of Integration Runtime>",
+ "type": "IntegrationRuntimeReference"
+ }
+ }
+}
+```
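+
+**Example: using service principal certificate authentication**
+
+The table above notes that a **ServicePrincipalCert** credential must reference a certificate stored in Azure Key Vault. The following sketch assumes the usual Azure Key Vault secret reference pattern and an existing Azure Key Vault linked service; values in angle brackets are placeholders.
+
+```json
+{
+    "name": "MicrosoftFabricLakehouseLinkedService",
+    "properties": {
+        "type": "Lakehouse",
+        "typeProperties": {
+            "workspaceId": "<Microsoft Fabric workspace ID>",
+            "artifactId": "<Microsoft Fabric Lakehouse object ID>",
+            "tenant": "<tenant info, e.g. microsoft.onmicrosoft.com>",
+            "servicePrincipalId": "<service principal id>",
+            "servicePrincipalCredentialType": "ServicePrincipalCert",
+            "servicePrincipalCredential": {
+                "type": "AzureKeyVaultSecret",
+                "store": {
+                    "referenceName": "<Azure Key Vault linked service name>",
+                    "type": "LinkedServiceReference"
+                },
+                "secretName": "<name of the Key Vault secret that holds the certificate>"
+            }
+        },
+        "connectVia": {
+            "referenceName": "<name of Integration Runtime>",
+            "type": "IntegrationRuntimeReference"
+        }
+    }
+}
+```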
+
+## Dataset properties
+
+For a full list of sections and properties available for defining datasets, see [Datasets](concepts-datasets-linked-services.md).
+
+Microsoft Fabric Lakehouse Files supports the following file formats. Refer to each article for format-based settings.
+
+- [Avro format](format-avro.md)
+- [Binary format](format-binary.md)
+- [Delimited text format](format-delimited-text.md)
+- [JSON format](format-json.md)
+- [ORC format](format-orc.md)
+- [Parquet format](format-parquet.md)
+
+The following properties are supported for Microsoft Fabric Lakehouse Files under `location` settings in the format-based dataset:
+
+| Property | Description | Required |
+| - | | -- |
+| type | The type property under `location` in the dataset must be set to **LakehouseLocation**. | Yes |
+| folderPath | The path to a folder under the given Microsoft Fabric Lakehouse. If you want to use a wildcard to filter folders, skip this setting and specify it in activity source settings. | No |
+| fileName | The file name under the given Microsoft Fabric Lakehouse + folderPath. If you want to use a wildcard to filter files, skip this setting and specify it in activity source settings. | No |
+
+**Example:**
+
+```json
+{
+ "name": "DelimitedTextDataset",
+ "properties": {
+ "type": "DelimitedText",
+ "linkedServiceName": {
+ "referenceName": "<Microsoft Fabric Lakehouse linked service name>",
+ "type": "LinkedServiceReference"
+ },
+ "typeProperties": {
+ "location": {
+ "type": "LakehouseLocation",
+ "fileName": "<file name>",
+ "folderPath": "<folder name>"
+ },
+ "columnDelimiter": ",",
+ "compressionCodec": "gzip",
+ "escapeChar": "\\",
+ "firstRowAsHeader": true,
+ "quoteChar": "\""
+ },
+ "schema": [ < physical schema, optional, auto retrieved during authoring > ]
+ }
+}
+```
++
+## Copy activity properties
+
+For a full list of sections and properties available for defining activities, see [Copy activity configurations](copy-activity-overview.md#configuration) and [Pipelines and activities](concepts-pipelines-activities.md). This section provides a list of properties supported by the Microsoft Fabric Lakehouse Files source and sink.
+
+### Microsoft Fabric Lakehouse Files as a source type
+
+Microsoft Fabric Lakehouse Files connector supports the following file formats. Refer to each article for format-based settings.
+
+- [Avro format](format-avro.md)
+- [Binary format](format-binary.md)
+- [Delimited text format](format-delimited-text.md)
+- [JSON format](format-json.md)
+- [ORC format](format-orc.md)
+- [Parquet format](format-parquet.md)
+
+You have several options to copy data from Microsoft Fabric Lakehouse Files:
+
+- Copy from the given path specified in the dataset.
+- Wildcard filter against folder path or file name, see `wildcardFolderPath` and `wildcardFileName`.
+- Copy the files defined in a given text file as file set, see `fileListPath`.
+
+The following properties are supported for Microsoft Fabric Lakehouse Files under `storeSettings` settings in format-based copy source:
+
+| Property | Description | Required |
+| | | |
+| type | The type property under `storeSettings` must be set to **LakehouseReadSettings**. | Yes |
+| ***Locate the files to copy:*** | | |
+| OPTION 1: static path<br> | Copy from the given file system or folder/file path specified in the dataset. If you want to copy all files from a file system/folder, additionally specify `wildcardFileName` as `*`. | |
+| OPTION 2: wildcard<br>- wildcardFolderPath | The folder path with wildcard characters under the given file system configured in dataset to filter source folders. <br>Allowed wildcards are: `*` (matches zero or more characters) and `?` (matches zero or single character); use `^` to escape if your actual folder name has wildcard or this escape char inside. <br>See more examples in [Folder and file filter examples](#folder-and-file-filter-examples). | No |
+| OPTION 2: wildcard<br>- wildcardFileName | The file name with wildcard characters under the given file system + folderPath/wildcardFolderPath to filter source files. <br>Allowed wildcards are: `*` (matches zero or more characters) and `?` (matches zero or single character); use `^` to escape if your actual file name has wildcard or this escape char inside. See more examples in [Folder and file filter examples](#folder-and-file-filter-examples). | Yes |
+| OPTION 3: a list of files<br>- fileListPath | Indicates to copy a given file set. Point to a text file that includes a list of files you want to copy, one file per line, which is the relative path to the path configured in the dataset.<br/>When using this option, do not specify file name in dataset. See more examples in [File list examples](#file-list-examples). |No |
+| ***Additional settings:*** | | |
+| recursive | Indicates whether the data is read recursively from the subfolders or only from the specified folder. Note that when recursive is set to true and the sink is a file-based store, an empty folder or subfolder isn't copied or created at the sink. <br>Allowed values are **true** (default) and **false**.<br>This property doesn't apply when you configure `fileListPath`. |No |
+| deleteFilesAfterCompletion | Indicates whether the binary files are deleted from the source store after successfully moving to the destination store. The file deletion is per file, so when the copy activity fails, you might find that some files have already been copied to the destination and deleted from the source, while others remain in the source store. <br/>This property is only valid in the binary files copy scenario. The default value is false. | No |
+| modifiedDatetimeStart | Files are filtered based on the attribute: last modified. <br>The files are selected if their last modified time is greater than or equal to `modifiedDatetimeStart` and less than `modifiedDatetimeEnd`. The time is applied to the UTC time zone in the format of "2018-12-01T05:00:00Z". <br> The properties can be NULL, which means no file attribute filter is applied to the dataset. When `modifiedDatetimeStart` has a datetime value but `modifiedDatetimeEnd` is NULL, the files whose last modified attribute is greater than or equal to the datetime value are selected. When `modifiedDatetimeEnd` has a datetime value but `modifiedDatetimeStart` is NULL, the files whose last modified attribute is less than the datetime value are selected.<br/>This property doesn't apply when you configure `fileListPath`. | No |
+| modifiedDatetimeEnd | Same as above. | No |
+| enablePartitionDiscovery | For files that are partitioned, specify whether to parse the partitions from the file path and add them as additional source columns.<br/>Allowed values are **false** (default) and **true**. | No |
+| partitionRootPath | When partition discovery is enabled, specify the absolute root path in order to read partitioned folders as data columns.<br/><br/>If it is not specified, by default,<br/>- When you use file path in dataset or list of files on source, partition root path is the path configured in dataset.<br/>- When you use wildcard folder filter, partition root path is the sub-path before the first wildcard.<br/><br/>For example, assuming you configure the path in dataset as "root/folder/year=2020/month=08/day=27":<br/>- If you specify partition root path as "root/folder/year=2020", copy activity will generate two more columns `month` and `day` with value "08" and "27" respectively, in addition to the columns inside the files.<br/>- If partition root path is not specified, no extra column will be generated. | No |
+| maxConcurrentConnections | The upper limit of concurrent connections established to the data store during the activity run. Specify a value only when you want to limit concurrent connections.| No |
+
+**Example:**
+
+```json
+"activities": [
+ {
+ "name": "CopyFromLakehouseFiles",
+ "type": "Copy",
+ "inputs": [
+ {
+ "referenceName": "<Delimited text input dataset name>",
+ "type": "DatasetReference"
+ }
+ ],
+ "outputs": [
+ {
+ "referenceName": "<output dataset name>",
+ "type": "DatasetReference"
+ }
+ ],
+ "typeProperties": {
+ "source": {
+ "type": "DelimitedTextSource",
+ "storeSettings": {
+ "type": "LakehouseReadSettings",
+ "recursive": true,
+ "enablePartitionDiscovery": false
+ },
+ "formatSettings": {
+ "type": "DelimitedTextReadSettings"
+ }
+ },
+ "sink": {
+ "type": "<sink type>"
+ }
+ }
+ }
+]
+```
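+
+The previous example reads from the static path configured in the dataset. If you use the wildcard options from the table (OPTION 2) instead, only the `storeSettings` block changes; the rest of the copy activity stays the same. A sketch with placeholder wildcard values:
+
+```json
+"source": {
+    "type": "DelimitedTextSource",
+    "storeSettings": {
+        "type": "LakehouseReadSettings",
+        "recursive": true,
+        "wildcardFolderPath": "<folder name with wildcard, for example Folder*>",
+        "wildcardFileName": "*.csv"
+    },
+    "formatSettings": {
+        "type": "DelimitedTextReadSettings"
+    }
+}
+```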
++
+### Microsoft Fabric Lakehouse Files as a sink type
+
+Microsoft Fabric Lakehouse Files connector supports the following file formats. Refer to each article for format-based settings.
+
+- [Avro format](format-avro.md)
+- [Binary format](format-binary.md)
+- [Delimited text format](format-delimited-text.md)
+- [JSON format](format-json.md)
+- [ORC format](format-orc.md)
+- [Parquet format](format-parquet.md)
+
+The following properties are supported for Microsoft Fabric Lakehouse Files under `storeSettings` settings in format-based copy sink:
+
+| Property | Description | Required |
+| | | -- |
+| type | The type property under `storeSettings` must be set to **LakehouseWriteSettings**. | Yes |
+| copyBehavior | Defines the copy behavior when the source is files from a file-based data store.<br/><br/>Allowed values are:<br/><b>- PreserveHierarchy (default)</b>: Preserves the file hierarchy in the target folder. The relative path of the source file to the source folder is identical to the relative path of the target file to the target folder.<br/><b>- FlattenHierarchy</b>: All files from the source folder are in the first level of the target folder. The target files have autogenerated names. <br/><b>- MergeFiles</b>: Merges all files from the source folder to one file. If the file name is specified, the merged file name is the specified name. Otherwise, it's an autogenerated file name. | No |
+| maxConcurrentConnections | The upper limit of concurrent connections established to the data store during the activity run. Specify a value only when you want to limit concurrent connections.| No |
+| metadata | Set custom metadata when copying to the sink. Each object under the `metadata` array represents an extra column. The `name` defines the metadata key name, and the `value` indicates the data value of that key. If the [preserve attributes feature](./copy-activity-preserve-metadata.md#preserve-metadata) is used, the specified metadata will union/overwrite with the source file metadata.<br/><br/>Allowed data values are:<br/>- `$$LASTMODIFIED`: a reserved variable that indicates to store the source files' last modified time. Applies to file-based sources with binary format only.<br/><b>- Expression</b><br/><b>- Static value</b> | No |
+
+**Example:**
+
+```json
+"activities": [
+ {
+ "name": "CopyToLakehouseFiles",
+ "type": "Copy",
+ "inputs": [
+ {
+ "referenceName": "<input dataset name>",
+ "type": "DatasetReference"
+ }
+ ],
+ "outputs": [
+ {
+ "referenceName": "<Parquet output dataset name>",
+ "type": "DatasetReference"
+ }
+ ],
+ "typeProperties": {
+ "source": {
+ "type": "<source type>"
+ },
+ "sink": {
+ "type": "ParquetSink",
+ "storeSettings": {
+ "type": "LakehouseWriteSettings",
+ "copyBehavior": "PreserveHierarchy",
+ "metadata": [
+ {
+ "name": "testKey1",
+ "value": "value1"
+ },
+ {
+ "name": "testKey2",
+ "value": "value2"
+ }
+ ]
+ },
+ "formatSettings": {
+ "type": "ParquetWriteSettings"
+ }
+ }
+ }
+ }
+]
+```
++
+### Folder and file filter examples
+
+This section describes the resulting behavior of the folder path and file name with wildcard filters.
+
+| folderPath | fileName | recursive | Source folder structure and filter result (files in **bold** are retrieved)|
+|: |: |: |: |
+| `Folder*` | (Empty, use default) | false | FolderA<br/>&nbsp;&nbsp;&nbsp;&nbsp;**File1.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;**File2.json**<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3.csv<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4.json<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5.csv<br/>AnotherFolderB<br/>&nbsp;&nbsp;&nbsp;&nbsp;File6.csv |
+| `Folder*` | (Empty, use default) | true | FolderA<br/>&nbsp;&nbsp;&nbsp;&nbsp;**File1.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;**File2.json**<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File3.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File4.json**<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File5.csv**<br/>AnotherFolderB<br/>&nbsp;&nbsp;&nbsp;&nbsp;File6.csv |
+| `Folder*` | `*.csv` | false | FolderA<br/>&nbsp;&nbsp;&nbsp;&nbsp;**File1.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2.json<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3.csv<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4.json<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5.csv<br/>AnotherFolderB<br/>&nbsp;&nbsp;&nbsp;&nbsp;File6.csv |
+| `Folder*` | `*.csv` | true | FolderA<br/>&nbsp;&nbsp;&nbsp;&nbsp;**File1.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2.json<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File3.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4.json<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File5.csv**<br/>AnotherFolderB<br/>&nbsp;&nbsp;&nbsp;&nbsp;File6.csv |
+
+### File list examples
+
+This section describes the resulting behavior of using file list path in copy activity source.
+
+Assuming you have the following source folder structure and want to copy the files in bold:
+
+| Sample source structure | Content in FileListToCopy.txt | ADF configuration |
+| | | |
+| filesystem<br/>&nbsp;&nbsp;&nbsp;&nbsp;FolderA<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File1.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File2.json<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File3.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4.json<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File5.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;Metadata<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;FileListToCopy.txt | File1.csv<br>Subfolder1/File3.csv<br>Subfolder1/File5.csv | **In dataset:**<br>- File system: `filesystem`<br>- Folder path: `FolderA`<br><br>**In copy activity source:**<br>- File list path: `filesystem/Metadata/FileListToCopy.txt` <br><br>The file list path points to a text file in the same data store that includes a list of files you want to copy, one file per line with the relative path to the path configured in the dataset. |
++
+### Some recursive and copyBehavior examples
+
+This section describes the resulting behavior of the copy operation for different combinations of recursive and copyBehavior values.
+
+| recursive | copyBehavior | Source folder structure | Resulting target |
+|: |: |: |: |
+| true |preserveHierarchy | Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5 | The target Folder1 is created with the same structure as the source:<br/><br/>Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5 |
+| true |flattenHierarchy | Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5 | The target Folder1 is created with the following structure: <br/><br/>Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;autogenerated name for File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;autogenerated name for File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;autogenerated name for File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;autogenerated name for File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;autogenerated name for File5 |
+| true |mergeFiles | Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5 | The target Folder1 is created with the following structure: <br/><br/>Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1 + File2 + File3 + File4 + File5 contents are merged into one file with an autogenerated file name. |
+| false |preserveHierarchy | Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5 | The target Folder1 is created with the following structure: <br/><br/>Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/><br/>Subfolder1 with File3, File4, and File5 isn't picked up. |
+| false |flattenHierarchy | Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5 | The target Folder1 is created with the following structure: <br/><br/>Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;autogenerated name for File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;autogenerated name for File2<br/><br/>Subfolder1 with File3, File4, and File5 isn't picked up. |
+| false |mergeFiles | Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5 | The target Folder1 is created with the following structure: <br/><br/>Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1 + File2 contents are merged into one file with an autogenerated file name.<br/><br/>Subfolder1 with File3, File4, and File5 isn't picked up. |
++
+## Next steps
+
+For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Microsoft Fabric Lakehouse Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-microsoft-fabric-lakehouse-table.md
+
+ Title: Copy data in Microsoft Fabric Lakehouse Table (Preview)
+
+description: Learn how to copy data to and from Microsoft Fabric Lakehouse Table (Preview) using Azure Data Factory or Azure Synapse Analytics pipelines.
++++++ Last updated : 09/28/2023++
+# Copy data in Microsoft Fabric Lakehouse Table (Preview) using Azure Data Factory or Azure Synapse Analytics
++
+This article outlines how to use Copy Activity to copy data from and to Microsoft Fabric Lakehouse Table (Preview). To learn more, read the introductory article for [Azure Data Factory](introduction.md) or [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md).
+
+> [!IMPORTANT]
+> This connector is currently in preview. You can try it out and give us feedback. If you want to take a dependency on preview connectors in your solution, please contact [Azure support](https://azure.microsoft.com/support/).
+
+## Supported capabilities
+
+This Microsoft Fabric Lakehouse Table connector is supported for the following capabilities:
+
+| Supported capabilities|IR | Managed private endpoint|
+|| --| --|
+|[Copy activity](copy-activity-overview.md) (source/sink)|&#9312; &#9313;|✓ |
+
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+## Get started
++
+## Create a Microsoft Fabric Lakehouse linked service using UI
+
+Use the following steps to create a Microsoft Fabric Lakehouse linked service in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then select New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
+
+2. Search for Microsoft Fabric Lakehouse and select the connector.
+
+ :::image type="content" source="media/connector-microsoft-fabric-lakehouse/microsoft-fabric-lakehouse-connector.png" alt-text="Screenshot showing select Microsoft Fabric Lakehouse connector.":::
+
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-microsoft-fabric-lakehouse/configure-microsoft-fabric-lakehouse-linked-service.png" alt-text="Screenshot of configuration for Microsoft Fabric Lakehouse linked service.":::
++
+## Connector configuration details
+
+The following sections provide details about properties that are used to define Data Factory entities specific to Microsoft Fabric Lakehouse.
+
+## Linked service properties
+
+The Microsoft Fabric Lakehouse connector supports the following authentication types. See the corresponding sections for details:
+
+- [Service principal authentication](#service-principal-authentication)
+
+### Service principal authentication
+
+To use service principal authentication, follow these steps.
+
+1. Register an application with the Microsoft Identity platform. To learn how, see [Quickstart: Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md). Make note of these values, which you use to define the linked service:
+
+ - Application ID
+ - Application key
+ - Tenant ID
+
+2. Grant the service principal at least the **Contributor** role in the Microsoft Fabric workspace. Follow these steps:
+ 1. Go to your Microsoft Fabric workspace and select **Manage access** on the top bar. Then select **Add people or groups**.
+
+ :::image type="content" source="media/connector-microsoft-fabric-lakehouse/fabric-workspace-manage-access.png" alt-text="Screenshot shows selecting Fabric workspace Manage access.":::
+
+ :::image type="content" source="media/connector-microsoft-fabric-lakehouse/manage-access-pane.png" alt-text=" Screenshot shows Fabric workspace Manage access pane.":::
+
+ 1. In the **Add people** pane, enter your service principal name, and select your service principal from the drop-down list.
+
+ 1. Specify the role as **Contributor** or higher (Admin, Member), then select **Add**.
+
+ :::image type="content" source="media/connector-microsoft-fabric-lakehouse/select-workspace-role.png" alt-text="Screenshot shows adding Fabric workspace role.":::
+
+ 1. Your service principal is displayed on **Manage access** pane.
+
+These properties are supported for the linked service:
+
+| Property | Description | Required |
+|: |: |: |
+| type | The type property must be set to **Lakehouse**. |Yes |
+| workspaceId | The Microsoft Fabric workspace ID. | Yes |
+| artifactId | The Microsoft Fabric Lakehouse object ID. | Yes |
+| tenant | Specify the tenant information (domain name or tenant ID) under which your application resides. Retrieve it by hovering the mouse in the upper-right corner of the Azure portal. | Yes |
+| servicePrincipalId | Specify the application's client ID. | Yes |
+| servicePrincipalCredentialType | The credential type to use for service principal authentication. Allowed values are **ServicePrincipalKey** and **ServicePrincipalCert**. | Yes |
+| servicePrincipalCredential | The service principal credential. <br/> When you use **ServicePrincipalKey** as the credential type, specify the application's key. Mark this field as **SecureString** to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). <br/> When you use **ServicePrincipalCert** as the credential, reference a certificate in Azure Key Vault, and ensure the certificate content type is **PKCS #12**.| Yes |
+| azureCloudType | For service principal authentication, specify the type of Azure cloud environment to which your Azure Active Directory application is registered. <br/> Allowed values are **AzurePublic**, **AzureChina**, **AzureUsGovernment**, and **AzureGermany**. By default, the data factory or Synapse pipeline's cloud environment is used. | No |
+| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use the Azure integration runtime or a self-hosted integration runtime if your data store is in a private network. If not specified, the default Azure integration runtime is used. |No |
+
+**Example: using service principal key authentication**
+
+You can also store service principal key in Azure Key Vault.
+
+```json
+{
+ "name": "MicrosoftFabricLakehouseLinkedService",
+ "properties": {
+ "type": "Lakehouse",
+ "typeProperties": {
+ "workspaceId": "<Microsoft Fabric workspace ID>",
+ "artifactId": "<Microsoft Fabric Lakehouse object ID>",
+ "tenant": "<tenant info, e.g. microsoft.onmicrosoft.com>",
+ "servicePrincipalId": "<service principal id>",
+ "servicePrincipalCredentialType": "ServicePrincipalKey",
+ "servicePrincipalCredential": {
+ "type": "SecureString",
+ "value": "<service principal key>"
+ }
+ },
+ "connectVia": {
+ "referenceName": "<name of Integration Runtime>",
+ "type": "IntegrationRuntimeReference"
+ }
+ }
+}
+```
+
+## Dataset properties
+
+For a full list of sections and properties available for defining datasets, see the [Datasets](concepts-datasets-linked-services.md) article.
+
+The following properties are supported for Microsoft Fabric Lakehouse Table dataset:
+
+| Property | Description | Required |
+| :-- | :-- | :-- |
+| type | The **type** property of the dataset must be set to **LakehouseTable**. | Yes |
+| schema | Name of the schema. |No for source. Yes for sink |
+| table | Name of the table/view. |No for source. Yes for sink |
+
+### Dataset properties example
+
+```json
+{
+    "name": "LakehouseTableDataset",
+    "properties": {
+        "type": "LakehouseTable",
+        "linkedServiceName": {
+            "referenceName": "<Microsoft Fabric Lakehouse linked service name>",
+            "type": "LinkedServiceReference"
+        },
+        "typeProperties": {
+ "table": "<table_name>"
+        },
+        "schema": [< physical schema, optional, retrievable during authoring >]
+    }
+}
+```
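+
+For a sink dataset, the properties table above requires both `schema` and `table`. A minimal sketch follows, assuming `schema` sits alongside `table` under `typeProperties` as it does for other table-style datasets; the dataset name and placeholder values are illustrative:
+
+```json
+{
+    "name": "LakehouseTableSinkDataset",
+    "properties": {
+        "type": "LakehouseTable",
+        "linkedServiceName": {
+            "referenceName": "<Microsoft Fabric Lakehouse linked service name>",
+            "type": "LinkedServiceReference"
+        },
+        "typeProperties": {
+            "schema": "<schema_name>",
+            "table": "<table_name>"
+        }
+    }
+}
+```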
+
+## Copy activity properties
+
+For a full list of sections and properties available for defining activities, see [Copy activity configurations](copy-activity-overview.md#configuration) and [Pipelines and activities](concepts-pipelines-activities.md). This section provides a list of properties supported by the Microsoft Fabric Lakehouse Table source and sink.
+
+### Microsoft Fabric Lakehouse Table as a source type
+
+To copy data from Microsoft Fabric Lakehouse Table, set the **type** property in the Copy Activity source to **LakehouseTableSource**. The following properties are supported in the Copy Activity **source** section:
+
+| Property | Description | Required |
+| :-- | :-- | :-- |
+| type | The **type** property of the Copy Activity source must be set to **LakehouseTableSource**. | Yes |
+| timestampAsOf | The timestamp to query an older snapshot. | No |
+| versionAsOf | The version to query an older snapshot. | No |
+
+**Example: Microsoft Fabric Lakehouse Table source**
+
+```json
+"activities":[
+ {
+ "name": "CopyFromLakehouseTable",
+ "type": "Copy",
+ "inputs": [
+ {
+ "referenceName": "<Microsoft Fabric Lakehouse Table input dataset name>",
+ "type": "DatasetReference"
+ }
+ ],
+ "outputs": [
+ {
+ "referenceName": "<output dataset name>",
+ "type": "DatasetReference"
+ }
+ ],
+ "typeProperties": {
+ "source": {
+ "type": "LakehouseTableSource",
+ "timestampAsOf": "2023-09-23T00:00:00.000Z",
+ "versionAsOf": 2
+ },
+ "sink": {
+ "type": "<sink type>"
+ }
+ }
+ }
+]
+```
+
+### Microsoft Fabric Lakehouse Table as a sink type
+
+To copy data to Microsoft Fabric Lakehouse Table, set the **type** property in the Copy Activity sink to **LakehouseTableSink**. The following properties are supported in the Copy activity **sink** section:
+
+| Property | Description | Required |
+| :-- | :-- | :-- |
+| type | The **type** property of the Copy Activity sink must be set to **LakehouseTableSink**. | Yes |
+| tableActionOption | The way to write data to the sink table. Allowed values are `Append` and `Overwrite`. | No |
+| partitionOption | Allowed values are `None` and `PartitionByKey`. When the value is `PartitionByKey`, partitions are created in the folder structure based on one or more columns. Each distinct column value (pair) becomes a new partition (for example, year=2000/month=01/file). This option supports insert-only mode and requires an empty directory in the sink. | No |
+| partitionNameList | The list of partition column names, taken from the destination columns in the schema mapping. Supported data types are string, integer, boolean, and datetime. Format respects the type conversion settings under the "Mapping" tab. | No |
+
+**Example: Microsoft Fabric Lakehouse Table sink**
+
+```json
+"activities":[
+ {
+ "name": "CopyToLakehouseTable",
+ "type": "Copy",
+ "inputs": [
+ {
+ "referenceName": "<input dataset name>",
+ "type": "DatasetReference"
+ }
+ ],
+ "outputs": [
+ {
+ "referenceName": "<Microsoft Fabric Lakehouse Table output dataset name>",
+ "type": "DatasetReference"
+ }
+ ],
+ "typeProperties": {
+ "source": {
+ "type": "<source type>"
+ },
+ "sink": {
+ "type": "LakehouseTableSink",
+ "tableActionOption ": "Append"
+ }
+ }
+ }
+]
+```
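+
+The sink example above sets only `tableActionOption`. If you also partition the output by key, the **sink** section might look like the following sketch; the partition column names here are hypothetical and must match destination columns in your schema mapping:
+
+```json
+"sink": {
+    "type": "LakehouseTableSink",
+    "tableActionOption": "Append",
+    "partitionOption": "PartitionByKey",
+    "partitionNameList": [
+        "year",
+        "month"
+    ]
+}
+```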
+
+## Next steps
+
+For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Monitor Ssis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-ssis.md
To send all metrics and logs generated from SSIS IR operations and SSIS package
SSIS operational [metrics](../azure-monitor/essentials/data-platform-metrics.md) are performance counters or numerical values that describe the status of SSIS IR start and stop operations, as well as SSIS package executions at a particular point in time. They're part of [ADF metrics in Azure Monitor](monitor-metrics-alerts.md).
-When you configure diagnostic settings and workspace for your ADF on Azure Monitor, selecting the _AllMetrics_ check box will make SSIS operational metrics available for [interactive analysis using Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md), [presentation on Azure dashboard](../azure-monitor/app/overview-dashboard.md#create-custom-kpi-dashboards-using-application-insights), and [near-real time alerts](../azure-monitor/alerts/alerts-metric.md).
+When you configure diagnostic settings and workspace for your ADF on Azure Monitor, selecting the _AllMetrics_ check box will make SSIS operational metrics available for [interactive analysis using Azure metrics explorer](../azure-monitor/essentials/analyze-metrics.md), [presentation on Azure dashboard](../azure-monitor/app/tutorial-app-dashboards.md), and [near-real time alerts](../azure-monitor/alerts/alerts-metric.md).
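+
+As a rough illustration, selecting the _AllMetrics_ check box corresponds to a diagnostic settings payload along the lines of the following sketch; the resource IDs are placeholders, and a `logs` section is typically added alongside `metrics` as needed:
+
+```json
+{
+    "properties": {
+        "workspaceId": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>",
+        "metrics": [
+            {
+                "category": "AllMetrics",
+                "enabled": true
+            }
+        ]
+    }
+}
+```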
:::image type="content" source="media/data-factory-monitor-oms/monitor-oms-image2.png" alt-text="Name your settings and select a log-analytics workspace":::
data-manager-for-agri Concepts Ingest Weather Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-ingest-weather-data.md
Run the install command through Azure Resource Manager ARM Client tool. The comm
```azurepowershell-interactive armclient PUT /subscriptions/<subscriptionid>/resourceGroups/<resource-group-name>/providers/Microsoft.AgFoodPlatform/farmBeats/<farmbeats-resource-name>/extensions/<extensionid>?api-version=2020-05-12-preview '{}' ```
-For more information, see API documentation [here](/rest/api/data-manager-for-agri).
- > [!NOTE] > All values within < > is to be replaced with your respective environment values. >
armclient put /subscriptions/<subscriptionid>/resourceGroups/<resource-group-nam
## Step 2: Fetch weather data
-Once the credentials required to access the APIs is obtained, you need to call the fetch weather data API [here](/rest/api/data-manager-for-agri/dataplane-version2022-11-01-preview/weather) to fetch weather data.
+Once the credentials required to access the APIs are obtained, call the fetch weather data API [here](/rest/api/data-manager-for-agri/dataplane-version2022-11-01-preview/weather-data) to fetch weather data.
data-manager-for-agri How To Set Up Isv Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-set-up-isv-solution.md
Follow the following guidelines to install and use an ISV solution.
## Install an ISV solution
-1. Once you've installed an instance of Azure Data Manager for Agriculture from Azure portal, navigate to Settings ->Solutions tab on the left hand side in your instance.
+1. Once you've installed an instance of Azure Data Manager for Agriculture from the Azure portal, navigate to the Settings -> Solutions tab on the left-hand side in your instance. Ensure that you have application admin permission.
2. Click on **Add** to view the list of Solutions available for installation. Select the solution of your choice and click on **Add** button against it. > [!NOTE] >
databox-online Azure Stack Edge Create Vm With Custom Size https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-create-vm-with-custom-size.md
Title: Create a VM image for Azure Stack Edge with custom number of cores, memory, and GPU count.
-description: Learn how to create a VM image of custom size for Azure Stack Edge.
+ Title: Update a VM for Azure Stack Edge with custom number of cores, memory, and GPU count.
+description: Learn how to update a VM with custom size for Azure Stack Edge.
Previously updated : 09/07/2023 Last updated : 10/30/2023
-# Customer intent: As an IT admin, I need to understand how to create VM images with custom number of cores, memory, and GPU count.
+# Customer intent: As an IT admin, I need to understand how to update a VM with custom number of cores, memory, and GPU count.
-# Create a VM image with custom size
+# Update custom VM size
[!INCLUDE [applies-to-GPU-and-pro-r-and-mini-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-r-mini-r-sku.md)]
-This article describes how to create a VM image for Azure Stack edge with a custom number of cores, memory, and GPU count.
+This article describes how to modify a VM size with a custom number of cores, memory, and GPU count, which you can then use to create a VM image for Azure Stack Edge.
-If the standard VM sizes for Azure Stack Edge do not meet your needs, you can configure a standard VM size with custom number of cores, memory, and GPU count.
+## Get existing custom VM sizes
-## Create a new VM
-
-Use the following steps to create a new VM for Azure Stack Edge.
+Use the following steps to get custom VM sizes for Azure Stack Edge.
1. Connect to the PowerShell interface of your Azure Stack Edge device. For detailed steps, see [Connect to the PowerShell interface](azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface).
-1. Run the following command to see available VM sizes on your device, including the custom sizes:
+1. Run the following command to see available VM sizes on your device, including custom sizes:
```azurepowershell Get-AzVmSize -Location dbelocal
Use the following steps to create a new VM for Azure Stack Edge.
[{'Name':'Custom_NonGPU','Cores':8,'MemoryMb':14336},{'Name':'Custom_GPU_A2','Cores':8,'MemoryMb':28672,'GpuCount': 1}] ```
-## Update an existing VM
+## Update custom VM size
-1. Run the following command to update the `Cores` or `MemoryMb` values for a VM you deploy to your device.
+1. Run the following command to update the custom VM size with the `Cores` or `MemoryMb` values for a VM you deploy to your device.
Consider the following requirements and restrictions: - The `Name` for these sizes cannot be modified.
Use the following steps to create a new VM for Azure Stack Edge.
Get-AzVmSize -Location dbelocal ```
- In Azure portal, the VM size dropdown will update after five minutes with the new VM options you just created.
+ In Azure portal, the VM size dropdown will update in about five minutes with the new VM options you just created.
[![Screenshot of Azure portal dropdown menu with custom VM size.](./media/azure-stack-edge-create-vm-with-custom-size/azure-stack-edge-custom-vm-size.png)](./media/azure-stack-edge-create-vm-with-custom-size/azure-stack-edge-custom-vm-size.png#lightbox) ## Next steps
+ - [Create a VM](azure-stack-edge-gpu-virtual-machine-overview.md#create-a-vm).
defender-for-cloud Adaptive Network Hardening https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/adaptive-network-hardening.md
For example, let's say the existing NSG rule is to allow traffic from 140.20.30.
## Modify a rule <a name ="modify-rule"> </a>
-You may want to modify the parameters of a rule that has been recommended. For example, you may want to change the recommended IP ranges.
+You might want to modify the parameters of a rule that has been recommended. For example, you might want to change the recommended IP ranges.
Some important guidelines for modifying an adaptive network hardening rule:
To add an adaptive network hardening rule:
## Delete a rule <a name ="delete-rule"> </a>
-When necessary, you can delete a recommended rule for the current session. For example, you may determine that applying a suggested rule could block legitimate traffic.
+When necessary, you can delete a recommended rule for the current session. For example, you might determine that applying a suggested rule could block legitimate traffic.
To delete an adaptive network hardening rule for your current session:
defender-for-cloud Agentless Container Registry Vulnerability Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/agentless-container-registry-vulnerability-assessment.md
Container vulnerability assessment powered by MDVM (Microsoft Defender Vulnerabi
The triggers for an image scan are: - **One-time triggering**:
- - each image pushed or imported to a container registry is scanned after being pushed or imported to a registry. In most cases, the scan is completed within a few minutes, but sometimes it may take up to an hour.
+ - Each image pushed or imported to a container registry is scanned after the push or import completes. In most cases, the scan is completed within a few minutes, but sometimes it might take up to an hour.
- [Preview] each image pulled from a registry is triggered to be scanned within 24 hours. > [!NOTE]
A detailed description of the scan process is described as follows:
## If I remove an image from my registry, how long before vulnerabilities reports on that image would be removed?
-Azure Container Registries notifies Defender for Cloud when images are deleted, and removes the vulnerability assessment for deleted images within one hour. In some rare cases, Defender for Cloud may not be notified on the deletion, and deletion of associated vulnerabilities in such cases may take up to three days.
+Azure Container Registries notifies Defender for Cloud when images are deleted, and removes the vulnerability assessment for deleted images within one hour. In some rare cases, Defender for Cloud might not be notified on the deletion, and deletion of associated vulnerabilities in such cases might take up to three days.
## Next steps
defender-for-cloud Alert Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alert-validation.md
You can simulate alerts for resources running on [App Service](/azure/app-servic
:::image type="content" source="media/alert-validation/storage-atp-navigate-container.png" alt-text="Screenshot showing where to navigate to select a container." lightbox="media/alert-validation/storage-atp-navigate-container.png"::: 1. Navigate to an existing container or create a new one.
-1. Upload a file to that container. Avoid uploading any file that may contain sensitive data.
+1. Upload a file to that container. Avoid uploading any file that might contain sensitive data.
:::image type="content" source="media/alert-validation/storage-atp-upload-image.png" alt-text="Screenshot showing where to upload a file to the container." lightbox="media/alert-validation/storage-atp-upload-image.png":::
defender-for-cloud Concept Attack Path https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-attack-path.md
Defender for Cloud then uses the generated graph to perform an attack path analy
## What is attack path analysis?
-Attack path analysis is a graph-based algorithm that scans the cloud security graph. The scans expose exploitable paths that attackers may use to breach your environment to reach your high-impact assets. Attack path analysis exposes attack paths and suggests recommendations as to how best remediate issues that will break the attack path and prevent successful breach.
+Attack path analysis is a graph-based algorithm that scans the cloud security graph. The scans expose exploitable paths that attackers might use to breach your environment to reach your high-impact assets. Attack path analysis exposes attack paths and suggests recommendations as to how best remediate issues that will break the attack path and prevent successful breach.
-When you take your environment's contextual information into account, attack path analysis identifies issues that may lead to a breach on your environment, and helps you to remediate the highest risk ones first. For example its exposure to the internet, permissions, lateral movement, and more.
+When you take your environment's contextual information into account, attack path analysis identifies issues that might lead to a breach on your environment, and helps you to remediate the highest risk ones first. For example its exposure to the internet, permissions, lateral movement, and more.
:::image type="content" source="media/concept-cloud-map/attack-path.png" alt-text="Image that shows a sample attack path from attacker to your sensitive data.":::
defender-for-cloud Concept Defender For Cosmos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-defender-for-cosmos.md
Alerts include details of the incident that triggered them, and recommendations
Threat intelligence security alerts are triggered for: - **Potential SQL injection attacks**: <br>
- Due to the structure and capabilities of Azure Cosmos DB queries, many known SQL injection attacks canΓÇÖt work in Azure Cosmos DB. However, there are some variations of SQL injections that can succeed and may result in exfiltrating data from your Azure Cosmos DB accounts. Defender for Azure Cosmos DB detects both successful and failed attempts, and helps you harden your environment to prevent these threats.
+ Due to the structure and capabilities of Azure Cosmos DB queries, many known SQL injection attacks can't work in Azure Cosmos DB. However, there are some variations of SQL injections that can succeed and might result in exfiltrating data from your Azure Cosmos DB accounts. Defender for Azure Cosmos DB detects both successful and failed attempts, and helps you harden your environment to prevent these threats.
- **Anomalous database access patterns**: <br> For example, access from a TOR exit node, known suspicious IP addresses, unusual applications, and unusual locations.
defender-for-cloud Data Security Posture Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/data-security-posture-enable.md
Follow these steps to enable data-aware security posture. Don't forget to review
- Don't forget to: [review the requirements](concept-data-security-posture-prepare.md#discovery) for AWS discovery, and [required permissions](concept-data-security-posture-prepare.md#whats-supported). - Check that there's no policy that blocks the connection to your Amazon S3 buckets.-- For RDS instances: cross-account KMS encryption is supported, but additional policies on KMS access may prevent access.
+- For RDS instances: cross-account KMS encryption is supported, but additional policies on KMS access might prevent access.
### Enable for AWS resources
defender-for-cloud Data Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/data-security.md
Defender for Cloud analyzes data from the following sources to provide visibilit
## Data sharing
-When you enable Defender for Storage Malware Scanning, it may share metadata, including metadata classified as customer data (e.g. SHA-256 hash), with Microsoft Defender for Endpoint.
+When you enable Defender for Storage Malware Scanning, it might share metadata, including metadata classified as customer data (e.g. SHA-256 hash), with Microsoft Defender for Endpoint.
## Data protection
Data is kept logically separate on each component throughout the service. All da
### Data access
-To provide security recommendations and investigate potential security threats, Microsoft personnel may access information collected or analyzed by Azure services, including process creation events, and other artifacts, which may unintentionally include customer data or personal data from your machines.
+To provide security recommendations and investigate potential security threats, Microsoft personnel might access information collected or analyzed by Azure services, including process creation events, and other artifacts, which might unintentionally include customer data or personal data from your machines.
We adhere to the [Microsoft Online Services Data Protection Addendum](https://www.microsoftvolumelicensing.com/Downloader.aspx?DocumentId=17880), which states that Microsoft won't use Customer Data or derive information from it for any advertising or similar commercial purposes. We only use Customer Data as needed to provide you with Azure services, including purposes compatible with providing those services. You retain all rights to Customer Data.
defender-for-cloud Defender For Cloud Planning And Operations Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-cloud-planning-and-operations-guide.md
In the next section, you'll learn how to plan for each one of those areas and ap
## Security roles and access controls
-Depending on the size and structure of your organization, multiple individuals and teams may use Defender for Cloud to perform different security-related tasks. In the following diagram, you have an example of fictitious personas and their respective roles and security responsibilities:
+Depending on the size and structure of your organization, multiple individuals and teams might use Defender for Cloud to perform different security-related tasks. In the following diagram, you have an example of fictitious personas and their respective roles and security responsibilities:
:::image type="content" source="./media/defender-for-cloud-planning-and-operations-guide/defender-for-cloud-planning-and-operations-guide-fig01-new.png" alt-text="Roles.":::
The personas explained in the previous diagram need these Azure Role-based acces
- Subscription Owner/Contributor required to dismiss alerts. -- Access to the workspace may be required.
+- Access to the workspace might be required.
Some other important information to consider:
Defender for Cloud uses the Log Analytics agent and the Azure Monitor Agent to c
When automatic provisioning is enabled in the security policy, the [data collection agent](monitoring-components.md) is installed on all supported Azure VMs and any new supported VMs that are created. If the VM or computer already has the Log Analytics agent installed, Defender for Cloud uses the current installed agent. The agent's process is designed to be non-invasive and have minimal effect on VM performance.
-If at some point you want to disable Data Collection, you can turn it off in the security policy. However, because the Log Analytics agent may be used by other Azure management and monitoring services, the agent won't be uninstalled automatically when you turn off data collection in Defender for Cloud. You can manually uninstall the agent if needed.
+If at some point you want to disable Data Collection, you can turn it off in the security policy. However, because the Log Analytics agent might be used by other Azure management and monitoring services, the agent won't be uninstalled automatically when you turn off data collection in Defender for Cloud. You can manually uninstall the agent if needed.
> [!NOTE] > To find a list of supported VMs, read the [Defender for Cloud common questions](faq-vms.yml).
The following example shows a suspicious RDP activity taking place:
:::image type="content" source="./media/defender-for-cloud-planning-and-operations-guide/defender-for-cloud-planning-and-operations-guide-fig5-ga.png" alt-text="Suspicious activity.":::
-This page shows the details regarding the time that the attack took place, the source hostname, the target VM and also gives recommendation steps. In some circumstances, the source information of the attack may be empty. Read [Missing Source Information in Defender for Cloud alerts](/archive/blogs/azuresecurity/missing-source-information-in-azure-security-center-alerts) for more information about this type of behavior.
+This page shows the details regarding the time that the attack took place, the source hostname, the target VM and also gives recommendation steps. In some circumstances, the source information of the attack might be empty. Read [Missing Source Information in Defender for Cloud alerts](/archive/blogs/azuresecurity/missing-source-information-in-azure-security-center-alerts) for more information about this type of behavior.
Once you identify the compromised system, you can run a [workflow automation](workflow-automation.md) that was previously created. Workflow automations are a collection of procedures that can be executed from Defender for Cloud once triggered by an alert.
defender-for-cloud Defender For Container Registries Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-container-registries-introduction.md
Yes. If you have an organizational need to ignore a finding, rather than remedia
### Why is Defender for Cloud alerting me to vulnerabilities about an image that isnΓÇÖt in my registry?
-Defender for Cloud provides vulnerability assessments for every image pushed or pulled in a registry. Some images may reuse tags from an image that was already scanned. For example, you may reassign the tag ΓÇ£LatestΓÇ¥ every time you add an image to a digest. In such cases, the ΓÇÿoldΓÇÖ image does still exist in the registry and may still be pulled by its digest. If the image has security findings and is pulled, it'll expose security vulnerabilities.
+Defender for Cloud provides vulnerability assessments for every image pushed or pulled in a registry. Some images might reuse tags from an image that was already scanned. For example, you might reassign the tag "Latest" every time you add an image to a digest. In such cases, the 'old' image does still exist in the registry and might still be pulled by its digest. If the image has security findings and is pulled, it'll expose security vulnerabilities.
## Next steps
defender-for-cloud Defender For Containers Vulnerability Assessment Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-vulnerability-assessment-azure.md
Container vulnerability assessment powered by Qualys has the following capabilit
## Scan triggers - **One-time triggering**
- - Each image pushed/imported to a container registry is scanned shortly after being pushed to a registry. In most cases, the scan is completed within a few minutes, but sometimes it may take up to an hour.
+ - Each image pushed/imported to a container registry is scanned shortly after being pushed to a registry. In most cases, the scan is completed within a few minutes, but sometimes it might take up to an hour.
- Each image pulled from a container registry is scanned if it wasn't scanned in the last seven days. - **Continuous rescan triggering** ΓÇô Continuous rescan is required to ensure images that have been previously scanned for vulnerabilities are rescanned to update their vulnerability reports in case a new vulnerability is published. - **Rescan** is performed once every 7 days for:
defender-for-cloud Defender For Devops Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-devops-introduction.md
Defender for DevOps uses a central console to empower security teams with the ab
Defender for DevOps helps unify, strengthen and manage multi-pipeline DevOps security. ## Availability+ | Aspect | Details | |--|--|
-| Release state: | Preview<br>The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. |
+| Release state: | Preview<br>The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. |
| Clouds | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Microsoft Azure operated by 21Vianet) | | Regions: | Australia East, Central US, West Europe | | Source Code Management Systems | [Azure DevOps](https://portal.azure.com/#home) <br>[GitHub](https://github.com/) supported versions: GitHub Free, Pro, Team, and GitHub Enterprise Cloud |
Defender for DevOps helps unify, strengthen and manage multi-pipeline DevOps sec
## Manage your DevOps environments in Defender for Cloud
-Defender for DevOps allows you to manage your connected environments and provides your security teams with a high level overview of discovered issues that may exist within them through the [Defender for DevOps console](https://portal.azure.com/#view/Microsoft_Azure_Security/SecurityMenuBlade/~/DevOpsSecurity).
+Defender for DevOps allows you to manage your connected environments and provides your security teams with a high level overview of discovered issues that might exist within them through the [Defender for DevOps console](https://portal.azure.com/#view/Microsoft_Azure_Security/SecurityMenuBlade/~/DevOpsSecurity).
:::image type="content" source="media/defender-for-devops-introduction/devops-dashboard.png" alt-text="Screenshot of the Defender for DevOps dashboard." lightbox="media/defender-for-devops-introduction/devops-dashboard.png":::
defender-for-cloud Defender For Resource Manager Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-resource-manager-usage.md
To investigate security alerts from Defender for Resource
## Step 3: Immediate mitigation 1. Remediate compromised user accounts:
- - If theyΓÇÖre unfamiliar, delete them as they may have been created by a threat actor
+ - If they're unfamiliar, delete them as they might have been created by a threat actor
- If theyΓÇÖre familiar, change their authentication credentials - Use Azure Activity Logs to review all activities performed by the user and identify any that are suspicious
defender-for-cloud Defender For Sql Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-sql-introduction.md
# Overview of Microsoft Defender for Azure SQL
-Microsoft Defender for Azure SQL helps you discover and mitigate potential [database vulnerabilities](sql-azure-vulnerability-assessment-overview.md) and alerts you to [anomalous activities](#advanced-threat-protection) that may be an indication of a threat to your databases.
+Microsoft Defender for Azure SQL helps you discover and mitigate potential [database vulnerabilities](sql-azure-vulnerability-assessment-overview.md) and alerts you to [anomalous activities](#advanced-threat-protection) that might be an indication of a threat to your databases.
- [Vulnerability assessment](#discover-and-mitigate-vulnerabilities): Scan databases to discover, track, and remediate vulnerabilities. Learn more about [vulnerability assessment](sql-azure-vulnerability-assessment-overview.md). - [Threat protection](#advanced-threat-protection): Receive detailed security alerts and recommended actions based on SQL Advanced Threat Protection to provide to mitigate threats. Learn more about [SQL Advanced Threat Protection](/azure/azure-sql/database/threat-detection-overview).
defender-for-cloud Defender For Sql On Machines Vulnerability Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-sql-on-machines-vulnerability-assessment.md
Metadata information about the connected machine is also collected. Specifically
- UUID (BIOS ID) - SQL server name and underlying database names
-You can specify the region where your SQL Vulnerability Assessment data will be stored by choosing the Log Analytics workspace location. Microsoft may replicate to other regions for data resiliency, but Microsoft does not replicate data outside the geography.
+You can specify the region where your SQL Vulnerability Assessment data will be stored by choosing the Log Analytics workspace location. Microsoft might replicate to other regions for data resiliency, but Microsoft does not replicate data outside the geography.
## Next steps
defender-for-cloud Defender For Storage Classic Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-classic-migrate.md
The new plan includes advanced security capabilities to help protect against mal
The new plan also provides a more predictable and flexible pricing structure for better control over coverage and costs.
-The new pricing plan charges based on the number of storage accounts you protect, which simplifies cost calculations and allows for easy scaling as your needs change. You can enable it at the subscription or resource level and can also exclude specific storage accounts from protected subscriptions, providing more granular control over your security coverage. Extra charges may apply to storage accounts with high-volume transactions that exceed a high monthly threshold.
+The new pricing plan charges based on the number of storage accounts you protect, which simplifies cost calculations and allows for easy scaling as your needs change. You can enable it at the subscription or resource level and can also exclude specific storage accounts from protected subscriptions, providing more granular control over your security coverage. Extra charges might apply to storage accounts with high-volume transactions that exceed a high monthly threshold.
## Deprecation of Defender for Storage (classic)
Storage accounts that were previously excluded from protected subscriptions in t
### Migrating from the classic Defender for Storage plan enabled with per-storage account pricing
-If the classic Defender for Storage plan is enabled with per-storage account pricing, you can switch to the new plan at either the subscription or resource level. The new Defender for Storage plan has the same pricing plan with the exception of malware scanning which may incur extra charges and is billed per GB scanned.
+If the classic Defender for Storage plan is enabled with per-storage account pricing, you can switch to the new plan at either the subscription or resource level. The new Defender for Storage plan has the same pricing plan with the exception of malware scanning which might incur extra charges and is billed per GB scanned.
You can learn more about Defender for Storage's pricing model on the [Defender for Cloud pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/?v=17.23h).
If you're looking to quickly identify which pricing plans are active on your sub
To help you better understand the differences between the classic plan and the new plan, here's a comparison table:
-| Category | New Defender for Storage plan | Classic (per-transaction plan) | Classic (per-storage account plan) |
+| Category | New Defender for Storage plan | Classic (per-transaction plan) | Classic (per-storage account plan) |
| | | | |
-| Pricing structure | Cost is based on the number of storage accounts you protect\*. Add-on costs for GB scanned for malware, if enabled| Cost is based on the number of transactions processed | Cost is based on the number of storage accounts you protect* |
-| Enablement options | Subscription and resource level | Subscription and resource level | Subscription only |
-| Exclusion of storage accounts from protected subscriptions | Yes | Yes | No |
-| Activity monitoring (security alerts) | Yes | Yes | Yes |
-| Malware scanning in uploaded Blobs | Yes (add-on) | No (only hash-reputation analysis) | No (only hash-reputation analysis) |
-| Sensitive data threat detection | Yes (add-on) | No | No |
-| Detection of leaked/compromised SAS tokens (entities without identities) | Yes | No | No |
-
-\* extra charges may apply to storage accounts with high-volume transactions.
+| Pricing structure | Cost is based on the number of storage accounts you protect\*. Add-on costs for GB scanned for malware, if enabled| Cost is based on the number of transactions processed | Cost is based on the number of storage accounts you protect* |
+| Enablement options | Subscription and resource level | Subscription and resource level | Subscription only |
+| Exclusion of storage accounts from protected subscriptions | Yes | Yes | No |
+| Activity monitoring (security alerts) | Yes | Yes | Yes |
+| Malware scanning in uploaded Blobs | Yes (add-on) | No (only hash-reputation analysis) | No (only hash-reputation analysis) |
+| Sensitive data threat detection | Yes (add-on) | No | No |
+| Detection of leaked/compromised SAS tokens (entities without identities) | Yes | No | No |
+
+\* extra charges might apply to storage accounts with high-volume transactions.
The new plan offers a more comprehensive feature set designed to better protect your data. It also provides a more predictable pricing plan compared to the classic plan. We recommend you migrate to the new plan to take full advantage of its benefits.
defender-for-cloud Defender For Storage Configure Malware Scan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-configure-malware-scan.md
You can use code or workflow automation to delete or move malicious files to qua
- **Delete the malicious file** - Before setting up automated deletion, enabling [soft delete](../storage/blobs/soft-delete-blob-overview.md) on the storage account is recommended. It allows to ΓÇ£undeleteΓÇ¥ files if there are false positives or in cases where security professionals want to investigate the malicious files. - **Move the malicious file to quarantine** - You can move files to a dedicated storage container or storage account that are considered as ΓÇ£quarantineΓÇ¥.
-You may want only certain users, such as a security admin or a SOC analyst, to have permission to access this dedicated container or storage account.
+You might want only certain users, such as a security admin or a SOC analyst, to have permission to access this dedicated container or storage account.
- - Using [Microsoft Entra ID to control access to blob storage](../storage/blobs/authorize-access-azure-active-directory.md) is considered a best practice. To control access to the dedicated quarantine storage container, you can use [container-level role assignments using Microsoft Entra role-based access control (RBAC)](../storage/blobs/authorize-access-azure-active-directory.md). Users with storage account-level permissions may still be able to access the ΓÇ£quarantineΓÇ¥ container. You can either edit their permissions to be container-level or choose a different approach and move the malicious file to a dedicated storage account.
+ - Using [Microsoft Entra ID to control access to blob storage](../storage/blobs/authorize-access-azure-active-directory.md) is considered a best practice. To control access to the dedicated quarantine storage container, you can use [container-level role assignments using Microsoft Entra role-based access control (RBAC)](../storage/blobs/authorize-access-azure-active-directory.md). Users with storage account-level permissions might still be able to access the "quarantine" container. You can either edit their permissions to be container-level or choose a different approach and move the malicious file to a dedicated storage account.
- If you must use other methods, such as [SAS (shared access signatures)](../storage/common/storage-sas-overview.md) tokens on the protected storage account, it's best practice to move malicious files to another storage account (quarantine). Then, it's best only to grant Microsoft Entra permission to access the quarantined storage account. ### Set up automation
defender-for-cloud Defender For Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-introduction.md
Defender for Storage provides the following:
- **Improved threat detection and protection of sensitive data**: The sensitive data threat detection capability enables security professionals to efficiently prioritize and examine security alerts by considering the sensitivity of the data that could be at risk, leading to better detection and protection against potential threats. By quickly identifying and addressing the most significant risks, this capability lowers the likelihood of data breaches and enhances sensitive data protection by detecting exposure events and suspicious activities on resources containing sensitive data. Learn more about [sensitive data threat detection](defender-for-storage-data-sensitivity.md). -- **Detection of entities without identities**: Defender for Storage detects suspicious activities generated by entities without identities that access your data using misconfigured and overly permissive Shared Access Signatures (SAS tokens) that may have leaked or compromised so that you can improve the security hygiene and reduce the risk of unauthorized access. This capability is an expansion of the Activity Monitoring security alerts suite.
+- **Detection of entities without identities**: Defender for Storage detects suspicious activities generated by entities without identities that access your data using misconfigured and overly permissive Shared Access Signatures (SAS tokens) that might have leaked or compromised so that you can improve the security hygiene and reduce the risk of unauthorized access. This capability is an expansion of the Activity Monitoring security alerts suite.
- **Coverage of the top cloud storage threats**: Powered by Microsoft Threat Intelligence, behavioral models, and machine learning models to detect unusual and suspicious activities. The Defender for Storage security alerts covers the top cloud storage threats, such as sensitive data exfiltration, data corruption, and malicious file uploads.
Defender for Storage provides the following:
### Activity monitoring
-Defender for Storage continuously analyzes data and control plane logs from protected storage accounts when enabled. There's no need to turn on resource logs for security benefits. Use Microsoft Threat Intelligence to identify suspicious signatures such as malicious IP addresses, Tor exit nodes, and potentially dangerous apps. It also builds data models and uses statistical and machine-learning methods to spot baseline activity anomalies, which may indicate malicious behavior. You receive security alerts for suspicious activities, but Defender for Storage ensures you won't get too many similar alerts. Activity monitoring won't affect performance, ingestion capacity, or access to your data.
+Defender for Storage continuously analyzes data and control plane logs from protected storage accounts when enabled. There's no need to turn on resource logs for security benefits. Use Microsoft Threat Intelligence to identify suspicious signatures such as malicious IP addresses, Tor exit nodes, and potentially dangerous apps. It also builds data models and uses statistical and machine-learning methods to spot baseline activity anomalies, which might indicate malicious behavior. You receive security alerts for suspicious activities, but Defender for Storage ensures you won't get too many similar alerts. Activity monitoring won't affect performance, ingestion capacity, or access to your data.
:::image type="content" source="media/defender-for-storage-introduction/activity-monitoring.png" alt-text="Diagram showing how activity monitoring identifies threats to your data.":::
For more details, visit [Sensitive data threat detection](defender-for-storage-d
### Per storage account pricing
-The new Microsoft Defender for Storage plan has predictable pricing based on the number of storage accounts you protect. With the option to enable at the subscription or resource level and exclude specific storage accounts from protected subscriptions, you have increased flexibility to manage your security coverage. The pricing plan simplifies the cost calculation process, allowing you to scale easily as your needs change. Other charges may apply to storage accounts with high-volume transactions.
+The new Microsoft Defender for Storage plan has predictable pricing based on the number of storage accounts you protect. With the option to enable at the subscription or resource level and exclude specific storage accounts from protected subscriptions, you have increased flexibility to manage your security coverage. The pricing plan simplifies the cost calculation process, allowing you to scale easily as your needs change. Other charges might apply to storage accounts with high-volume transactions.
### Malware Scanning - Billing per GB, monthly capping, and configuration
defender-for-cloud Defender For Storage Threats Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-threats-alerts.md
These threats can result in malware uploads, data corruption, and sensitive data
:::image type="content" source="media/defender-for-storage-threats-alerts/malware-risks.png" alt-text="Diagram showing common risks to data that can result from malware.":::
-In addition to security threats, configuration errors may inadvertently expose sensitive resources. Some common misconfiguration issues include:
+In addition to security threats, configuration errors might inadvertently expose sensitive resources. Some common misconfiguration issues include:
- Inadequate access controls and networking rules, leading to unintended data exposure on the internet - Insufficient authentication mechanisms
defender-for-cloud Detect Exposed Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/detect-exposed-secrets.md
Title: Detect exposed secrets in code
-description: Prevent passwords and other secrets that may be stored in your code from being accessed by outside individuals by using Defender for Cloud's secret scanning for Defender for DevOps.
+description: Prevent passwords and other secrets that might be stored in your code from being accessed by outside individuals by using Defender for Cloud's secret scanning for Defender for DevOps.
Last updated 01/31/2023
If your Azure service is listed, you can [manage your identities for Azure resou
## Suppress false positives
-When the scanner runs, it may detect credentials that are false positives. Inline-suppression tools can be used to suppress false positives.
+When the scanner runs, it might detect credentials that are false positives. Inline-suppression tools can be used to suppress false positives.
Some reasons to suppress false positives include: - Fake or mocked credentials in the test files. These credentials can't access resources. -- Placeholder strings. For example, placeholder strings may be used to initialize a variable, which is then populated using a secret store such as AKV.
+- Placeholder strings. For example, placeholder strings might be used to initialize a variable, which is then populated using a secret store such as AKV.
- External library or SDKs that 's directly consumed. For example, openssl. - Hard-coded credentials for an ephemeral test resource that only exists for the lifetime of the test being run. -- Self-signed certificates that are used locally and not used as a root. For example, they may be used when running localhost to allow HTTPS.
+- Self-signed certificates that are used locally and not used as a root. For example, they might be used when running localhost to allow HTTPS.
- Source-controlled documentation with non-functional credential for illustration purposes only - Invalid results. The output isn't a credential or a secret.
-You may want to suppress fake secrets in unit tests or mock paths, or inaccurate results. We don't recommend using suppression to suppress test credentials. Test credentials can still pose a security risk and should be securely stored.
+You might want to suppress fake secrets in unit tests or mock paths, or inaccurate results. We don't recommend using suppression to suppress test credentials. Test credentials can still pose a security risk and should be securely stored.
> [!NOTE] > Valid inline suppression syntax depends on the language, data format and CredScan version you are using.
To suppress the secret found in the next line, add the following code as a comme
``` ## Next steps
-+ Learn how to [configure pull request annotations](enable-pull-request-annotations.md) in Defender for Cloud to remediate secrets in code before they're shipped to production.
+
+- Learn how to [configure pull request annotations](enable-pull-request-annotations.md) in Defender for Cloud to remediate secrets in code before they're shipped to production.
defender-for-cloud Episode Three https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-three.md
Last updated 04/27/2023
# Microsoft Defender for Containers
-**Episode description**: In this episode of Defender for Cloud in the field, Maya Herskovic joins Yuri Diogenes to talk about Microsoft Defender for Containers. Maya explains what's new in Microsoft Defender for Containers, the new capabilities that are available, the new pricing model, and the multicloud coverage. Maya also demonstrates the overall experience of Microsoft Defender for Containers from the recommendations to the alerts that you may receive.
+**Episode description**: In this episode of Defender for Cloud in the field, Maya Herskovic joins Yuri Diogenes to talk about Microsoft Defender for Containers. Maya explains what's new in Microsoft Defender for Containers, the new capabilities that are available, the new pricing model, and the multicloud coverage. Maya also demonstrates the overall experience of Microsoft Defender for Containers from the recommendations to the alerts that you might receive.
> [!VIDEO https://aka.ms/docs/player?id=b8624912-ef9e-4fc6-8c0c-ea65e86d9128]
Last updated 04/27/2023
Learn more about [Microsoft Defender for Containers](defender-for-containers-introduction.md). -- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqa0ZoTml2Qm9kZ2pjRzNMUXFqVUwyNl80YVNtd3xBQ3Jtc0trVm9QM2Z0NlpOeC1KSUE2UEd1cVJ5aHQ0MTN6WjJEYmNlOG9rWC1KZ1ZqaTNmcHdOOHMtWXRLSGhUTVBhQlhhYzlUc2xmTHZtaUpkd1c4LUQzLWt1YmRTbkVQVE5EcTJIM0Foc042SGdQZU5acVRJbw&q=https%3A%2F%2Faka.ms%2FSubscribeMicrosoftSecurity)
+- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqa0ZoTml2Qm9kZ2pjRzNMUXFqVUwyNl80YVNtd3xBQ3Jtc0trVm9QM2Z0NlpOeC1KSUE2UEd1cVJ5aHQ0MTN6WjJEYmNlOG9rWC1KZ1ZqaTNmcHdOOHMtWXRLSGhUTVBhQlhhYzlUc2xmTHZtaUpkd1c4LUQzLWt1YmRTbkVQVE5EcTJIM0Foc042SGdQZU5acVRJbw&q=https%3A%2F%2Faka.ms%2FSubscribeMicrosoftSecurity)
-- Follow us on social media:
+- Follow us on social media:
[LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F) [Twitter](https://twitter.com/msftsecurity) -- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
-- For more about [Microsoft Security](https://msft.it/6002T9HQY)
+- For more about [Microsoft Security](https://msft.it/6002T9HQY)
## Next steps > [!div class="nextstepaction"]
-> [Security posture management improvements](episode-four.md)
+> [Security posture management improvements](episode-four.md)
defender-for-cloud File Integrity Monitoring Enable Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/file-integrity-monitoring-enable-log-analytics.md
In this article, you'll learn how to:
- [Compare baselines using File Integrity Monitoring](#compare-baselines-using-file-integrity-monitoring) > [!NOTE]
-> File Integrity Monitoring may create the following account on monitored SQL Servers: `NT Service\HealthService` \
+> File Integrity Monitoring might create the following account on monitored SQL Servers: `NT Service\HealthService` \
> If you delete the account, it will be automatically recreated. ## Availability
FIM is only available from Defender for Cloud's pages in the Azure portal. There
The following information is provided for each workspace:
- - Total number of changes that occurred in the last week (you may see a dash "-ΓÇ£ if FIM isn't enabled on the workspace)
+ - Total number of changes that occurred in the last week (you might see a dash "-" if FIM isn't enabled on the workspace)
- Total number of computers and VMs reporting to the workspace - Geographic location of the workspace - Azure subscription that the workspace is under
Use wildcards to simplify tracking across directories. The following rules apply
### Enable built-in recursive registry checks
-The FIM registry hive defaults provide a convenient way to monitor recursive changes within common security areas. For example, an adversary may configure a script to execute in LOCAL_SYSTEM context by configuring an execution at startup or shutdown. To monitor changes of this type, enable the built-in check.
+The FIM registry hive defaults provide a convenient way to monitor recursive changes within common security areas. For example, an adversary might configure a script to execute in LOCAL_SYSTEM context by configuring an execution at startup or shutdown. To monitor changes of this type, enable the built-in check.
![Registry.](./media/file-integrity-monitoring-enable-log-analytics/baselines-registry.png)
defender-for-cloud Governance Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/governance-rules.md
Last updated 01/23/2023
# Drive remediation with security governance
-Security teams are responsible for improving the security posture of their organizations but they may not have the resources or authority to actually implement security recommendations. [Assigning owners with due dates](#manually-assigning-owners-and-due-dates-for-recommendation-remediation) and [defining governance rules](#building-an-automated-process-for-improving-security-with-governance-rules) creates accountability and transparency so you can drive the process of improving the security posture in your organization.
+Security teams are responsible for improving the security posture of their organizations but they might not have the resources or authority to actually implement security recommendations. [Assigning owners with due dates](#manually-assigning-owners-and-due-dates-for-recommendation-remediation) and [defining governance rules](#building-an-automated-process-for-improving-security-with-governance-rules) creates accountability and transparency so you can drive the process of improving the security posture in your organization.
Stay on top of the progress on the recommendations in the security posture. Weekly email notifications to the owners and managers make sure that they take timely action on the recommendations that can improve your security posture and recommendations.
defender-for-cloud How To Manage Attack Path https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-manage-attack-path.md
You can use Attack path analysis to locate the biggest risks to your environmen
:::image type="content" source="media/how-to-manage-cloud-map/attack-path.png" alt-text="Screenshot that shows a sample of attack paths." lightbox="media/how-to-manage-cloud-map/attack-path.png" ::: > [!NOTE]
- > An attack path may have more than one path that is at risk. The path count will tell you how many paths need to be remediated. If the attack path has more than one path, you will need to select each path within that attack path to remediate all risks.
+ > An attack path might have more than one path that is at risk. The path count will tell you how many paths need to be remediated. If the attack path has more than one path, you will need to select each path within that attack path to remediate all risks.
1. Select a node.
defender-for-cloud How To Test Attack Path And Security Explorer With Vulnerable Container Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-test-attack-path-and-security-explorer-with-vulnerable-container-image.md
Last updated 07/17/2023
## Observing potential threats in the attack path experience
-Attack path analysis is a graph-based algorithm that scans the cloud security graph. The scans expose exploitable paths that attackers may use to breach your environment to reach your high-impact assets. Attack path analysis exposes attack paths and suggests recommendations as to how best remediate issues that will break the attack path and prevent successful breach.
+Attack path analysis is a graph-based algorithm that scans the cloud security graph. The scans expose exploitable paths that attackers might use to breach your environment to reach your high-impact assets. Attack path analysis exposes attack paths and suggests recommendations as to how best remediate issues that will break the attack path and prevent successful breach.
Explore and investigate [attack paths](how-to-manage-attack-path.md) by sorting them based on name, environment, path count, and risk categories. Explore cloud security graph Insights on the resource. Examples of Insight types are:
defender-for-cloud Integration Defender For Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/integration-defender-for-endpoint.md
When you integrate Defender for Endpoint with Defender for Cloud, you gain acces
A Defender for Endpoint tenant is automatically created, when you use Defender for Cloud to monitor your machines. -- **Location:** Data collected by Defender for Endpoint is stored in the geo-location of the tenant as identified during provisioning. Customer data - in pseudonymized form - may also be stored in the central storage and processing systems in the United States. After you've configured the location, you can't change it. If you have your own license for Microsoft Defender for Endpoint and need to move your data to another location, [contact Microsoft support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview) to reset the tenant.
+- **Location:** Data collected by Defender for Endpoint is stored in the geo-location of the tenant as identified during provisioning. Customer data - in pseudonymized form - might also be stored in the central storage and processing systems in the United States. After you've configured the location, you can't change it. If you have your own license for Microsoft Defender for Endpoint and need to move your data to another location, [contact Microsoft support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview) to reset the tenant.
- **Moving subscriptions:** If you've moved your Azure subscription between Azure tenants, some manual preparatory steps are required before Defender for Cloud will deploy Defender for Endpoint. For full details, [contact Microsoft support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview).
You'll deploy Defender for Endpoint to your Linux machines in one of these ways,
- Enable for multiple subscriptions with a PowerShell script > [!NOTE]
-> When you enable automatic deployment, Defender for Endpoint for Linux installation will abort on machines with pre-existing running services using [fanotify](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint-linux#system-requirements) and other services that can also cause Defender for Endpoint to malfunction or may be affected by Defender for Endpoint, such as security services.
+> When you enable automatic deployment, Defender for Endpoint for Linux installation will abort on machines with pre-existing running services using [fanotify](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint-linux#system-requirements) and other services that can also cause Defender for Endpoint to malfunction or might be affected by Defender for Endpoint, such as security services.
> After you validate potential compatibility issues, we recommend that you manually install Defender for Endpoint on these servers. ##### Existing users with Defender for Cloud's enhanced security features enabled and Microsoft Defender for Endpoint for Windows
defender-for-cloud Monitoring Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/monitoring-components.md
The following use cases explain how deployment of the Log Analytics agent works
- **System Center Operations Manager agent is installed on the machine** - Defender for Cloud will install the Log Analytics agent extension side by side to the existing Operations Manager. The existing Operations Manager agent will continue to report to the Operations Manager server normally. The Operations Manager agent and Log Analytics agent share common run-time libraries, which will be updated to the latest version during this process. - **A pre-existing VM extension is present**:
- - When the Monitoring Agent is installed as an extension, the extension configuration allows reporting to only a single workspace. Defender for Cloud doesn't override existing connections to user workspaces. Defender for Cloud will store security data from the VM in the workspace already connected, if the "Security" or "SecurityCenterFree" solution has been installed on it. Defender for Cloud may upgrade the extension version to the latest version in this process.
+ - When the Monitoring Agent is installed as an extension, the extension configuration allows reporting to only a single workspace. Defender for Cloud doesn't override existing connections to user workspaces. Defender for Cloud will store security data from the VM in the workspace already connected, if the "Security" or "SecurityCenterFree" solution has been installed on it. Defender for Cloud might upgrade the extension version to the latest version in this process.
- To see to which workspace the existing extension is sending data to, run the *TestCloudConnection.exe* tool to validate connectivity with Microsoft Defender for Cloud, as described in [Verify Log Analytics Agent connectivity](/services-hub/unified/health/assessments-troubleshooting#verify-log-analytics-agent-connectivity). Alternatively, you can open Log Analytics workspaces, select a workspace, select the VM, and look at the Log Analytics agent connection. - If you have an environment where the Log Analytics agent is installed on client workstations and reporting to an existing Log Analytics workspace, review the list of [operating systems supported by Microsoft Defender for Cloud](security-center-os-coverage.md) to make sure your operating system is supported.
defender-for-cloud Onboard Machines With Defender For Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/onboard-machines-with-defender-for-endpoint.md
Before you begin:
:::image type="content" source="media/onboard-machines-with-defender-for-endpoint/onboard-with-defender-for-endpoint.png" alt-text="Screenshot of Onboard non-Azure servers with Defender for Endpoint.":::
-You've now successfully enabled direct onboarding on your tenant. After you enable it for the first time, it may take up to 24 hours to see your non-Azure servers in your designated subscription.
+You've now successfully enabled direct onboarding on your tenant. After you enable it for the first time, it might take up to 24 hours to see your non-Azure servers in your designated subscription.
### Deploying Defender for Endpoint on your servers
Deploying the Defender for Endpoint agent on your on-premises Windows and Linux
- **Multi-cloud support**: You can directly onboard VMs in AWS and GCP using the Defender for Endpoint agent. However, if you plan to simultaneously connect your AWS or GCP account to Defender for Servers using multicloud connectors, it's currently still recommended to deploy Azure Arc. -- **Simultaneous onboarding limited support**: Defender for Cloud makes a best effort to correlate servers onboarded using multiple billing methods. However, in certain server deployment use cases, there may be limitations where Defender for Cloud is unable to correlate your machines. This may result in overcharges on certain devices if direct onboarding is also enabled on your tenant.
+- **Simultaneous onboarding limited support**: Defender for Cloud makes a best effort to correlate servers onboarded using multiple billing methods. However, in certain server deployment use cases, there might be limitations where Defender for Cloud is unable to correlate your machines. This might result in overcharges on certain devices if direct onboarding is also enabled on your tenant.
The following are deployment use cases currently with this limitation when used with direct onboarding of your tenant:
defender-for-cloud Partner Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/partner-integration.md
The status of a security solution can be:
* **Healthy** (green) - no health issues. * **Unhealthy** (red) - there's a health issue that requires immediate attention. * **Stopped reporting** (orange) - the solution has stopped reporting its health.
-* **Not reported** (gray) - the solution hasn't reported anything yet and no health data is available. A solution's status may be unreported if it was connected recently and is still deploying.
+* **Not reported** (gray) - the solution hasn't reported anything yet and no health data is available. A solution's status might be unreported if it was connected recently and is still deploying.
> [!NOTE] > If health status data is not available, Defender for Cloud shows the date and time of the last event received to indicate whether the solution is reporting or not. If no health data is available and no alerts were received within the last 14 days, Defender for Cloud indicates that the solution is unhealthy or not reporting.
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
Deploy the CloudFormation template by using Stack (or StackSet if you have a man
``` > [!NOTE]
- > When running the CloudFormation StackSets when onboarding an AWS management account, you may encounter the following error message:
+ > When running the CloudFormation StackSets when onboarding an AWS management account, you might encounter the following error message:
> `You must enable organizations access to operate a service managed stack set` > > This error indicates that you have not enabled [the trusted access for AWS Organizations](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-orgs-activate-trusted-access.html).
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
Microsoft Defender for Containers brings threat detection and advanced defenses
> [!NOTE] > > - If you choose to disable the available configuration options, no agents or components will be deployed to your clusters. [Learn more about feature availability](supported-machines-endpoint-solutions-clouds-containers.md).
-> - Defender for Containers when deployed on GCP, may incur external costs such as [logging costs](https://cloud.google.com/stackdriver/pricing), [pub/sub costs](https://cloud.google.com/pubsub/pricing) and [egress costs](https://cloud.google.com/vpc/network-pricing#:~:text=Platform%20SKUs%20apply.-%2cInternet%20egress%20rates%2c-Premium%20Tier%20pricing).
+> - Defender for Containers when deployed on GCP, might incur external costs such as [logging costs](https://cloud.google.com/stackdriver/pricing), [pub/sub costs](https://cloud.google.com/pubsub/pricing) and [egress costs](https://cloud.google.com/vpc/network-pricing#:~:text=Platform%20SKUs%20apply.-%2cInternet%20egress%20rates%2c-Premium%20Tier%20pricing).
- **Kubernetes audit logs to Defender for Cloud**: Enabled by default. This configuration is available at the GCP project level only. It provides agentless collection of the audit log data through [GCP Cloud Logging](https://cloud.google.com/logging/) to the Microsoft Defender for Cloud back end for further analysis. - **Azure Arc-enabled Kubernetes, the Defender agent, and Azure Policy for Kubernetes**: Enabled by default. You can install Azure Arc-enabled Kubernetes and its extensions on your GKE clusters in three ways:
defender-for-cloud Regulatory Compliance Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/regulatory-compliance-dashboard.md
Use the regulatory compliance dashboard to help focus your attention on the gaps
## Investigate regulatory compliance issues
-You can use the information in the regulatory compliance dashboard to investigate any issues that may be affecting your compliance posture.
+You can use the information in the regulatory compliance dashboard to investigate any issues that might be affecting your compliance posture.
**To investigate your compliance issues**:
You can use the information in the regulatory compliance dashboard to investigat
## Remediate an automated assessment
-The regulatory compliance has both automated and manual assessments that may need to be remediated. Using the information in the regulatory compliance dashboard, improve your compliance posture by resolving recommendations directly within the dashboard.
+The regulatory compliance has both automated and manual assessments that might need to be remediated. Using the information in the regulatory compliance dashboard, improve your compliance posture by resolving recommendations directly within the dashboard.
**To remediate an automated assessment**:
The regulatory compliance has both automated and manual assessments that may nee
## Remediate a manual assessment
-The regulatory compliance has automated and manual assessments that may need to be remediated. Manual assessments are assessments that require input from the customer to remediate them.
+The regulatory compliance has automated and manual assessments that might need to be remediated. Manual assessments are assessments that require input from the customer to remediate them.
**To remediate a manual assessment**:
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes description: This page is updated frequently with the latest updates in Defender for Cloud. Previously updated : 10/25/2023 Last updated : 10/30/2023 # What's new in Microsoft Defender for Cloud?
If you're looking for items older than six months, you can find them in the [Arc
|Date |Update | |-|-|
+| October 30 | [Changing adaptive application control's security alert's severity](#changing-adaptive-application-controls-security-alerts-severity) |
| October 25 | [Offline Azure API Management revisions removed from Defender for APIs](#offline-azure-api-management-revisions-removed-from-defender-for-apis) | | October 19 |[DevOps security posture management recommendations available in public preview](#devops-security-posture-management-recommendations-available-in-public-preview) | October 18 | [Releasing CIS Azure Foundations Benchmark v2.0.0 in Regulatory Compliance dashboard](#releasing-cis-azure-foundations-benchmark-v200-in-regulatory-compliance-dashboard) |
+## Changing adaptive application controls security alert's severity
+
+Announcement date: October 30, 2023
+
+As part of the security alert quality improvement process of Defender for Servers, and as part of the [adaptive application controls](adaptive-application-controls.md) feature, the severity of the following security alert is changing to "Informational":
+
+| Alert [Alert Type] | Alert Description |
+|--|--|
+| Adaptive application control policy violation was audited.[VM_AdaptiveApplicationControlWindowsViolationAudited, VM_AdaptiveApplicationControlWindowsViolationAudited] | The below users ran applications that are violating the application control policy of your organization on this machine. It can possibly expose the machine to malware or application vulnerabilities.|
+
+To keep viewing this alert in the "Security alerts" blade in the Microsoft Defender for Cloud portal, change the default view filter **Severity** to include **informational** alerts in the grid.
+
+ :::image type="content" source="media/release-notes/add-informational-severity.png" alt-text="Screenshot that shows you where to add the informational severity for alerts." lightbox="media/release-notes/add-informational-severity.png":::
+ ## Offline Azure API Management revisions removed from Defender for APIs October 25, 2023
defender-for-cloud Secret Scanning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/secret-scanning.md
For requirements for agentless scanning, see [Learn about agentless scanning](co
## Remediate secrets with attack path
-Attack path analysis is a graph-based algorithm that scans your [cloud security graph](concept-attack-path.md#what-is-cloud-security-graph). These scans expose exploitable paths that attackers may use to breach your environment to reach your high-impact assets. Attack path analysis exposes attack paths and suggests recommendations as to how best remediate issues that break the attack path and prevent successful breach.
+Attack path analysis is a graph-based algorithm that scans your [cloud security graph](concept-attack-path.md#what-is-cloud-security-graph). These scans expose exploitable paths that attackers might use to breach your environment to reach your high-impact assets. Attack path analysis exposes attack paths and suggests recommendations as to how best remediate issues that break the attack path and prevent successful breach.
-Attack path analysis takes into account the contextual information of your environment to identify issues that may compromise it. This analysis helps prioritize the riskiest issues for faster remediation.
+Attack path analysis takes into account the contextual information of your environment to identify issues that might compromise it. This analysis helps prioritize the riskiest issues for faster remediation.
The attack path page shows an overview of your attack paths, affected resources and a list of active attack paths.
defender-for-cloud Secure Score Access And Track https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/secure-score-access-and-track.md
Defender for Cloud's workbooks page includes a ready-made report for visually tr
If you're a Power BI user with a Pro account, you can use the **Secure Score Over Time** Power BI dashboard to track your secure score over time and investigate any changes. > [!TIP]
-> You can find this dashboard, as well as other tools for working programmatically with secure score, in the dedicated area of the Microsoft Defender for Cloud community on GitHub: https://github.com/Azure/Azure-Security-Center/tree/master/Secure%20Score
+> You can find this dashboard, as well as other tools for working programmatically with secure score, in the dedicated area of the Microsoft Defender for Cloud community on GitHub: <https://github.com/Azure/Azure-Security-Center/tree/master/Secure%20Score>
The dashboard contains the following two reports to help you analyze your security status: - **Resources Summary** - provides summarized data regarding your resources' health. -- **Secure Score Summary** - provides summarized data regarding your score progress. Use the "Secure score over time per subscription" chart to view changes in the score. If you notice a dramatic change in your score, check the "detected changes that may affect your secure score" table for possible changes that could have caused the change. This table presents deleted resources, newly deployed resources, or resources that their security status changed for one of the recommendations.
+- **Secure Score Summary** - provides summarized data regarding your score progress. Use the "Secure score over time per subscription" chart to view changes in the score. If you notice a dramatic change in your score, check the "detected changes that might affect your secure score" table for possible changes that could have caused the change. This table presents deleted resources, newly deployed resources, or resources whose security status changed for one of the recommendations.
:::image type="content" source="./media/secure-score-security-controls/power-bi-secure-score-dashboard.png" alt-text="The optional Secure Score Over Time Power BI dashboard for tracking your secure score over time and investigating changes.":::
defender-for-cloud Secure Score Security Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/secure-score-security-controls.md
The contribution of each security control towards the overall secure score is sh
To get all the possible points for a security control, all of your resources must comply with all of the security recommendations within the security control. For example, Defender for Cloud has multiple recommendations regarding how to secure your management ports. You'll need to remediate them all to make a difference to your secure score. > [!NOTE]
-> Each control is calculated every eight hours per subscription or cloud connector. Recommendations within a control are updated more frequently than the control, and so there may be discrepancies between the resources count on the recommendations versus the one found on the control.
+> Each control is calculated every eight hours per subscription or cloud connector. Recommendations within a control are updated more frequently than the control, and so there might be discrepancies between the resources count on the recommendations versus the one found on the control.
### Example scores for a control
defender-for-cloud Sql Azure Vulnerability Assessment Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/sql-azure-vulnerability-assessment-manage.md
Express configuration doesn't store scan results if they're identical to previou
If you have an organizational need to ignore a finding rather than remediate it, you can disable the finding. Disabled findings don't impact your secure score or generate unwanted noise. You can see the disabled finding in the "Not applicable" section of the scan results.
-When a finding matches the criteria you've defined in your disable rules, it won't appear in the list of findings. Typical scenarios may include:
+When a finding matches the criteria you've defined in your disable rules, it won't appear in the list of findings. Typical scenarios might include:
- Disable findings with medium or lower severity - Disable findings that are non-patchable
To change an Azure SQL database from the express vulnerability assessment config
-ScanResultsContainerName "vulnerability-assessment" ```
- You may have to tweak `Update-AzSqlServerVulnerabilityAssessmentSetting` according to [Store Vulnerability Assessment scan results in a storage account accessible behind firewalls and VNets](/azure/azure-sql/database/sql-database-vulnerability-assessment-storage).
+ You might have to tweak `Update-AzSqlServerVulnerabilityAssessmentSetting` according to [Store Vulnerability Assessment scan results in a storage account accessible behind firewalls and VNets](/azure/azure-sql/database/sql-database-vulnerability-assessment-storage).
#### Errors
Select **Scan History** in the vulnerability assessment pane to view a history o
If you have an organizational need to ignore a finding, rather than remediate it, you can optionally disable it. Disabled findings don't impact your secure score or generate unwanted noise. When a finding matches the criteria you've defined in your disable rules, it won't appear in the list of findings.
-Typical scenarios may include:
+Typical scenarios might include:
- Disable findings with medium or lower severity - Disable findings that are non-patchable
defender-for-cloud Tenant Wide Permissions Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tenant-wide-permissions-management.md
For more information of the Microsoft Entra elevation process, see [Elevate acce
## Request tenant-wide permissions when yours are insufficient
-When you navigate to Defender for Cloud, you may see a banner that alerts you to the fact that your view is limited. If you see this banner, select it to send a request to the global administrator for your organization. In the request, you can include the role you'd like to be assigned and the global administrator will make a decision about which role to grant.
+When you navigate to Defender for Cloud, you might see a banner that alerts you to the fact that your view is limited. If you see this banner, select it to send a request to the global administrator for your organization. In the request, you can include the role you'd like to be assigned and the global administrator will make a decision about which role to grant.
It's the global administrator's decision whether to accept or reject these requests.
defender-for-cloud Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/troubleshooting-guide.md
If you're having trouble onboarding the Log Analytics agent, make sure to read [
## Antimalware protection isn't working properly
-The guest agent is the parent process of everything the [Microsoft Antimalware](../security/fundamentals/antimalware.md) extension does. When the guest agent process fails, the Microsoft Antimalware protection that runs as a child process of the guest agent may also fail.
+The guest agent is the parent process of everything the [Microsoft Antimalware](../security/fundamentals/antimalware.md) extension does. When the guest agent process fails, the Microsoft Antimalware protection that runs as a child process of the guest agent might also fail.
Here are some other troubleshooting tips: - If the target VM was created from a custom image, make sure that the creator of the VM installed guest agent. - If the target is a Linux VM, then installing the Windows version of the antimalware extension will fail. The Linux guest agent has specific OS and package requirements. - If the VM was created with an old version of guest agent, the old agents might not have the ability to auto-update to the newer version. Always use the latest version of guest agent when you create your own images.-- Some third-party administration software may disable the guest agent, or block access to certain file locations. If third-party administration software is installed on your VM, make sure that the antimalware agent is on the exclusion list.
+- Some third-party administration software might disable the guest agent, or block access to certain file locations. If third-party administration software is installed on your VM, make sure that the antimalware agent is on the exclusion list.
- Make sure that firewall settings and Network Security Group (NSG) aren't blocking network traffic to and from guest agent. - Make sure that there are no Access Control Lists (ACLs) that prevent disk access. - The guest agent requires sufficient disk space in order to function properly.
defender-for-cloud Tutorial Security Incident https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-security-incident.md
Once you have selected an alert, you will then be able to investigate it.
1. For more detailed information that can help you investigate the suspicious activity, examine the **Alert details** tab.
-1. When you've reviewed the information on this page, you may have enough to proceed with a response. If you need further details:
+1. When you've reviewed the information on this page, you might have enough to proceed with a response. If you need further details:
- Contact the resource owner to verify whether the detected activity is a false positive. - Investigate the raw logs generated by the attacked resource
defender-for-cloud Tutorial Security Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-security-policy.md
To enable a disabled recommendation and ensure it's assessed for your resources:
## Manage a security recommendation's settings
-It may be necessary to configure additional parameters for some recommendations.
+It might be necessary to configure additional parameters for some recommendations.
As an example, diagnostic logging recommendations have a default retention period of 1 day. You can change the default value if your organizational security requirements require logs to be kept for more than that, for example: 30 days. The **additional parameters** column indicates whether a recommendation has associated additional parameters:
defender-for-cloud Understand Malware Scan Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/understand-malware-scan-results.md
When a blob is successfully scanned, the scan result indicates either:
## Error states
-Malware scanning may fail to scan a blob. When this happens, the scan result indicates what the error was.
+Malware scanning might fail to scan a blob. When this happens, the scan result indicates what the error was.
|Error Message |Cause of Error | Guidance | Does this failed scanning attempt incur a charge? | ||||| | SAM259201: "Scan failed - internal service error." | An unexpected internal system error occurred during the scan. | This is a transient error and subsequent upload of blobs that failed to be scanned with this error should succeed. | No | | SAM259203: "Scan failed - couldn't access the requested blob." | The blob couldn't be accessed due to permission restrictions. This can happen if someone has accidentally removed the malware scanner's permission to read blobs. Permissions can also be removed by an Azure Policy. | Look at the storage account's Activity Log to determine who or what removed the scanner's permissions. Re-enable Malware scanning. | No |
-| SAM259204: "Scan failed - the requested blob wasn't found." | The blob wasn't found. This may be due to deletion, relocation, or renaming after uploading. | N/A | No |
+| SAM259204: "Scan failed - the requested blob wasn't found." | The blob wasn't found. This might be due to deletion, relocation, or renaming after uploading. | N/A | No |
| SAM259205: "Scan failed due to ETag mismatch - blob was possibly overwritten." | During the process of scanning a blob, Malware Scanning ensures that the ETag value of the blob remains consistent with what it was when first uploaded. If the ETag doesn't match the expected value, it could indicate that the blob has been altered by another process or user after the upload. | N/A | No | | SAM259206: "Scan aborted - the requested blob exceeded the maximum allowed size of 2 GB." | The blob size exceeded the existing size limit, preventing the scan. For more information, see the [malware scanning limitations](defender-for-storage-malware-scan.md#limitations) documentation. | N/A | No |
-| SAM259207: "Scan timed out - the requested scan exceeded the `ScanTimeout` minutes time limitation." | The scan timed out before completion. This error may also occur if a preceding step, such as downloading the blob for scanning, takes too long. | This is a transient error and subsequent upload of blobs that failed to be scanned with this error should succeed. | No |
+| SAM259207: "Scan timed out - the requested scan exceeded the `ScanTimeout` minutes time limitation." | The scan timed out before completion. This error might also occur if a preceding step, such as downloading the blob for scanning, takes too long. | This is a transient error and subsequent upload of blobs that failed to be scanned with this error should succeed. | No |
| SAM259208: "Scan failed - archive access tier isn't supported." | Blobs in Azure's archive storage tier can't be scanned. For more information, see the [malware scanning limitations](defender-for-storage-malware-scan.md#limitations) documentation. | N/A | No | | SAM259209: "Scan failed - blobs encrypted with customer provided keys aren't supported." | Client-side encrypted blobs can't be decrypted for scanning. For more information, see the [malware scanning limitations](defender-for-storage-malware-scan.md#limitations) documentation. | N/A | No | | SAM259210: "Scan aborted - the requested blob is protected by password." | The blob is password-protected and can't be scanned. For more information, see the [malware scanning limitations](defender-for-storage-malware-scan.md#limitations) documentation. | N/A | Yes |
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
If you're looking for the latest release notes, you can find them in the [What's
| Planned change | Announcement date | Estimated date for change | |--|--|--|
+| [General availability of Containers Vulnerability Assessment powered by Microsoft Defender Vulnerability Management (MDVM) in Defender for Containers and Defender for Container Registries](#general-availability-of-containers-vulnerability-assessment-powered-by-microsoft-defender-vulnerability-management-mdvm-in-defender-for-containers-and-defender-for-container-registries) | October 30, 2023 | November 15, 2023 |
| [Changes to how Microsoft Defender for Cloud's costs are presented in Microsoft Cost Management](#changes-to-how-microsoft-defender-for-clouds-costs-are-presented-in-microsoft-cost-management) | October 25, 2023 | November 2023 | | [Four alerts are set to be deprecated](#four-alerts-are-set-to-be-deprecated) | October 23, 2023 | November 23, 2023 | | [Replacing the "Key Vaults should have purge protection enabled" recommendation with combined recommendation "Key Vaults should have deletion protection enabled"](#replacing-the-key-vaults-should-have-purge-protection-enabled-recommendation-with-combined-recommendation-key-vaults-should-have-deletion-protection-enabled) | | June 2023|
If you're looking for the latest release notes, you can find them in the [What's
| [Deprecating two security incidents](#deprecating-two-security-incidents) | | November 2023 | | [Defender for Cloud plan and strategy for the Log Analytics agent deprecation](#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation) | | August 2024 |
+## General availability of Containers Vulnerability Assessment powered by Microsoft Defender Vulnerability Management (MDVM) in Defender for Containers and Defender for Container Registries
+
+**Announcement date: October 30, 2023**
+
+**Estimated date for change: November 15, 2023**
+
+Vulnerability assessment (VA) for Linux container images in Azure container registries powered by Microsoft Defender Vulnerability Management (MDVM) will soon be released for General Availability (GA) in Defender for Containers and Defender for Container Registries.
+
+As part of this change, the following recommendations will also be released for GA and renamed:
+
+|Current recommendation name|New recommendation name|Description|Assessment key|
+|--|--|--|--|
+|Container registry images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)|Azure registry container images should have vulnerabilities resolved (powered by Microsoft Defender Vulnerability Management)|Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. Resolving vulnerabilities can greatly improve your security posture, ensuring images are safe to use prior to deployment. |c0b7cfc6-3172-465a-b378-53c7ff2cc0d5|
+|Running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)|Azure running container images should have vulnerabilities resolved (powered by Microsoft Defender Vulnerability Management)|Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. This recommendation provides visibility to vulnerable images currently running in your Kubernetes clusters. Remediating vulnerabilities in container images that are currently running is key to improving your security posture, significantly reducing the attack surface for your containerized workloads.|c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5|
+
+Once the recommendations are released for GA, they will be included in the secure score calculation, and will also incur charges as per [plan pricing](https://azure.microsoft.com/pricing/details/defender-for-cloud/?v=17.23h#pricing).
+
+> [!NOTE]
+> Images scanned by both our container VA offering powered by Qualys and our container VA offering powered by MDVM will only be billed once.
+ ## Changes to how Microsoft Defender for Cloud's costs are presented in Microsoft Cost Management
-**Annoucement date: October 26, 2023**
+**Announcement date: October 26, 2023**
**Estimated date for change: November 2023**
Following quality improvement process, the following security incidents are set
## Next steps For all recent changes to Defender for Cloud, see [What's new in Microsoft Defender for Cloud?](release-notes.md).+
defender-for-cloud Update Regulatory Compliance Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/update-regulatory-compliance-packages.md
If a subscription, account, or project has *any* Defender plan enabled, more sta
| PCI-DSS v3.2.1 **(deprecated)** | CIS AWS Foundations v1.2.0 | CIS GCP Foundations v1.1.0 | | PCI DSS v4 | CIS AWS Foundations v1.5.0 | CIS GCP Foundations v1.2.0 | | SOC TSP | PCI DSS v3.2.1 | PCI DSS v3.2.1 |
-| SOC 2 Type 2 | | NIST 800-53 |
+| SOC 2 Type 2 | AWS Foundational Security Best Practices | NIST 800-53 |
| ISO 27001:2013 | | ISO 27001 | | CIS Azure Foundations v1.1.0 ||| | CIS Azure Foundations v1.3.0 |||
To add standards to your dashboard:
- The user must have owner or policy contributor permissions > [!NOTE]
-> It may take a few hours for a newly added standard to appear in the compliance dashboard.
+> It might take a few hours for a newly added standard to appear in the compliance dashboard.
### Add a standard to your Azure subscriptions
To assign regulatory compliance standards on GCP projects:
1. Select the three dots alongside an unassigned standard and select **Assign standard**. :::image type="content" source="media/update-regulatory-compliance-packages/assign-standard-gcp-from-list.png" alt-text="Screenshot that shows where to select a GCP standard to assign." lightbox="media/update-regulatory-compliance-packages/assign-standard-gcp-from-list.png":::
-
+ 1. At the prompt, select **Yes**. The standard is assigned to your GCP project. :::image type="content" source="media/update-regulatory-compliance-packages/assign-standard-gcp.png" alt-text="Screenshot of the prompt to assign a regulatory compliance standard to the GCP project." lightbox="media/update-regulatory-compliance-packages/assign-standard-gcp.png":::
defender-for-cloud View And Remediate Vulnerabilities For Images Running On Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/view-and-remediate-vulnerabilities-for-images-running-on-aks.md
Use these steps to remediate each of the affected images found either in a speci
1. Follow the steps in the remediation section of the recommendation pane. 1. When you've completed the steps required to remediate the security issue, replace each affected image in your cluster, or replace each affected image for a specific vulnerability: 1. Build a new image (including updates for each of the packages) that resolves the vulnerability according to the remediation details.
- 1. Push the updated image to trigger a scan and delete the old image. It may take up to 24 hours for the previous image to be removed from the results, and for the new image to be included in the results.
+ 1. Push the updated image to trigger a scan and delete the old image. It might take up to 24 hours for the previous image to be removed from the results, and for the new image to be included in the results.
1. Use the new image across all vulnerable workloads. 1. Check the recommendations page for the recommendation [Running container images should have vulnerability findings resolved](https://portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/KubernetesRuntimeVisibilityRecommendationDetailsBlade/assessmentKey/41503391-efa5-47ee-9282-4eff6131462c). 1. If the recommendation still appears and the image you've handled still appears in the list of vulnerable images, check the remediation steps again.
defender-for-cloud View And Remediate Vulnerability Assessment Findings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/view-and-remediate-vulnerability-assessment-findings.md
Use these steps to remediate each of the affected images found either in a speci
1. Follow the steps in the remediation section of the recommendation pane. 1. When you've completed the steps required to remediate the security issue, replace each affected image in your registry or replace each affected image for a specific vulnerability: 1. Build a new image (including updates for each of the packages) that resolves the vulnerability according to the remediation details.
- 1. Push the updated image to trigger a scan and delete the old image. It may take up to 24 hours for the previous image to be removed from the results, and for the new image to be included in the results.
+ 1. Push the updated image to trigger a scan and delete the old image. It might take up to 24 hours for the previous image to be removed from the results, and for the new image to be included in the results.
1. Check the recommendations page for the recommendation [Container registry images should have vulnerability findings resolved (powered by MDVM)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c0b7cfc6-3172-465a-b378-53c7ff2cc0d5). If the recommendation still appears and the image you've handled still appears in the list of vulnerable images, check the remediation steps again.
defender-for-cloud Windows Admin Center Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/windows-admin-center-integration.md
Through the combination of these two tools, Defender for Cloud becomes your sing
- The Log Analytics agent is installed on the server and configured to report to the selected workspace. If the server already reports to another workspace, it's configured to report to the newly selected workspace as well. > [!NOTE]
- > It may take some time after onboarding for recommendations to appear. In fact, depending on on your server activity you may not receive *any* alerts. To generate test alerts to test your alerts are working correctly, follow the instructions in [the alert validation procedure](alert-validation.md).
+ > It might take some time after onboarding for recommendations to appear. In fact, depending on your server activity, you might not receive *any* alerts. To generate test alerts and verify that your alerts are working correctly, follow the instructions in [the alert validation procedure](alert-validation.md).
## View security recommendations and alerts in Windows Admin Center
defender-for-cloud Workflow Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/workflow-automation.md
This article describes the workflow automation feature of Microsoft Defender for
|-|:-| |Release state:|General availability (GA)| |Pricing:|Free|
-|Required roles and permissions:|**Security admin role** or **Owner** on the resource group<br>Must also have write permissions for the target resource<br><br>To work with Azure Logic Apps workflows, you must also have the following Logic Apps roles/permissions:<br> - [Logic App Operator](../role-based-access-control/built-in-roles.md#logic-app-operator) permissions are required or Logic App read/trigger access (this role can't create or edit logic apps; only *run* existing ones)<br> - [Logic App Contributor](../role-based-access-control/built-in-roles.md#logic-app-contributor) permissions are required for logic app creation and modification<br>If you want to use Logic Apps connectors, you may need other credentials to sign in to their respective services (for example, your Outlook/Teams/Slack instances)|
+|Required roles and permissions:|**Security admin role** or **Owner** on the resource group<br>Must also have write permissions for the target resource<br><br>To work with Azure Logic Apps workflows, you must also have the following Logic Apps roles/permissions:<br> - [Logic App Operator](../role-based-access-control/built-in-roles.md#logic-app-operator) permissions are required or Logic App read/trigger access (this role can't create or edit logic apps; only *run* existing ones)<br> - [Logic App Contributor](../role-based-access-control/built-in-roles.md#logic-app-contributor) permissions are required for logic app creation and modification<br>If you want to use Logic Apps connectors, you might need other credentials to sign in to their respective services (for example, your Outlook/Teams/Slack instances)|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Microsoft Azure operated by 21Vianet)| ## Create a logic app and define when it should automatically run
defender-for-cloud Working With Log Analytics Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/working-with-log-analytics-agent.md
When you select a data collection tier in Microsoft Defender for Cloud, the secu
The enhanced security protections of Defender for Cloud are required for storing Windows security event data. Learn more about [the enhanced protection plans](defender-for-cloud-introduction.md).
-You may be charged for storing data in Log Analytics. For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
+You might be charged for storing data in Log Analytics. For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
### Information for Microsoft Sentinel users
event-hubs Monitor Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/monitor-event-hubs.md
If you use **Log Analytics** to store the diagnostic logging information, the in
The metrics and logs you can collect are discussed in the following sections. ## Analyze metrics
-You can analyze metrics for Azure Event Hubs, along with metrics from other Azure services, by selecting **Metrics** from the **Azure Monitor** section on the home page for your Event Hubs namespace. See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool. For a list of the platform metrics collected, see [Monitoring Azure Event Hubs data reference metrics](monitor-event-hubs-reference.md#metrics).
+You can analyze metrics for Azure Event Hubs, along with metrics from other Azure services, by selecting **Metrics** from the **Azure Monitor** section on the home page for your Event Hubs namespace. See [Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md) for details on using this tool. For a list of the platform metrics collected, see [Monitoring Azure Event Hubs data reference metrics](monitor-event-hubs-reference.md#metrics).
![Metrics Explorer with Event Hubs namespace selected](./media/monitor-event-hubs/metrics.png)
expressroute Monitor Expressroute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/monitor-expressroute.md
The metrics and logs you can collect are discussed in the following sections.
## Analyzing metrics
-You can analyze metrics for *Azure ExpressRoute* with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool.
+You can analyze metrics for *Azure ExpressRoute* with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md) for details on using this tool.
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/metrics-page.png" alt-text="Screenshot of the metrics dashboard for ExpressRoute.":::
external-attack-surface-management Easm Copilot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/easm-copilot.md
+
+# required metadata
+
+ Title: Security Copilot (preview) and Defender EASM
+description: You can use Security Copilot to get information about your EASM data.
++ Last updated : 10/25/2023++
+ms.localizationpriority: high
+++
+# Microsoft Security Copilot (preview) and Defender EASM
+
+> [!IMPORTANT]
+> The information in this article applies to the Microsoft Security Copilot Early Access Program, which is an invite-only paid preview program. Some information in this article relates to prereleased product, which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided in this article.
++
+Security Copilot is a cloud-based AI platform that provides a natural language copilot experience. It can help support security professionals in different scenarios, like incident response, threat hunting, and intelligence gathering. For more information about what it can do, go to [What is Microsoft Security Copilot?](/security-copilot/microsoft-security-copilot).
+
+**Security Copilot integrates with Defender EASM**.
+
+Security Copilot can surface insights from Defender EASM about an organization's attack surface. You can use the system features built into Security Copilot, and use prompts to get more information. This information can help you understand your security posture and mitigate vulnerabilities.
+
+This article introduces you to Security Copilot and includes sample prompts that can help Defender EASM users.
+++
+## Know before you begin
+
+- Ensure that you reference the company name in your first prompt. Unless otherwise specified, all future prompts will provide data about the initially specified company.
+
+- Be clear and specific with your prompts. You might get better results if you include specific asset names or metadata values (e.g. CVE IDs) in your prompts.
+
+ It might also help to add **Defender EASM** to your prompt, like:
+
+ - **According to Defender EASM, what are my expired domains?**
+ - **Tell me about Defender EASM high priority attack surface insights.**
+
+- Experiment with different prompts and variations to see what works best for your use case. Chat AI models vary, so iterate and refine your prompts based on the results you receive.
+
+- Security Copilot saves your prompt sessions. To see the previous sessions, in Security Copilot, go to the menu > **My investigations**:
+
+ ![Screenshot that shows the Microsoft Security Copilot menu and My investigations with previous sessions.](media/copilot-1.png)
++
+ For a walkthrough on Security Copilot, including the pin and share feature, go to [Navigating Microsoft Security Copilot](/security-copilot/navigating-security-copilot).
+
+For more information on writing Security Copilot prompts, go to [Microsoft Security Copilot prompting tips](/security-copilot/prompting-tips).
+++
+## Open Security Copilot
+
+1. Go to [Microsoft Security Copilot](https://go.microsoft.com/fwlink/?linkid=2247989) and sign in with your credentials.
+2. By default, Defender EASM should be enabled. To confirm, select **plugins** (bottom left corner):
+
+ ![Screenshot that shows the plugins that are available, enabled, and disabled in Microsoft Security Copilot.](media/copilot-2.png)
++
+ In **My plugins**, confirm Defender EASM is on. Close **Plugins**.
+
+ > [!NOTE]
+ > Some roles can enable or disable plugins, like Defender EASM. For more information, go to [Manage plugins in Microsoft Security Copilot](/security-copilot/manage-plugins).
+
+3. Enter your prompt.
+++
+## Built-in system features
+
+In Security Copilot, there are built-in system features. These features can get data from the different plugins that are enabled.
+
+To view the list of built-in system capabilities for Defender EASM, use the following steps:
+
+1. In the prompt, enter **/**.
+2. Select **See all system capabilities**.
+3. In the Defender EASM section, you can:
+
+ - Get attack surface summary.
+ - Get attack surface insights.
+ - Get assets affected by CVEs by priority or CVE ID.
+ - Get assets by CVSS score.
+ - Get expired domains.
+ - Get expired SSL certificates.
+ - Get SHA1 certificates.
+++
+## Sample prompts for Defender EASM
+
+There are many prompts you can use to get information about your Defender EASM data. This section lists some ideas and examples.
+
+### General information about your attack surface
+
+Get **general information** about your Defender EASM data, like an attack surface summary or insights about your inventory.
+
+**Sample prompts**:
+
+- Get the external attack surface for my organization.
+- What are the high priority attack surface insights for my organization?
+++
+### CVE vulnerability data
+
+Get details on **CVEs that are applicable to your inventory**.
+
+**Sample prompts**:
+
+- Is my external attack surface impacted by CVE-2023-21709?
+- Get assets affected by high priority CVSS's in my attack surface.
+- How many assets have critical CVSS's for my organization?
+++
+### Domain and SSL certificate posture
+
+Get information about **domain and SSL certificate posture**, like expired domains and usage of SHA1 certificates.
+
+**Sample prompts**:
+
+- How many domains are expired in my organization's attack surface?
+- How many SSL certificates are expired for my organization?
+- How many assets are using SSL SHA1 for my organization?
+- Get list of expired SSL certificates.
+++
+## Provide feedback
+
+Your feedback on the Defender EASM integration with Security Copilot helps with development. To provide feedback, in Security Copilot, use the feedback buttons at the bottom of each completed prompt.
++
+Your options:
+
+- **Confirm**: The results match expectations.
+- **Off-target**: The results don't match expectations.
+- **Report**: The results are harmful in some way.
+
+Whenever possible, and when the result is **Off-target**, write a few words explaining what can be done to improve the outcome. If you entered Defender EASM-specific prompts and the results aren't EASM related, then include that information.
+++
+## Data processing and privacy
+
+When you interact with the Security Copilot to get Defender EASM data, Security Copilot pulls that data from Defender EASM. The prompts, the data that's retrieved, and the output shown in the prompt results is processed and stored within the Security Copilot service.
+
+For more information about data privacy in Security Copilot, go to [Privacy and data security in Microsoft Security Copilot](/security-copilot/privacy-data-security).
+++
+## Related articles
+
+- [What is Microsoft Security Copilot?](/security-copilot/microsoft-security-copilot)
+- [Privacy and data security in Microsoft Security Copilot](/security-copilot/privacy-data-security)
hdinsight-aks Cosmos Db For Apache Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/cosmos-db-for-apache-cassandra.md
Title: Using Azure Cosmos DB for Apache Cassandra® with HDInsight on AKS for Ap
description: Learn how to Sink Apache Kafka® message into Azure Cosmos DB for Apache Cassandra®, with Apache Flink® running on HDInsight on AKS. Previously updated : 08/29/2023 Last updated : 10/30/2023 # Sink Apache Kafka® messages into Azure Cosmos DB for Apache Cassandra, with Apache Flink® on HDInsight on AKS
drwxr-xr-x 2 root root 4096 May 15 02:43 util/
**CassandraUtils.java** > [!NOTE]
-> Change ssl_keystore_file_path depends on the java cert location. On HDInsight on AKS Apache Flink, the path is `/usr/lib/jvm/msopenjdk-11-jre/lib/security`
+> Change `ssl_keystore_file_path` depending on the Java cert location. On an Apache Flink cluster on HDInsight on AKS, the path is `/usr/lib/jvm/msopenjdk-11-jre/lib/security`.
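As a rough, hedged sketch (separate from the article's own CassandraUtils.java, shown next), the snippet below shows one way a trust store under that directory can be loaded in Java. The `cacerts` file name and the default `changeit` password are assumptions to adjust for your environment.

```java
import java.io.FileInputStream;
import java.io.InputStream;
import java.security.KeyStore;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class TrustStoreSketch {
    public static SSLContext buildSslContext() throws Exception {
        // Assumed trust store location on the cluster's JVM; adjust to your Java cert path.
        String sslKeystoreFilePath = "/usr/lib/jvm/msopenjdk-11-jre/lib/security/cacerts";

        KeyStore trustStore = KeyStore.getInstance("JKS");
        try (InputStream is = new FileInputStream(sslKeystoreFilePath)) {
            // "changeit" is the JDK's default trust store password (assumption).
            trustStore.load(is, "changeit".toCharArray());
        }

        TrustManagerFactory tmf =
                TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);

        SSLContext sslContext = SSLContext.getInstance("TLS");
        sslContext.init(null, tmf.getTrustManagers(), null);
        return sslContext;
    }
}
```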
``` java package com.azure.cosmosdb.cassandra.util;
hdinsight-aks Datastream Api Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/datastream-api-mongodb.md
Title: Use DataStream API for MongoDB as a source and sink with Apache Flink®
description: Learn how to use Apache Flink® DataStream API on HDInsight on AKS for MongoDB as a source and sink Previously updated : 10/27/2023 Last updated : 10/30/2023 # Use Apache Flink® DataStream API on HDInsight on AKS for MongoDB as a source and sink
Last updated 10/27/2023
Apache Flink provides a MongoDB connector for reading and writing data from and to MongoDB collections with at-least-once guarantees.
-This example demonstrates on how to use HDInsight on AKS Apache Flink 1.16.0 along with your existing MongoDB as Sink and Source with Flink DataStream API MongoDB connector.
+This example demonstrates how to use Apache Flink 1.16.0 on HDInsight on AKS along with your existing MongoDB as the sink and source with the Flink DataStream API MongoDB connector.
MongoDB is a non-relational document database that provides support for JSON-like storage that helps store complex structures easily.
hdinsight-aks Flink Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/flink-overview.md
Apache Flink clusters in HDInsight on AKS are a fully managed service. Benefits
| Checkpoints | Checkpointing is enabled in HDInsight on AKS clusters by default. Default settings on HDInsight on AKS maintain the last five checkpoints in persistent storage. If your job fails, it can be restarted from the latest checkpoint.| | Incremental Checkpoints | RocksDB supports incremental checkpoints. We encourage the use of incremental checkpoints for large state; you need to enable this feature manually. Setting `state.backend.incremental: true` in your `flink-conf.yaml` enables incremental checkpoints, unless the application overrides this setting in the code. You can alternatively configure this value directly in the code (it overrides the config default): `EmbeddedRocksDBStateBackend backend = new EmbeddedRocksDBStateBackend(true);`. By default, the last five checkpoints are preserved in the configured checkpoint directory. This value can be changed in the configuration management section with `state.checkpoints.num-retained: 5`|
-Apache Flink clusters in HDInsight include the following components, they are available on the clusters by default.
+Apache Flink clusters in HDInsight on AKS include the following components; they're available on the clusters by default.
* [DataStreamAPI](https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/dev/datastream/overview/#what-is-a-datastream) * [TableAPI & SQL](https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/dev/table/overview/#table-api--sql).
hdinsight-aks Use Hive Metastore Datastream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/use-hive-metastore-datastream.md
Title: Use Hive Metastore with Apache Flink® DataStream API
description: Use Hive Metastore with Apache Flink® DataStream API Previously updated : 08/29/2023 Last updated : 10/30/2023 # Use Hive Metastore with Apache Flink® DataStream API
If you're building your own program, you need the following dependencies in your
## Connect to Hive
-This example illustrates various snippets of connecting to hive, using HDInsight on AKS - Flink, you're required to use `/opt/hive-conf` as hive configuration directory to connect with Hive metastore
+This example illustrates various snippets of connecting to Hive using Apache Flink on HDInsight on AKS. You're required to use `/opt/hive-conf` as the Hive configuration directory to connect with the Hive metastore.
``` public static void main(String[] args) throws Exception
hdinsight-aks Create Spark Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/spark/create-spark-cluster.md
Title: How to create Spark cluster in HDInsight on AKS
description: Learn how to create Spark cluster in HDInsight on AKS Previously updated : 08/29/2023 Last updated : 10/30/2023 # Create Spark cluster in HDInsight on AKS (Preview)
You can use the Azure portal to create an Apache Spark cluster in cluster pool.
|Property| Description | |-|-|
- |Name |Optional. Enter a name such as HDInsight on AKSPrivatePreview to easily identify all resources associated with your resources|
+ |Name |Optional. Enter a name such as HDInsight on AKS Private Preview to easily identify all resources associated with your resources|
|Value |Leave this blank| |Resource |Select All resources selected| 1. Click **Next: Review + create**. 1. On the **Review + create page**, look for the Validation succeeded message at the top of the page and then click **Create**.
-1. The **Deployment is in process** page is displayed which the cluster is created. It takes 5-10 minutes to create the cluster. Once the cluster is created, **Your deployment is complete" message is displayed**. If you navigate away from the page, you can check your Notifications for the status.
+1. The **Deployment is in process** page is displayed while the cluster is created. It takes 5-10 minutes to create the cluster. Once the cluster is created, the **Your deployment is complete** message is displayed. If you navigate away from the page, you can check your Notifications for the status.
1. Go to the **cluster overview page**, you can see endpoint links there. :::image type="content" source="./media/create-spark-cluster/cluster-overview.png" alt-text="Screenshot showing cluster overview page."border="true" lightbox="./media/create-spark-cluster/cluster-overview.png":::
hdinsight-aks Trademarks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trademarks.md
Product names, logos and other material used on this Azure HDInsight on AKS lear
- Apache HBase, HBase and the HBase logo are trademarks of the [Apache Software Foundation](https://www.apache.org/) (ASF). - Apache, Apache Cassandra, Cassandra and Cassandra logo are trademarks of the [Apache Software Foundation](https://www.apache.org/) (ASF). - Apache®, Apache Spark™, Apache HBase®, Apache Kafka®, Apache Cassandra® and Apache Flink® are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. The use of these marks does not imply endorsement by The Apache Software Foundation.+
+All product and service names used in these pages are for identification purposes only and do not imply endorsement.
healthcare-apis Export Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/export-files.md
+
+ Title: Export DICOM files by using the export API of the DICOM service
+description: This how-to guide explains how to export DICOM files to an Azure Blob Storage account.
++++ Last updated : 10/30/2023+++
+# Export DICOM files
+
+The DICOM&reg; service provides the ability to easily export DICOM data in a file format. The service simplifies the process of using medical imaging in external workflows, such as AI and machine learning. You can use the export API to export DICOM studies, series, and instances in bulk to an [Azure Blob Storage account](../../storage/blobs/storage-blobs-introduction.md). DICOM data that's exported to a storage account is written as `.dcm` files in a folder structure that organizes instances by `StudyInstanceUID` and `SeriesInstanceUID`.
+
+There are three steps to exporting data from the DICOM service:
+
+- Enable a system-assigned managed identity for the DICOM service.
+- Configure a new or existing storage account and give permission to the system-assigned managed identity.
+- Use the export API to create a new export job to export the data.
+
+## Enable managed identity for the DICOM service
+
+The first step to export data from the DICOM service is to enable a system-assigned managed identity. This managed identity is used to authenticate the DICOM service and give permission to the storage account used as the destination for export. For more information about managed identities in Azure, see [About managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md).
+
+1. In the Azure portal, browse to the DICOM service that you want to export from and select **Identity**.
+
+ :::image type="content" source="media/dicom-export-identity.png" alt-text="Screenshot that shows selection of Identity view." lightbox="media/dicom-export-identity.png":::
+
+1. Set the **Status** option to **On**, and then select **Save**.
+
+ :::image type="content" source="media/dicom-export-enable-system-identity.png" alt-text="Screenshot that shows the system-assigned identity toggle." lightbox="media/dicom-export-enable-system-identity.png":::
+
+1. Select **Yes** in the confirmation dialog that appears.
+
+ :::image type="content" source="media/dicom-export-confirm-enable.png" alt-text="Screenshot that shows the dialog confirming enabling system identity." lightbox="media/dicom-export-confirm-enable.png":::
+
+It takes a few minutes to create the system-assigned managed identity. After the system identity is enabled, an **Object (principal) ID** appears.
+
+## Assign storage account permissions
+
+The system-assigned managed identity needs **Storage Blob Data Contributor** permission to write data to the destination storage account.
+
+1. Under **Permissions**, select **Azure role assignments**.
+
+ :::image type="content" source="media/dicom-export-azure-role-assignments.png" alt-text="Screenshot that shows the Azure role assignments button on the Identity view." lightbox="media/dicom-export-azure-role-assignments.png":::
+
+1. Select **Add role assignment**. On the **Add role assignment** pane, make the following selections:
+
+ * Under **Scope**, select **Storage**.
+ * Under **Resource**, select the destination storage account for the export operation.
+ * Under **Role**, select **Storage Blob Data Contributor**.
+
+ :::image type="content" source="media/dicom-export-add-role-assignment.png" alt-text="Screenshot that shows the Add role assignment pane." lightbox="media/dicom-export-add-role-assignment.png":::
+
+1. Select **Save** to add the permission to the system-assigned managed identity.
+
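+If you prefer to script the role assignment instead of using the portal steps above, the following is a minimal Azure PowerShell sketch. It assumes the Az module is installed and that you substitute your own values for the placeholder principal ID and storage account resource ID; it isn't the only supported way to grant the permission.
+
+```powershell
+# Sketch: grant Storage Blob Data Contributor to the DICOM service's system-assigned identity.
+# Placeholder values are assumptions - replace them with your own.
+$principalId = "<dicom-service-principal-id>"   # Object (principal) ID shown on the Identity view
+$storageAccountId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/dicomexport"
+
+New-AzRoleAssignment -ObjectId $principalId -RoleDefinitionName "Storage Blob Data Contributor" -Scope $storageAccountId
+```
+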
+## Use the export API
+
+The export API exposes one `POST` endpoint for exporting data.
+
+```
+POST <dicom-service-url>/<version>/export
+```
+
+Given a *source*, the set of data to be exported, and a *destination*, the location to which data will be exported, the endpoint returns a reference to a new, long-running export operation. The duration of this operation depends on the volume of data to be exported. For more information about monitoring progress of export operations, see the [Operation status](#operation-status) section.
+
+Any errors encountered while you attempt to export are recorded in an error log. For more information, see the [Errors](#errors) section.
+
+#### Request
+
+The request body consists of the export source and destination.
+
+```json
+{
+ "source": {
+ "type": "identifiers",
+ "settings": {
+ "values": [
+ "..."
+ ]
+ }
+ },
+ "destination": {
+ "type": "azureblob",
+ "settings": {
+ "setting": "<value>"
+ }
+ }
+}
+```
+
+#### Source settings
+
+The only setting is the list of identifiers to export.
+
+| Property | Required | Default | Description |
+| :- | :- | : | :- |
+| `Values` | Yes | | A list of one or more DICOM studies, series, and/or SOP instance identifiers in the format of `"<StudyInstanceUID>[/<SeriesInstanceUID>[/<SOPInstanceUID>]]"` |
+
+#### Destination settings
+
+The connection to the Blob Storage account is specified with `BlobContainerUri`.
+
+| Property | Required | Default | Description |
+| :- | :- | : | :- |
+| `BlobContainerUri` | No | `""` | The complete URI for the blob container |
+| `UseManagedIdentity` | Yes | `false` | A required flag that indicates whether managed identity should be used to authenticate to the blob container |
+
+#### Example
+
+The following example requests the export of the following DICOM resources to the blob container named `export` in the storage account named `dicomexport`:
+
+- All instances within the study whose `StudyInstanceUID` is `1.2.3`
+- All instances within the series whose `StudyInstanceUID` is `12.3` and `SeriesInstanceUID` is `4.5.678`
+- The instance whose `StudyInstanceUID` is `123.456`, `SeriesInstanceUID` is `7.8`, and `SOPInstanceUID` is `9.1011.12`
+
+```http
+POST /export HTTP/1.1
+Accept: */*
+Content-Type: application/json
+{
+ "sources": {
+ "type": "identifiers",
+ "settings": {
+ "values": [
+ "1.2.3",
+ "12.3/4.5.678",
+ "123.456/7.8/9.1011.12"
+ ]
+ }
+ },
+ "destination": {
+ "type": "azureblob",
+ "settings": {
+ "blobContainerUri": "https://dicomexport.blob.core.windows.net/export",
+ "UseManagedIdentity": true
+ }
+ }
+}
+```
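+
+As a rough illustration, you could send the same request from Azure PowerShell with `Invoke-RestMethod`. This is a minimal sketch rather than part of the export API reference: the service URL reuses the sample value from the response below, and the token audience URL is an assumption, so adjust both for your environment.
+
+```powershell
+# Sketch: start an export job against the DICOM service (assumed service URL and token audience).
+$dicomServiceUrl = "https://example-dicom.dicom.azurehealthcareapis.com"                   # assumption: your DICOM service URL
+$token = (Get-AzAccessToken -ResourceUrl "https://dicom.healthcareapis.azure.com").Token   # assumption: DICOM token audience
+
+$body = @{
+    source = @{
+        type = "identifiers"
+        settings = @{ values = @("1.2.3", "12.3/4.5.678", "123.456/7.8/9.1011.12") }
+    }
+    destination = @{
+        type = "azureblob"
+        settings = @{
+            blobContainerUri = "https://dicomexport.blob.core.windows.net/export"
+            useManagedIdentity = $true
+        }
+    }
+} | ConvertTo-Json -Depth 5
+
+$params = @{
+    Method      = "Post"
+    Uri         = "$dicomServiceUrl/v1/export"
+    Headers     = @{ Authorization = "Bearer $token" }
+    ContentType = "application/json"
+    Body        = $body
+}
+$operation = Invoke-RestMethod @params
+$operation.href   # URL to poll for the operation status
+```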
+
+#### Response
+
+The export API returns a `202` status code when an export operation is started successfully. The body of the response contains a reference to the operation, while the value of the `Location` header is the URL for the export operation's status (the same as `href` in the body).
+
+Inside the destination container, use the path format `<operation id>/results/<study>/<series>/<sop instance>.dcm` to find the DCM files.
+
+```http
+HTTP/1.1 202 Accepted
+Content-Type: application/json
+{
+ "id": "df1ff476b83a4a3eaf11b1eac2e5ac56",
+ "href": "https://example-dicom.dicom.azurehealthcareapis.com/v1/operations/df1ff476b83a4a3eaf11b1eac2e5ac56"
+}
+```
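+
+To browse the exported `.dcm` files under that path, you could list the blobs with Azure PowerShell. The sketch below assumes the `export` container in the `dicomexport` storage account from the example, the operation ID returned above, and that your signed-in account has read access to the container; it isn't part of the export API itself.
+
+```powershell
+# Sketch: list exported DCM files under <operation id>/results/ in the destination container.
+$operationId = "df1ff476b83a4a3eaf11b1eac2e5ac56"   # from the export response
+$ctx = New-AzStorageContext -StorageAccountName "dicomexport" -UseConnectedAccount
+
+Get-AzStorageBlob -Container "export" -Context $ctx -Prefix "$operationId/results/" |
+    Select-Object Name, Length
+```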
+
+#### Operation status
+
+Poll the preceding `href` URL for the current status of the export operation until completion. After the job has reached a terminal state, the API returns a `200` status code instead of `202`. The value of its status property is updated accordingly.
+
+```http
+HTTP/1.1 200 OK
+Content-Type: application/json
+{
+ "operationId": "df1ff476b83a4a3eaf11b1eac2e5ac56",
+ "type": "export",
+ "createdTime": "2022-09-08T16:40:36.2627618Z",
+ "lastUpdatedTime": "2022-09-08T16:41:01.2776644Z",
+ "status": "completed",
+ "results": {
+ "errorHref": "https://dicomexport.blob.core.windows.net/export/4853cda8c05c44e497d2bc071f8e92c4/errors.log",
+ "exported": 1000,
+ "skipped": 3
+ }
+}
+```
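+
+Continuing the earlier sketch, a simple polling loop over the operation URL might look like the following. Only `completed` appears in the sample response above; the other terminal status values checked here (`failed`, `canceled`) are assumptions, so add your own error handling.
+
+```powershell
+# Sketch: poll the export operation until it reaches a terminal state.
+do {
+    Start-Sleep -Seconds 10
+    $status = Invoke-RestMethod -Method Get -Uri $operation.href -Headers @{ Authorization = "Bearer $token" }
+    Write-Host "Export status: $($status.status)"
+} while ($status.status -notin @("completed", "failed", "canceled"))   # assumption: terminal states beyond 'completed'
+
+$status.results   # exported/skipped counts and errorHref when completed
+```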
+
+## Errors
+
+If there are any user errors when you export a DICOM file, the file is skipped and its corresponding error is logged. This error log is also exported alongside the DICOM files and the caller can review it. You can find the error log at `<export blob container uri>/<operation ID>/errors.log`.
+
+#### Format
+
+Each line of the error log is a JSON object with the following properties. A given error identifier might appear multiple times in the log as each update to the log is processed *at least once*.
+
+| Property | Description |
+| | -- |
+| `Timestamp` | The date and time when the error occurred |
+| `Identifier` | The identifier for the DICOM study, series, or SOP instance in the format of `"<study instance UID>[/<series instance UID>[/<SOP instance UID>]]"` |
+| `Error` | The detailed error message |
+
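+Because each line of the error log is a standalone JSON object, it can be parsed line by line. The following sketch assumes you've already downloaded `errors.log` locally (for example, from the `errorHref` location); the property names come from the table above.
+
+```powershell
+# Sketch: parse a downloaded errors.log (one JSON object per line) and group repeated identifiers.
+Get-Content -Path ".\errors.log" |
+    ForEach-Object { $_ | ConvertFrom-Json } |
+    Group-Object -Property Identifier |
+    ForEach-Object {
+        [pscustomobject]@{
+            Identifier = $_.Name
+            Errors     = ($_.Group | Select-Object -ExpandProperty Error -Unique)
+        }
+    }
+```
+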
healthcare-apis How To Configure Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-configure-metrics.md
Metric category|Metric name|Metric description|
> To learn how to create an Azure portal dashboard and pin tiles, see [How to create an Azure portal dashboard and pin tiles](how-to-configure-metrics.md#how-to-create-an-azure-portal-dashboard-and-pin-tiles) > [!TIP]
- > To learn more about advanced metrics display and sharing options, see [Getting started with Azure Metrics Explorer](../../azure-monitor/essentials/metrics-getting-started.md)
+ > To learn more about advanced metrics display and sharing options, see [Analyze metrics with Azure Monitor metrics explorer](../../azure-monitor/essentials/analyze-metrics.md).
## How to create an Azure portal dashboard and pin tiles
healthcare-apis How To Use Monitoring And Health Checks Tabs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-monitoring-and-health-checks-tabs.md
In this article, learn how to use the MedTech service monitoring and health chec
:::image type="content" source="media\how-to-use-monitoring-and-health-checks-tabs\pin-metrics-to-dashboard.png" alt-text="Screenshot the MedTech service monitoring tile with red box around the pin icon." lightbox="media\how-to-use-monitoring-and-health-checks-tabs\pin-metrics-to-dashboard.png"::: > [!TIP]
- > To learn more about advanced metrics display and sharing options, see [Getting started with Azure Metrics Explorer](../../azure-monitor/essentials/metrics-getting-started.md)
+ > To learn more about advanced metrics display and sharing options, see [Analyze metrics with Azure Monitor metrics explorer](../../azure-monitor/essentials/analyze-metrics.md).
## Available metrics for the MedTech service
iot-dps Monitor Iot Dps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/monitor-iot-dps.md
The metrics and logs you can collect are discussed in the following sections.
## Analyzing metrics
-You can analyze metrics for DPS with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool.
+You can analyze metrics for DPS with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md) for details on using this tool.
In Azure portal, you can select **Metrics** under **Monitoring** on the left-pane of your DPS instance to open metrics explorer scoped, by default, to the platform metrics emitted by your instance:
iot-edge Configure Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/configure-device.md
username = "username"
password = "password" [agent.env]
-"RuntimeLogLevel" = "debug"
-"UpstreamProtocol" = "AmqpWs"
-"storageFolder" = "/iotedge/storage"
+RuntimeLogLevel = "debug"
+UpstreamProtocol = "AmqpWs"
+storageFolder = "/iotedge/storage"
``` ## Daemon management and workload API endpoints
iot-hub Monitor Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/monitor-iot-hub.md
When routing IoT Hub platform metrics to other locations:
## Analyzing metrics
-You can analyze metrics for Azure IoT Hub with metrics from other Azure services using metrics explorer. For more information on this tool, see [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md).
+You can analyze metrics for Azure IoT Hub with metrics from other Azure services using metrics explorer. For more information on this tool, see [Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md).
To open metrics explorer, go to the Azure portal and open your IoT hub, then select **Metrics** under **Monitoring**. This explorer is scoped, by default, to the platform metrics emitted by your IoT hub.
key-vault Monitor Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/monitor-key-vault.md
To create a diagnostic setting for you key vault, see [Enable Key Vault logging]
## Analyzing metrics
-You can analyze metrics for Key Vault with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](../../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool.
+You can analyze metrics for Key Vault with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Analyze metrics with Azure Monitor metrics explorer](../../azure-monitor/essentials/analyze-metrics.md) for details on using this tool.
For a list of the platform metrics collected for Key Vault, see [Monitoring Key Vault data reference metrics](monitor-key-vault-reference.md#metrics)
load-balancer Monitor Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/monitor-load-balancer.md
The metrics and logs you can collect are discussed in the following sections.
## Analyzing metrics
-You can analyze metrics for Load Balancer with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool.
+You can analyze metrics for Load Balancer with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md) for details on using this tool.
For a list of the platform metrics collected for Load Balancer, see [Monitoring Load Balancer data reference metrics](monitor-load-balancer-reference.md#metrics)
load-balancer Upgrade Basic Standard With Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-basic-standard-with-powershell.md
The PowerShell module performs the following functions:
- Basic Load Balancers with a Virtual Machine Scale Set backend pool member where one or more Virtual Machine Scale Set instances have ProtectFromScaleSetActions Instance Protection policies enabled - Migrating a Basic Load Balancer to an existing Standard Load Balancer
+## Install the 'AzureBasicLoadBalancerUpgrade' module
+ ### Prerequisites -- **PowerShell**: A supported version of PowerShell version 7 or higher is the recommended version of PowerShell for use with the AzureBasicLoadBalancerUpgrade module on all platforms including Windows, Linux, and macOS. However, Windows PowerShell 5.1 is supported.
+- **PowerShell**: PowerShell version 7 or higher is recommended for use with the AzureBasicLoadBalancerUpgrade module on all platforms, including Windows, Linux, and macOS. However, PowerShell 5.1 on Windows is also supported.
- **Az PowerShell Module**: Determine whether you have the latest Az PowerShell module installed - Install the latest [Az PowerShell module](/powershell/azure/install-azure-powershell) - **Az.ResourceGraph PowerShell Module**: The Az.ResourceGraph PowerShell module is used to query resource configuration during upgrade and is a separate install from the Az PowerShell module. It will be automatically installed if you install the `AzureBasicLoadBalancerUpgrade` module using the `Install-Module` command as shown below.
-## Install the 'AzureBasicLoadBalancerUpgrade' module
+### Module Installation
Install the module from [PowerShell gallery](https://www.powershellgallery.com/packages/AzureBasicLoadBalancerUpgrade)
Install the module from [PowerShell gallery](https://www.powershellgallery.com/p
PS C:\> Install-Module -Name AzureBasicLoadBalancerUpgrade -Scope CurrentUser -Repository PSGallery -Force ```
+## Pre- and post-migration steps
+
+### Pre-migration steps
+
+- [Validate](#example-validate-a-scenario) that your scenario is supported
+- Plan for [application downtime](#how-long-does-the-upgrade-take) during migration
+- Develop inbound and outbound connectivity tests for your traffic
+- Plan for instance-level Public IP changes on Virtual Machine Scale Set instances (see note above)
+- [Recommended] Create Network Security Groups, or add security rules to an existing Network Security Group, for your backend pool members, allowing traffic through the Load Balancer and any other traffic that needs to be explicitly allowed on public Standard SKU resources
+- [Recommended] Prepare your [outbound connectivity](../virtual-network/ip-services/default-outbound-access.md), taking one of the following approaches:
+ - Add a NAT Gateway to your backend member's subnets
+ - Add Public IP addresses to each backend Virtual Machine or [Virtual Machine Scale Set instance](../virtual-machine-scale-sets/virtual-machine-scale-sets-networking.md#public-ipv4-per-virtual-machine)
+ - Plan to create [Outbound Rules](./outbound-rules.md) for Public Load Balancers with multiple backend pools post-migration
+
+### Post-migration steps
+
+- [Validate that your migration was successful](#example-validate-completed-migration)
+- Test inbound application connectivity through the Load Balancer
+- Test outbound connectivity from backend pool members to the Internet
+- For Public Load Balancers with multiple backend pools, create [Outbound Rules](./outbound-rules.md) for each backend pool
+ ## Use the module 1. Use `Connect-AzAccount` to connect to the required Microsoft Entra tenant and Azure subscription
PS C:\> Install-Module -Name AzureBasicLoadBalancerUpgrade -Scope CurrentUser -R
4. Run the Upgrade command.
-### Example: upgrade a Basic Load Balancer to a Standard Load Balancer with the same name, providing the Basic Load Balancer name and resource group
+### Example: validate a scenario
+
+Validate that a Basic Load Balancer is supported for upgrade
+
+```powershell
+PS C:\> Start-AzBasicLoadBalancerUpgrade -ResourceGroupName <loadBalancerRGName> -BasicLoadBalancerName <basicLBName> -validateScenarioOnly
+```
+
+### Example: upgrade by name
+
+Upgrade a Basic Load Balancer to a Standard Load Balancer with the same name, providing the Basic Load Balancer name and resource group name
```powershell
-PS C:\> Start-AzBasicLoadBalancerUpgrade -ResourceGroupName <load balancer resource group name> -BasicLoadBalancerName <existing Basic Load Balancer name>
+PS C:\> Start-AzBasicLoadBalancerUpgrade -ResourceGroupName <loadBalancerRGName> -BasicLoadBalancerName <basicLBName>
```
-### Example: upgrade a Basic Load Balancer to a Standard Load Balancer with the specified name, displaying logged output on screen
+### Example: upgrade, change name, and show logs
+
+Upgrade a Basic Load Balancer to a Standard Load Balancer with the specified name, displaying logged output on screen
```powershell
-PS C:\> Start-AzBasicLoadBalancerUpgrade -ResourceGroupName <load balancer resource group name> -BasicLoadBalancerName <existing Basic Load Balancer name> -StandardLoadBalancerName <new Standard Load Balancer name> -FollowLog
+PS C:\> Start-AzBasicLoadBalancerUpgrade -ResourceGroupName <loadBalancerRGName> -BasicLoadBalancerName <basicLBName> -StandardLoadBalancerName <newStandardLBName> -FollowLog
```
-### Example: upgrade a Basic Load Balancer to a Standard Load Balancer with the specified name and store the Basic Load Balancer backup file at the specified path
+### Example: upgrade with alternate backup path
+
+Upgrade a Basic Load Balancer to a Standard Load Balancer with the specified name and store the Basic Load Balancer backup file at the specified path
```powershell
-PS C:\> Start-AzBasicLoadBalancerUpgrade -ResourceGroupName <load balancer resource group name> -BasicLoadBalancerName <existing Basic Load Balancer name> -StandardLoadBalancerName <new Standard Load Balancer name> -RecoveryBackupPath C:\BasicLBRecovery
+PS C:\> Start-AzBasicLoadBalancerUpgrade -ResourceGroupName <loadBalancerRGName> -BasicLoadBalancerName <basicLBName> -StandardLoadBalancerName <newStandardLBName> -RecoveryBackupPath C:\BasicLBRecovery
```
-### Example: validate a completed migration by passing the Basic Load Balancer state file backup and the Standard Load Balancer name
+### Example: validate completed migration
+
+Validate a completed migration by passing the Basic Load Balancer state file backup and the Standard Load Balancer name
```powershell PS C:\> Start-AzBasicLoadBalancerUpgrade -validateCompletedMigration -basicLoadBalancerStatePath C:\RecoveryBackups\State_mybasiclb_rg-basiclbrg_20220912T1740032148.json ```
-### Example: migrate multiple Load Balancers with shared backend members at the same time
+### Example: migrate multiple, related Load Balancers
+
+Migrate multiple Load Balancers with shared backend members at the same time, usually when an application has an internal and an external Load Balancer
```powershell # build array of multiple basic load balancers PS C:\> $multiLBConfig = @( @{
- 'standardLoadBalancerName' = 'myStandardLB01'
+ 'standardLoadBalancerName' = 'myStandardInternalLB01' # specifying the standard load balancer name is optional
'basicLoadBalancer' = (Get-AzLoadBalancer -ResourceGroupName myRG -Name myBasicInternalLB01) }, @{
- 'standardLoadBalancerName' = 'myStandardLB02'
+ 'standardLoadBalancerName' = 'myStandardExternalLB02'
'basicLoadBalancer' = (Get-AzLoadBalancer -ResourceGroupName myRG -Name myBasicExternalLB02) } )
+# pass the array of load balancer configurations to the -MultiLBConfig parameter
PS C:\> Start-AzBasicLoadBalancerUpgrade -MultiLBConfig $multiLBConfig ```
-### Example: retry a failed upgrade for a virtual machine scale set's load balancer (due to error or script termination) by providing the Basic Load Balancer and Virtual Machine Scale Set backup state file
+### Example: retry failed virtual machine scale set migration
+
+Retry a failed upgrade for a virtual machine scale set's load balancer (due to error or script termination) by providing the Basic Load Balancer and Virtual Machine Scale Set backup state file
```powershell PS C:\> Start-AzBasicLoadBalancerUpgrade -FailedMigrationRetryFilePathLB C:\RecoveryBackups\State_mybasiclb_rg-basiclbrg_20220912T1740032148.json -FailedMigrationRetryFilePathVMSS C:\RecoveryBackups\VMSS_myVMSS_rg-basiclbrg_20220912T1740032148.json ```
-### Example: retry a failed upgrade for a VM load balancer (due to error or script termination) by providing the Basic Load Balancer backup state file
+### Example: retry failed virtual machine migration
+
+Retry a failed upgrade for a VM load balancer (due to error or script termination) by providing the Basic Load Balancer backup state file
```powershell PS C:\> Start-AzBasicLoadBalancerUpgrade -FailedMigrationRetryFilePathLB C:\RecoveryBackups\State_mybasiclb_rg-basiclbrg_20220912T1740032148.json
The script migrates the following from the Basic Load Balancer to the Standard L
### How do I migrate when my backend pool members belong to multiple Load Balancers?
-In a scenario where your backend pool members are also members of backend pools on another Load Balancer, such as when you have internal and external Load Balancers for the same application, the Basic Load Balancers need to be migrated at the same time. Trying to migrate the Load Balancers one at a time would attempt to mix Basic and Standard SKU resources, which is not allowed. The migration script supports this by passing multiple Basic Load Balancers into the same [script execution using the `-MultiLBConfig` parameter](#example-migrate-multiple-load-balancers-with-shared-backend-members-at-the-same-time).
+In a scenario where your backend pool members are also members of backend pools on another Load Balancer, such as when you have internal and external Load Balancers for the same application, the Basic Load Balancers need to be migrated at the same time. Trying to migrate the Load Balancers one at a time would attempt to mix Basic and Standard SKU resources, which is not allowed. The migration script supports this by passing multiple Basic Load Balancers into the same [script execution using the `-MultiLBConfig` parameter](#example-migrate-multiple-related-load-balancers).
### How do I validate that a migration was successful?
logic-apps Logic Apps Limits And Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-limits-and-config.md
ms.suite: integration Previously updated : 10/09/2023 Last updated : 10/30/2023 # Limits and configuration reference for Azure Logic Apps
The following tables list the values for a single workflow definition:
| - | -- | -- | | Workflows per region per Azure subscription | - Consumption: 1,000 workflows where each logic app is limited to 1 workflow <br><br>- Standard: Unlimited, based on the selected hosting plan, app activity, size of machine instances, and resource usage, where each logic app can have multiple workflows || | Workflow - Maximum name length | - Consumption: 80 characters <br><br>- Standard: 32 characters ||
-| Triggers per workflow | 10 triggers | This limit applies only when you work on the JSON workflow definition, whether in code view or an Azure Resource Manager (ARM) template, not the designer. |
+| Triggers per workflow | - Consumption (designer): 1 trigger <br>- Consumption (JSON): 10 triggers <br><br>- Standard: 1 trigger | - Consumption: Multiple triggers are possible only when you work on the JSON workflow definition, whether in code view or an Azure Resource Manager (ARM) template, not the designer. <br><br>- Standard: Only one trigger is possible, whether in the designer, code view, or an Azure Resource Manager (ARM) template. |
| Actions per workflow | 500 actions | To extend this limit, you can use nested workflows as necessary. | | Actions nesting depth | 8 actions | To extend this limit, you can use nested workflows as necessary. | | Single trigger or action - Maximum name length | 80 characters ||
logic-apps Single Tenant Overview Compare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/single-tenant-overview-compare.md
ms.suite: integration Previously updated : 08/30/2023 Last updated : 10/30/2023
For the **Standard** logic app workflow, these capabilities have changed, or the
* [Custom managed connectors](../connectors/introduction.md#custom-connectors-and-apis) currently aren't supported. However, you can create *custom built-in operations* when you use Visual Studio Code. For more information, review [Create single-tenant based workflows using Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md#enable-built-in-connector-authoring).
+ * A Standard logic app workflow supports only one trigger, not multiple triggers.
+ * **Authentication**: The following authentication types are currently unavailable for **Standard** workflows: * Microsoft Entra ID Open Authentication (Microsoft Entra ID OAuth) for inbound calls to request-based triggers, such as the Request trigger and HTTP Webhook trigger.
machine-learning Monitor Azure Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/monitor-azure-machine-learning.md
The metrics and logs you can collect are discussed in the following sections.
## Analyzing metrics
-You can analyze metrics for Azure Machine Learning, along with metrics from other Azure services, by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool.
+You can analyze metrics for Azure Machine Learning, along with metrics from other Azure services, by opening **Metrics** from the **Azure Monitor** menu. See [Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md) for details on using this tool.
For a list of the platform metrics collected, see [Monitoring Azure Machine Learning data reference metrics](monitor-resource-reference.md#metrics).
managed-grafana Concept Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/concept-whats-new.md
Previously updated : 02/06/2023 Last updated : 10/30/2023
-# What's New in Azure Managed Grafana
+# What's new in Azure Managed Grafana
-## May 2023
+## October 2023
-### Managed Private Endpoint
+* Azure Managed Grafana has a new [Essential pricing plan](overview.md#service-tiers) available in preview. This plan provides core Grafana functionalities at a reduced price and is designed to be used in non-production environments.
-Connecting Azure Managed Grafana instances to data sources using private links is now supported as a preview.
+## September 2023
-For more information, go to [Connect to a data source privately](how-to-connect-to-data-source-privately.md).
+* Support for [Microsoft Entra groups](how-to-sync-teams-with-azure-ad-groups.md) is available in preview in Azure Managed Grafana.
-### Support for SMTP settings
+* Plugin management is available in preview. This feature lets you manage installed Grafana plugins directly within an Azure Managed Grafana workspace.
-SMTP support in Azure Managed Grafana is now generally available.
+* Azure Monitor workspaces integration is available in preview. This feature allows you to link your Grafana dashboard to Azure Monitor workspaces. This integration simplifies the process of connecting AKS clusters to an Azure Managed Grafana workspace and collecting metrics.
-For more information, go to [Configure SMTP settings](how-to-smtp-settings.md).
+## May 2023
-### Reporting
+* Connecting Azure Managed Grafana instances to data sources using managed private endpoints is available in preview. For more information about managed private endpoints, go to [Connect to a data source privately](how-to-connect-to-data-source-privately.md).
-Reporting is now supported in Azure Managed Grafana as a preview.
+* SMTP support in Azure Managed Grafana is now generally available. For more information, go to [Configure SMTP settings](how-to-smtp-settings.md).
-For more information, go to [Use reporting and image rendering](how-to-use-reporting-and-image-rendering.md).
+* Reporting is now supported in Azure Managed Grafana as a preview. For more information, go to [Use reporting and image rendering](how-to-use-reporting-and-image-rendering.md).
## February 2023
-### Support for SMTP settings
-
-Configuring SMTP settings for Azure Managed Grafana is now supported.
-
-For more information, go to [SMTP settings](how-to-smtp-settings.md).
+* Configuring SMTP settings for Azure Managed Grafana is now supported. For more information, go to [SMTP settings](how-to-smtp-settings.md).
## January 2023
-### Support for Grafana Enterprise
-
-Grafana Enterprise is now supported.
-
-For more information, go to [Subscribe to Grafana Enterprise](how-to-grafana-enterprise.md).
-
-### Support for service accounts
-
-Service accounts are now supported.
+* Grafana Enterprise is supported. For more information, go to [Subscribe to Grafana Enterprise](how-to-grafana-enterprise.md).
-For more information, go to [How to use service accounts](how-to-service-accounts.md).
+* Service accounts are supported. For more information, go to [How to use service accounts](how-to-service-accounts.md).
## Next steps
managed-grafana Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/overview.md
Azure Managed Grafana is available in the two service tiers presented below.
> [!NOTE] > The Essential plan (preview) is currently being rolled out and will be available in all cloud regions on October 30, 2023.
-The [Azure Managed Grafana pricing page](https://azure.microsoft.com/pricing/details/managed-grafana/) gives more information on these tiers and the following table lists which main features are supported in each tier:
+The following table lists the main features supported in each tier:
| Feature | Essential (preview) | Standard | ||-|--|
migrate Discover And Assess Using Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/discover-and-assess-using-private-endpoints.md
In the **Advanced** configuration section, provide the following details to crea
Azure Migrate: Discovery and assessment use a lightweight Azure Migrate appliance. The appliance performs server discovery and sends server configuration and performance metadata to Azure Migrate. > [!Note]
-> If you have deployed an appliance using a template (OVA for servers on a VMware environment and VHD for a Hyper-V environment), you can use the same appliance and register it with an Azure Migrate project with private endpoint connectivity.
+> If you have deployed an appliance using a template (OVA for servers on a VMware environment and VHD for a Hyper-V environment), you can use the same appliance and register it with an Azure Migrate project with private endpoint connectivity. You will need to run the Azure Migrate installer script and select the private endpoint connectivity option mentioned in the instructions below.
To set up the appliance: 1. Download the zipped file that contains the installer script from the portal.
After the script has executed successfully, the appliance configuration manager
> [!NOTE] > If you come across any issues, you can access the script logs at C:\ProgramData\Microsoft Azure\Logs\AzureMigrateScenarioInstaller_<em>Timestamp</em>.log for troubleshooting.
+## Enable DNS resolution to private endpoints
+
+1. The DNS records required for the private endpoints can be downloaded from the Azure Migrate project. For instructions on how to download the DNS entries, see [Verify DNS resolution](./troubleshoot-network-connectivity.md#verify-dns-resolution).
+2. Add these DNS records to your on-premises DNS server by following the [private endpoint DNS configuration documentation](../private-link/private-endpoint-dns.md), or add them to the local hosts file on the Azure Migrate appliance.
+ ## Configure the appliance and start continuous discovery Open a browser on any machine that can connect to the appliance server. Open the URL of the appliance configuration manager, `https://appliance name or IP address: 44368`. Or, you can open the configuration manager from the appliance server desktop by selecting the shortcut for the configuration manager.
You can also [assess your on-premises machines](./tutorial-discover-import.md#pr
## Next steps -- [Migrate servers to Azure using Private Link](migrate-servers-to-azure-using-private-link.md).
+- [Migrate servers to Azure using Private Link](migrate-servers-to-azure-using-private-link.md).
mysql Concepts Data Out Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-data-out-replication.md
[!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)]
-Data-out replication allows you to synchronize data out of a Azure Database for MySQL flexible server to another MySQL server using MySQL native replication. The MySQL server (replica) can be on-premises, in virtual machines, or a database service hosted by other cloud providers. While [Data-in replication](concepts-data-in-replication.md) helps to move data into an Azure Database for MySQL flexible server (replica), Data-out replication would allow you to transfer data out of an Azure Database for MySQL flexible server (Primary). With Data-out replication, the binary log (binlog) is made community consumable allowing the an Azure Database for MySQL flexible server to act as a Primary server for the external replicas. To learn more about binlog replication, see the [MySQL binlog replication overview](https://dev.mysql.com/doc/refman/5.7/en/binlog-replication-configuration-overview.html).
+Data-out replication allows you to synchronize data out of an Azure Database for MySQL flexible server to another MySQL server using MySQL native replication. The MySQL server (replica) can be on-premises, in virtual machines, or a database service hosted by other cloud providers. While [Data-in replication](concepts-data-in-replication.md) helps to move data into an Azure Database for MySQL flexible server (replica), Data-out replication would allow you to transfer data out of an Azure Database for MySQL flexible server (Primary). With Data-out replication, the binary log (binlog) is made community consumable, allowing an Azure Database for MySQL flexible server to act as a Primary server for the external replicas. To learn more about binlog replication, see the [MySQL binlog replication overview](https://dev.mysql.com/doc/refman/5.7/en/binlog-replication-configuration-overview.html).
> [!NOTE] > Data-out replication is not supported on Azure Database for MySQL - Flexible Server, which has Azure authentication configured.
mysql How To Maintenance Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-maintenance-portal.md
description: Learn how to configure scheduled maintenance settings for an Azure
--+++ Last updated 9/21/2020
To complete this how-to guide, you need:
1. On the MySQL server page, under the **Settings** heading, choose **Maintenance** to open scheduled maintenance options. 2. The default (system-managed) schedule is a random day of the week, and 60-minute window for maintenance start between 11pm and 7am local server time. If you want to customize this schedule, choose **Custom schedule**. You can then select a preferred day of the week, and a 60-minute window for maintenance start time.
+## Reschedule Maintenance (Public Preview)
+
+1. In the Maintenance window, you'll notice a new button labeled **Reschedule**.
+2. Upon clicking **Reschedule**, a "Maintenance Reschedule" window appears where you can select a new date and time for the scheduled maintenance activity.
+3. After selecting your preferred date and time, select **Reschedule** to confirm your choice.
+4. You also have an option for on-demand maintenance by clicking **Reschedule to Now**. A confirmation dialog appears to verify that you understand the potential effect, including possible server downtime.
+
+Rescheduling maintenance also triggers email notifications to keep you informed.
+
+The availability of the rescheduling window isn't fixed; it often depends on the size of the overall maintenance window for the region in which your server resides. This means the reschedule options can vary based on regional operations and workload.
+
+> [!NOTE]
+> Maintenance Reschedule is only available for General Purpose and Business Critical service tiers.
+
+### Considerations and limitations
+
+Be aware of the following when using this feature:
+
+- **Demand Constraints:** Your rescheduled maintenance might be canceled due to a high number of maintenance activities occurring simultaneously in the same region.
+- **Lock-in Period:** Rescheduling is unavailable 15 minutes prior to the initially scheduled maintenance time to maintain the reliability of the service.
+ ## Notifications about scheduled maintenance events You can use Azure Service Health to [view notifications](../../service-health/service-notifications.md) about upcoming and performed scheduled maintenance on your Flexible server. You can also [set up](../../service-health/resource-health-alert-monitor-guide.md) alerts in Azure Service Health to get notifications about maintenance events.
mysql Sample Scripts Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/sample-scripts-azure-cli.md
The following table includes links to sample Azure CLI scripts for Azure Databas
| Sample link | Description | ||| |**Create and connect to a server**||
-| [Create a server and enable public access connectivity](scripts/sample-cli-create-connect-public-access.md) | Creates a Azure Database for MySQL - Flexible Server, configures a server-level firewall rule (public access connectivity method) and connects to the server. |
+| [Create a server and enable public access connectivity](scripts/sample-cli-create-connect-public-access.md) | Creates an Azure Database for MySQL - Flexible Server, configures a server-level firewall rule (public access connectivity method) and connects to the server. |
| [Create a server and enable private access connectivity (VNet Integration)](scripts/sample-cli-create-connect-private-access.md) | Creates an Azure Database for MySQL - Flexible Server in a VNet (private access connectivity method) and connects to the server through a VM within the VNet. | |**Monitor and scale**|| | [Monitor metrics and scale a server](scripts/sample-cli-monitor-and-scale.md) | Monitors and scales a single Azure Database for MySQL - Flexible server up or down to allow for changing performance needs. |
mysql How To Configure Private Link Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-private-link-cli.md
az vm create \
## Create an Azure Database for MySQL server
-Create a Azure Database for MySQL with the az mysql server create command. Remember that the name of your MySQL Server must be unique across Azure, so replace the placeholder value in brackets with your own unique value:
+Create an Azure Database for MySQL with the az mysql server create command. Remember that the name of your MySQL Server must be unique across Azure, so replace the placeholder value in brackets with your own unique value:
```azurecli-interactive # Create a server in the resource group
networking Network Monitoring Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/network-monitoring-overview.md
description: Overview of network monitoring solutions, including network perform
Previously updated : 03/23/2023 Last updated : 10/30/2023 # Network monitoring solutions Azure offers a host of solutions to monitor your networking assets. Azure has solutions and utilities to monitor network connectivity, the health of ExpressRoute circuits, and analyze network traffic in the cloud.
+> [!IMPORTANT]
+> As of July 1, 2021, you can no longer add new tests in an existing workspace or enable a new workspace in Network Performance Monitor (NPM). You're also no longer able to add new connection monitors in Connection Monitor (Classic). You can continue to use the tests and connection monitors that you've created prior to July 1, 2021.
+>
+> To minimize service disruption to your current workloads, [migrate your tests from Network Performance Monitor](/azure/network-watcher/migrate-to-connection-monitor-from-network-performance-monitor), or [migrate from Connection Monitor (Classic)](/azure/network-watcher/migrate-to-connection-monitor-from-connection-monitor-classic) to the new Connection Monitor in Azure Network Watcher before February 29, 2024.
+ ## Network Performance Monitor Network Performance Monitor is a suite of capabilities that is geared towards monitoring the health of your network. Network Performance Monitor monitors network connectivity to your applications, and provides insights into the performance of your network. Network Performance Monitor is cloud-based and provides a hybrid network monitoring solution that monitors connectivity between:
operator-nexus Concepts Observability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/concepts-observability.md
The 'InsightMetrics' table in the Logs section contains the metrics collected fr
Figure: Azure Monitor Metrics Selection
-See **[Getting Started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md)** for details on using this tool.
+See **[Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md)** for details on using this tool.
#### Workbooks
operator-nexus Howto Configure Isolation Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-configure-isolation-domain.md
Expected Output
## Create an untrusted L3 isolation domain ```azurecli
-az nf l3domain create --resource-group "ResourceGroupName" --resource-name "l3untrust" --location "eastus" --nf-id "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroupName/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName"
+az networkfabric l3domain create --resource-group "ResourceGroupName" --resource-name "l3untrust" --location "eastus" --nf-id "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroupName/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName"
``` ## Create a trusted L3 isolation domain ```azurecli
-az nf l3domain create --resource-group "ResourceGroupName" --resource-name "l3trust" --location "eastus" --nf-id "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroupName/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName"
+az networkfabric l3domain create --resource-group "ResourceGroupName" --resource-name "l3trust" --location "eastus" --nf-id "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroupName/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName"
``` ## Create a management L3 isolation domain ```azurecli
-az nf l3domain create --resource-group "ResourceGroupName" --resource-name "l3mgmt" --location "eastus" --nf-id "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroupName/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName"
+az networkfabric l3domain create --resource-group "ResourceGroupName" --resource-name "l3mgmt" --location "eastus" --nf-id "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroupName/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName"
``` ### Show L3 isolation-domains
Use the following command to change the administrative state of an L3 isolation
##Note: At least one internal network should be available to change the administrative state of an L3 Isolation Domain. ```azurecli
-az nf l3domain update-admin-state --resource-group "ResourceGroupName" --resource-name "example-l3domain" --state Enable/Disable
+az networkfabric l3domain update-admin-state --resource-group "ResourceGroupName" --resource-name "example-l3domain" --state Enable/Disable
``` Expected Output
Use the `az show` command to verify whether the administrative state has changed
Use this command to delete an L3 isolation domain: ```azurecli
- az nf l3domain delete --resource-group "ResourceGroupName" --resource-name "example-l3domain"
+ az networkfabric l3domain delete --resource-group "ResourceGroupName" --resource-name "example-l3domain"
``` Use the `show` or `list` commands to validate that the isolation-domain has been deleted.
Expected Output
## Create an untrusted internal network for an L3 isolation domain ```azurecli
-az nf internalnetwork create --resource-group "ResourceGroupName" --l3-isolation-domain-name l3untrust --resource-name untrustnetwork --location "eastus" --vlan-id 502 --fabric-asn 65048 --peer-asn 65047--connected-i-pv4-subnets prefix="10.151.3.11/24" --mtu 1500
+az networkfabric internalnetwork create --resource-group "ResourceGroupName" --l3-isolation-domain-name l3untrust --resource-name untrustnetwork --location "eastus" --vlan-id 502 --fabric-asn 65048 --peer-asn 65047--connected-i-pv4-subnets prefix="10.151.3.11/24" --mtu 1500
``` ## Create a trusted internal network for an L3 isolation domain ```azurecli
-az nf internalnetwork create --resource-group "ResourceGroupName" --l3-isolation-domain-name l3trust --resource-name trustnetwork --location "eastus" --vlan-id 503 --fabric-asn 65048 --peer-asn 65047--connected-i-pv4-subnets prefix="10.151.1.11/24" --mtu 1500
+az networkfabric internalnetwork create --resource-group "ResourceGroupName" --l3-isolation-domain-name l3trust --resource-name trustnetwork --location "eastus" --vlan-id 503 --fabric-asn 65048 --peer-asn 65047--connected-i-pv4-subnets prefix="10.151.1.11/24" --mtu 1500
``` ## Create an internal management network for an L3 isolation domain ```azurecli
-az nf internalnetwork create --resource-group "ResourceGroupName" --l3-isolation-domain-name l3mgmt --resource-name mgmtnetwork --location "eastus" --vlan-id 504 --fabric-asn 65048 --peer-asn 65047--connected-i-pv4-subnets prefix="10.151.2.11/24" --mtu 1500
+az networkfabric internalnetwork create --resource-group "ResourceGroupName" --l3-isolation-domain-name l3mgmt --resource-name mgmtnetwork --location "eastus" --vlan-id 504 --fabric-asn 65048 --peer-asn 65047--connected-i-pv4-subnets prefix="10.151.2.11/24" --mtu 1500
```
Expected Output
## Enable an L2 Isolation Domain ```azurecli
-az nf l2domain update-administrative-state --resource-group "ResourceGroupName" --resource-name "l2HAnetwork" --state Enable
+az networkfabric l2domain update-administrative-state --resource-group "ResourceGroupName" --resource-name "l2HAnetwork" --state Enable
``` ## Enable an L3 isolation domain
az nf l2domain update-administrative-state --resource-group "ResourceGroupName"
Use this command to enable an untrusted L3 isolation domain: ```azurecli
-az nf l3domain update-admin-state --resource-group "ResourceGroupName" --resource-name "l3untrust" --state Enable
+az networkfabric l3domain update-admin-state --resource-group "ResourceGroupName" --resource-name "l3untrust" --state Enable
``` Use this command to enable a trusted L3 isolation domain: ```azurecli
-az nf l3domain update-admin-state --resource-group "ResourceGroupName" --resource-name "l3trust" --state Enable
+az networkfabric l3domain update-admin-state --resource-group "ResourceGroupName" --resource-name "l3trust" --state Enable
``` Use this command to enable a management L3 isolation domain: ```azurecli
-az nf l3domain update-admin-state --resource-group "ResourceGroupName" --resource-name "l3mgmt" --state Enable
+az networkfabric l3domain update-admin-state --resource-group "ResourceGroupName" --resource-name "l3mgmt" --state Enable
```
operator-nexus Reference Nexus Kubernetes Cluster Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/reference-nexus-kubernetes-cluster-supported-versions.md
Note the following important changes to make before you upgrade to any of the av
| Kubernetes Version | Version Bundle | Components | OS components | Breaking Changes | Notes |
|--|--|--|--|--|--|
-| 1.25.4 | 1 | Calico v3.24.0<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.8.4<br>etcd v3.5.6-5 | Mariner 2.0 (2023-06-18) | No breaking changes | |
-| 1.25.4 | 2 | Calico v3.24.0<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.8.4<br>etcd v3.5.6-5 | Mariner 2.0 (2023-06-18) | No breaking changes | |
-| 1.25.4 | 3 | Calico v3.24.0<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.8.4<br>etcd v3.5.6-5 | Mariner 2.0 (2023-06-18) | No breaking changes | |
-| 1.25.4 | 4 | Calico v3.24.0<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.8.4<br>etcd v3.5.6-5 | Mariner 2.0 (2023-06-18) | No breaking changes | |
-| 1.25.6 | 1 | Calico v3.24.0<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.8.6<br>etcd v3.5.6-5 | Mariner 2.0 (2023-06-18) | No breaking changes | |
-| 1.26.3 | 1 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.8.6<br>etcd v3.5.6-5 | Mariner 2.0 (2023-06-18) | No breaking changes | |
-| 1.27.1 | 1 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5 | Mariner 2.0 (2023-09-21) | Cgroupv2 | Steps to disable cgroupv2 can be found [here](./howto-disable-cgroupsv2.md) |
+| 1.25.4 | 1 | Calico v3.24.0<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.8.4<br>etcd v3.5.6-5<br>sriov-dp v3.5.1 | Mariner 2.0 (2023-05-04) | No breaking changes | |
+| 1.25.4 | 2 | Calico v3.24.0<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.8.4<br>etcd v3.5.6-5<br>sriov-dp v3.5.1 | Mariner 2.0 (2023-06-18) | No breaking changes | |
+| 1.25.4 | 3 | Calico v3.24.0<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.8.4<br>etcd v3.5.6-5<br>sriov-dp v3.5.1 | Mariner 2.0 (2023-06-18) | No breaking changes | |
+| 1.25.4 | 4 | Calico v3.24.0<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.8.4<br>etcd v3.5.6-5<br>sriov-dp v3.5.1 | Mariner 2.0 (2023-09-21) | No breaking changes | |
+| 1.25.6 | 1 | Calico v3.24.0<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.8.6<br>etcd v3.5.6-5<br>sriov-dp v3.5.1 | Mariner 2.0 (2023-09-21) | No breaking changes | |
+| 1.26.3 | 1 | Calico v3.24.0<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.8.6<br>etcd v3.5.6-5<br>sriov-dp v3.5.1 | Mariner 2.0 (2023-09-21) | No breaking changes | |
+| 1.27.1 | 1 | Calico v3.24.0<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.5.1 | Mariner 2.0 (2023-09-21) | Cgroupv2 | Steps to disable cgroupv2 can be found [here](./howto-disable-cgroupsv2.md) |
## Upgrading Kubernetes versions
postgresql How To Autovacuum Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-autovacuum-tuning.md
Title: Autovacuum Tuning description: Troubleshooting guide for autovacuum in Azure Database for PostgreSQL - Flexible Server- ++ Last updated : 10/26/2023 Previously updated : 08/03/2022 # Autovacuum Tuning in Azure Database for PostgreSQL - Flexible Server
-This article provides an overview of the autovacuum feature for [Azure Database for PostgreSQL - Flexible Server](overview.md).
-
-## What is autovacuum
+This article provides an overview of the autovacuum feature for [Azure Database for PostgreSQL - Flexible Server](overview.md) and of the troubleshooting guides that are available to monitor database bloat and autovacuum blockers, and to show how far the database is from an emergency or wraparound situation.
-Internal data consistency in PostgreSQL is based on the Multi-Version Concurrency Control (MVCC) mechanism, which allows the database engine to maintain multiple versions of a row and provides greater concurrency with minimal blocking between the different processes.
+## What is autovacuum
-PostgreSQL databases need appropriate maintenance. For example, when a row is deleted, it isn't removed physically. Instead, the row is marked as "dead". Similarly for updates, the row is marked as "dead" and a new version of the row is inserted. These operations leave behind dead records, called dead tuples, even after all the transactions that might see those versions finish. Unless cleaned up, dead tuples remain, consuming disk space and bloating tables and indexes which result in slow query performance.
+Internal data consistency in PostgreSQL is based on the Multi-Version Concurrency Control (MVCC) mechanism, which allows the database engine to maintain multiple versions of a row and provides greater concurrency with minimal blocking between the different processes.
-PostgreSQL uses a process called autovacuum to automatically clean up dead tuples.
+PostgreSQL databases need appropriate maintenance. For example, when a row is deleted, it isn't removed physically. Instead, the row is marked as "dead". Similarly for updates, the row is marked as "dead" and a new version of the row is inserted. These operations leave behind dead records, called dead tuples, even after all the transactions that might see those versions finish. Unless cleaned up, dead tuples remain, consuming disk space and bloating tables and indexes which result in slow query performance.
+PostgreSQL uses a process called autovacuum to automatically clean up dead tuples.
## Autovacuum internals
Autovacuum reads pages looking for dead tuples, and if none are found, autovacuu
- `vacuum_cost_page_miss`: Cost of fetching a page that isn't in shared buffers. The default value is set to 10. - `vacuum_cost_page_dirty`: Cost of writing to a page when dead tuples are found in it. The default value is set to 20.
-The amount of work autovacuum does depends on two parameters:
+The amount of work autovacuum does depends on two parameters:
- `autovacuum_vacuum_cost_limit` is the amount of work autovacuum does in one go. - `autovacuum_vacuum_cost_delay` number of milliseconds that autovacuum is asleep after it has reached the cost limit specified by the `autovacuum_vacuum_cost_limit` parameter. - In Postgres versions 9.6, 10 and 11 the default for `autovacuum_vacuum_cost_limit` is 200 and `autovacuum_vacuum_cost_delay` is 20 milliseconds. In Postgres versions 12 and above the default `autovacuum_vacuum_cost_limit` is 200 and `autovacuum_vacuum_cost_delay` is 2 milliseconds. Autovacuum wakes up 50 times (50*20 ms=1000 ms) every second. Every time it wakes up, autovacuum reads 200 pages.
-That means in one-second autovacuum can do:
+That means in one-second autovacuum can do:
- ~80 MB/Sec [ (200 pages/`vacuum_cost_page_hit`) * 50 * 8 KB per page] if all pages with dead tuples are found in shared buffers. - ~8 MB/Sec [ (200 pages/`vacuum_cost_page_miss`) * 50 * 8 KB per page] if all pages with dead tuples are read from disk. - ~4 MB/Sec [ (200 pages/`vacuum_cost_page_dirty`) * 50 * 8 KB per page] autovacuum can write up to 4 MB/sec.
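To inspect the cost settings these throughput estimates depend on, one possible query against the standard `pg_settings` view is:
```postgresql
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('autovacuum_vacuum_cost_limit', 'autovacuum_vacuum_cost_delay',
               'vacuum_cost_page_hit', 'vacuum_cost_page_miss', 'vacuum_cost_page_dirty');
```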
+## Monitor autovacuum
-
-## Monitoring autovacuum
-
-Use the following queries to monitor autovacuum:
+Use the following queries to monitor autovacuum:
```postgresql select schemaname,relname,n_dead_tup,n_live_tup,round(n_dead_tup::float/n_live_tup::float*100) dead_pct,autovacuum_count,last_vacuum,last_autovacuum,last_autoanalyze,last_analyze from pg_stat_all_tables where n_live_tup >0; ```
-ΓÇ»
The following columns help determine if autovacuum is catching up to table activity: - - **Dead_pct**: percentage of dead tuples when compared to live tuples.-- **Last_autovacuum**: The date of the last time the table was autovacuumed. -- **Last_autoanalyze**: The date of the last time the table was automatically analyzed. --
-## When does PostgreSQL trigger autovacuum
-
-An autovacuum action (either *ANALYZE* or *VACUUM*) triggers when the number of dead tuples exceeds a particular number that is dependent on two factors: the total count of rows in a table, plus a fixed threshold. *ANALYZE*, by default, triggers when 10% of the table plus 50 rows changes, while *VACUUM* triggers when 20% of the table plus 50 rows changes. Since the *VACUUM* threshold is twice as high as the *ANALYZE* threshold, *ANALYZE* gets triggered much earlier than *VACUUM*.
+- **Last_autovacuum**: The date of the last time the table was autovacuumed.
+- **Last_autoanalyze**: The date of the last time the table was automatically analyzed.
-The exact equations for each action are:
+## When does PostgreSQL trigger autovacuum
-- **Autoanalyze** = autovacuum_analyze_scale_factor * tuples + autovacuum_analyze_threshold -- **Autovacuum** = autovacuum_vacuum_scale_factor * tuples + autovacuum_vacuum_threshold
+An autovacuum action (either *ANALYZE* or *VACUUM*) triggers when the number of dead tuples exceeds a particular number that is dependent on two factors: the total count of rows in a table, plus a fixed threshold. *ANALYZE*, by default, triggers when 10% of the table plus 50 rows changes, while *VACUUM* triggers when 20% of the table plus 50 rows changes. Since the *VACUUM* threshold is twice as high as the *ANALYZE* threshold, *ANALYZE* gets triggered earlier than *VACUUM*.
+The exact equations for each action are:
-For example, analyze triggers after 60 rows change on a table that contains 100 rows, and vacuum triggers when 70 rows change on the table, using the following equations:
+- **Autoanalyze** = autovacuum_analyze_scale_factor * tuples + autovacuum_analyze_threshold
+- **Autovacuum** = autovacuum_vacuum_scale_factor * tuples + autovacuum_vacuum_threshold
-`Autoanalyze = 0.1 * 100 + 50 = 60`
-`Autovacuum = 0.2 * 100 + 50 = 70`
+For example, analyze triggers after 60 rows change on a table that contains 100 rows, and vacuum triggers when 70 rows change on the table, using the following equations:
+`Autoanalyze = 0.1 * 100 + 50 = 60`
+`Autovacuum = 0.2 * 100 + 50 = 70`
-Use the following query to list the tables in a database and identify the tables that qualify for the autovacuum process:
-
+Use the following query to list the tables in a database and identify the tables that qualify for the autovacuum process:
```postgresql SELECT * ,n_dead_tup > av_threshold AS av_needed
- ,CASE
+ ,CASE
WHEN reltuples > 0 THEN round(100.0 * n_dead_tup / (reltuples)) ELSE 0
Use the following query to list the tables in a database and identify the tables
) AND N.nspname !~ '^pg_toast' ) AS av
- ORDER BY av_needed DESC ,n_dead_tup DESC;
+ ORDER BY av_needed DESC ,n_dead_tup DESC;
```
-> [!NOTE]
-> The query doesn't take into consideration that autovacuum can be configured on a per-table basis using the "alter table" DDL command. 
-
+> [!NOTE]
+> The query doesn't take into consideration that autovacuum can be configured on a per-table basis using the "alter table" DDL command.
## Common autovacuum problems
-Review the possible common problems with the autovacuum process.
+Review the possible common problems with the autovacuum process.
### Not keeping up with busy server
-The autovacuum process estimates the cost of every I/O operation, accumulates a total for each operation it performs and pauses once the upper limit of the cost is reached. `autovacuum_vacuum_cost_delay` and `autovacuum_vacuum_cost_limit` are the two server parameters that are used in the process.
-
+The autovacuum process estimates the cost of every I/O operation, accumulates a total for each operation it performs and pauses once the upper limit of the cost is reached. `autovacuum_vacuum_cost_delay` and `autovacuum_vacuum_cost_limit` are the two server parameters that are used in the process.
By default, `autovacuum_vacuum_cost_limit` is set to `-1`, meaning the autovacuum cost limit is the same value as the parameter `vacuum_cost_limit`, which defaults to 200. `vacuum_cost_limit` is the cost of a manual vacuum. If `autovacuum_vacuum_cost_limit` is set to a value greater than `-1`, that value is used instead of `vacuum_cost_limit`.
-In case the autovacuum isn't keeping up, the following parameters may be changed:
+In case the autovacuum isn't keeping up, the following parameters might be changed:
-|Parameter |Description |
-|||
-|`autovacuum_vacuum_scale_factor`| Default: `0.2`, range: `0.05 - 0.1`. The scale factor is workload-specific and should be set depending on the amount of data in the tables. Before changing the value, investigate the workload and individual table volumes. |
-|`autovacuum_vacuum_cost_limit`|Default: `200`. Cost limit may be increased. CPU and I/O utilization on the database should be monitored before and after making changes. |
-|`autovacuum_vacuum_cost_delay` | **Postgres Versions 9.6,10,11** - Default: `20 ms`. The parameter may be decreased to `2-10 ms`. </br> **Postgres Versions 12 and above** - Default: `2 ms`. |
+| Parameter | Description |
+| | |
+| `autovacuum_vacuum_scale_factor` | Default: `0.2`, range: `0.05 - 0.1`. The scale factor is workload-specific and should be set depending on the amount of data in the tables. Before changing the value, investigate the workload and individual table volumes. |
+| `autovacuum_vacuum_cost_limit` | Default: `200`. Cost limit might be increased. CPU and I/O utilization on the database should be monitored before and after making changes. |
+| `autovacuum_vacuum_cost_delay` | **Postgres Versions 9.6,10,11** - Default: `20 ms`. The parameter might be decreased to `2-10 ms`.<br />**Postgres Versions 12 and above** - Default: `2 ms`. |
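If any of these parameters need to be changed on a Flexible Server instance, one option is the Azure CLI; this is a sketch where the resource group, server name, and value are placeholders:
```azurecli
az postgres flexible-server parameter set --resource-group "ResourceGroupName" --server-name "ServerName" --name autovacuum_vacuum_cost_limit --value 400
```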
-> [!NOTE]
+> [!NOTE]
> The `autovacuum_vacuum_cost_limit` value is distributed proportionally among the running autovacuum workers, so that if there is more than one, the sum of the limits for each worker doesn't exceed the value of the `autovacuum_vacuum_cost_limit` parameter ### Autovacuum constantly running
-Continuously running autovacuum may affect CPU and IO utilization on the server. The following might be possible reasons:
+Continuously running autovacuum might affect CPU and IO utilization on the server. The following might be possible reasons:
#### `maintenance_work_mem` The autovacuum daemon uses `autovacuum_work_mem`, which is set to `-1` by default, meaning `autovacuum_work_mem` has the same value as the parameter `maintenance_work_mem`. This document assumes `autovacuum_work_mem` is set to `-1` and `maintenance_work_mem` is used by the autovacuum daemon.
-If `maintenance_work_mem` is low, it may be increased to up to 2 GB on Flexible Server. A general rule of thumb is to allocate 50 MB to `maintenance_work_mem` for every 1 GB of RAM. 
-
+If `maintenance_work_mem` is low, it might be increased to up to 2 GB on Flexible Server. A general rule of thumb is to allocate 50 MB to `maintenance_work_mem` for every 1 GB of RAM.
#### Large number of databases
-Autovacuum tries to start a worker on each database every `autovacuum_naptime` seconds.
-
-For example, if a server has 60 databases and `autovacuum_naptime` is set to 60 seconds, then the autovacuum worker starts every second [autovacuum_naptime/Number of DBs].
+Autovacuum tries to start a worker on each database every `autovacuum_naptime` seconds.
-It's a good idea to increase `autovacuum_naptime` if there are more databases in a cluster. At the same time, the autovacuum process can be made more aggressive by increasing the `autovacuum_cost_limit` and decreasing the `autovacuum_cost_delay` parameters and increasing the `autovacuum_max_workers` from the default of 3 to 4 or 5.
+For example, if a server has 60 databases and `autovacuum_naptime` is set to 60 seconds, then the autovacuum worker starts every second [autovacuum_naptime/Number of DBs].
+It's a good idea to increase `autovacuum_naptime` if there are more databases in a cluster. At the same time, the autovacuum process can be made more aggressive by increasing the `autovacuum_cost_limit` and decreasing the `autovacuum_cost_delay` parameters and increasing the `autovacuum_max_workers` from the default of 3 to 4 or 5.
-### Out of memory errors
-
-Overly aggressive `maintenance_work_mem` values could periodically cause out-of-memory errors in the system. It's important to understand available RAM on the server before any change to the `maintenance_work_mem` parameter is made.
+### Out of memory errors
+Overly aggressive `maintenance_work_mem` values could periodically cause out-of-memory errors in the system. It's important to understand available RAM on the server before any change to the `maintenance_work_mem` parameter is made.
### Autovacuum is too disruptive
If autovacuum is consuming a lot of resources, the following can be done:
#### Autovacuum parameters
-Evaluate the parameters `autovacuum_vacuum_cost_delay`, `autovacuum_vacuum_cost_limit`, `autovacuum_max_workers`. Improperly setting autovacuum parameters may lead to scenarios where autovacuum becomes too disruptive.
+Evaluate the parameters `autovacuum_vacuum_cost_delay`, `autovacuum_vacuum_cost_limit`, `autovacuum_max_workers`. Improperly setting autovacuum parameters might lead to scenarios where autovacuum becomes too disruptive.
-If autovacuum is too disruptive, consider the following:
+If autovacuum is too disruptive, consider the following:
-- Increase `autovacuum_vacuum_cost_delay` and reduce `autovacuum_vacuum_cost_limit` if set higher than the default of 200. -- Reduce the number of `autovacuum_max_workers` if it's set higher than the default of 3.
+- Increase `autovacuum_vacuum_cost_delay` and reduce `autovacuum_vacuum_cost_limit` if set higher than the default of 200.
+- Reduce the number of `autovacuum_max_workers` if it's set higher than the default of 3.
-#### Too many autovacuum workersΓÇ»
+#### Too many autovacuum workers
-Increasing the number of autovacuum workers will not necessarily increase the speed of vacuum. Having a high number of autovacuum workers isn't recommended.
+Increasing the number of autovacuum workers won't necessarily increase the speed of vacuum. Having a high number of autovacuum workers isn't recommended.
-Increasing the number of autovacuum workers will result in more memory consumption, and depending on the value of `maintenance_work_mem` , could cause performance degradation.
+Increasing the number of autovacuum workers will result in more memory consumption, and depending on the value of `maintenance_work_mem` , could cause performance degradation.
Each autovacuum worker process only gets (1/autovacuum_max_workers) of the total `autovacuum_cost_limit`, so having a high number of workers causes each one to go slower. If the number of workers is increased, `autovacuum_vacuum_cost_limit` should also be increased and/or `autovacuum_vacuum_cost_delay` should be decreased to make the vacuum process faster. However, if we have changed table level `autovacuum_vacuum_cost_delay` or `autovacuum_vacuum_cost_limit` parameters then the workers running on those tables are exempted from being considered in the balancing algorithm [autovacuum_cost_limit/autovacuum_max_workers].
-ΓÇ»
+ ### Autovacuum transaction ID (TXID) wraparound protection When a database runs into transaction ID wraparound protection, an error message like the following can be observed: ```
-Database isn't accepting commands to avoid wraparound data loss in database 'xx'
-Stop the postmaster and vacuum that database in single-user mode.
+Database isn't accepting commands to avoid wraparound data loss in database 'xx'
+Stop the postmaster and vacuum that database in single-user mode.
```
-> [!NOTE]
+> [!NOTE]
> This error message is a long-standing oversight. Usually, you do not need to switch to single-user mode. Instead, you can run the required VACUUM commands and perform tuning for VACUUM to run fast. While you cannot run any data manipulation language (DML), you can still run VACUUM.
+The wraparound problem occurs when the database is either not vacuumed or there are too many dead tuples that couldn't be removed by autovacuum. The reasons for this might be:
-The wraparound problem occurs when the database is either not vacuumed or there are too many dead tuples that could not be removed by autovacuum. The reasons for this might be:
-
-#### Heavy workload
+#### Heavy workload
The workload could create too many dead tuples in a brief period, which makes it difficult for autovacuum to catch up. The dead tuples in the system add up over time, degrading query performance and leading to a wraparound situation. One reason this situation arises is that autovacuum parameters aren't adequately set and autovacuum isn't keeping up with a busy server.
+#### Long-running transactions
-#### Long-running transactions
-
-Any long-running transactions in the system will not allow dead tuples to be removed while autovacuum is running. They're a blocker to the vacuum process. Removing the long running transactions frees up dead tuples for deletion when autovacuum runs.
+Any long-running transactions in the system won't allow dead tuples to be removed while autovacuum is running. They're a blocker to the vacuum process. Removing the long running transactions frees up dead tuples for deletion when autovacuum runs.
-Long-running transactions can be detected using the following query:
+Long-running transactions can be detected using the following query:
```postgresql
- SELECT pid, age(backend_xid) AS age_in_xids,
- now () - xact_start AS xact_age,
- now () - query_start AS query_age,
- state,
- query
- FROM pg_stat_activity
- WHERE state != 'idle'
- ORDER BY 2 DESC
- LIMIT 10;
+ SELECT pid, age(backend_xid) AS age_in_xids,
+ now () - xact_start AS xact_age,
+ now () - query_start AS query_age,
+ state,
+ query
+ FROM pg_stat_activity
+ WHERE state != 'idle'
+ ORDER BY 2 DESC
+ LIMIT 10;
```
-#### Prepared statements
+#### Prepared statements
-If there are prepared statements that are not committed, they would prevent dead tuples from being removed.
-The following query helps find non-committed prepared statements:
+If there are prepared statements that aren't committed, they would prevent dead tuples from being removed.
+The following query helps find noncommitted prepared statements:
```postgresql
- SELECT gid, prepared, owner, database, transaction
- FROM pg_prepared_xacts
- ORDER BY age(transaction) DESC;
+ SELECT gid, prepared, owner, database, transaction
+ FROM pg_prepared_xacts
+ ORDER BY age(transaction) DESC;
```
-Use COMMIT PREPARED or ROLLBACK PREPARED to commit or roll back these statements.
+Use COMMIT PREPARED or ROLLBACK PREPARED to commit or roll back these statements.
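For example, where `'example_gid'` is a placeholder for a `gid` value returned by the previous query:
```postgresql
ROLLBACK PREPARED 'example_gid';
```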
-#### Unused replication slots
+#### Unused replication slots
-Unused replication slots prevent autovacuum from claiming dead tuples. The following query helps identify unused replication slots:
+Unused replication slots prevent autovacuum from claiming dead tuples. The following query helps identify unused replication slots:
```postgresql
- SELECT slot_name, slot_type, database, xmin
- FROM pg_replication_slots
- ORDER BY age(xmin) DESC;
+ SELECT slot_name, slot_type, database, xmin
+ FROM pg_replication_slots
+ ORDER BY age(xmin) DESC;
```
-Use `pg_drop_replication_slot()` to delete unused replication slots.
+Use `pg_drop_replication_slot()` to delete unused replication slots.
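For example, where `'unused_slot_name'` is a placeholder for a slot name returned by the previous query:
```postgresql
SELECT pg_drop_replication_slot('unused_slot_name');
```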
-When the database runs into transaction ID wraparound protection, check for any blockers as mentioned previously, and remove those manually for autovacuum to continue and complete. You can also increase the speed of autovacuum by setting `autovacuum_cost_delay` to 0 and increasing the `autovacuum_cost_limit` to a value much greater than 200. However, changes to these parameters will not be applied to existing autovacuum workers. Either restart the database or kill existing workers manually to apply parameter changes.
+When the database runs into transaction ID wraparound protection, check for any blockers as mentioned previously, and remove those manually for autovacuum to continue and complete. You can also increase the speed of autovacuum by setting `autovacuum_cost_delay` to 0 and increasing the `autovacuum_cost_limit` to a value greater than 200. However, changes to these parameters won't be applied to existing autovacuum workers. Either restart the database or kill existing workers manually to apply parameter changes.
+### Table-specific requirements
-### Table-specific requirementsΓÇ»
+Autovacuum parameters might be set for individual tables. It's especially important for small and big tables. For example, for a small table that contains only 100 rows, autovacuum triggers a VACUUM operation when 70 rows change (as calculated previously). If this table is frequently updated, you might see hundreds of autovacuum operations a day. This prevents autovacuum from maintaining other tables on which the percentage of changes isn't as big. Alternatively, a table containing a billion rows needs to change 200 million rows to trigger autovacuum operations. Setting autovacuum parameters appropriately prevents such scenarios.
-Autovacuum parameters may be set for individual tables. It's especially important for small and big tables. For example, for a small table that contains only 100 rows, autovacuum triggers VACUUM operation when 70 rows change (as calculated previously). If this table is frequently updated, you might see hundreds of autovacuum operations a day. This will prevent autovacuum from maintaining other tables on which the percentage of changes aren't as big. Alternatively, a table containing a billion rows needs to change 200 million rows to trigger autovacuum operations. Setting autovacuum parameters appropriately prevents such scenarios.
-
-To set autovacuum setting per table, change the server parameters as the following examples:
+To set autovacuum setting per table, change the server parameters as the following examples:
```postgresql ALTER TABLE <table name> SET (autovacuum_analyze_scale_factor = xx); ALTER TABLE <table name> SET (autovacuum_analyze_threshold = xx);
- ALTER TABLE <table name> SET (autovacuum_vacuum_scale_factor =xx); 
- ALTER TABLE <table name> SET (autovacuum_vacuum_threshold = xx); 
- ALTER TABLE <table name> SET (autovacuum_vacuum_cost_delay = xx); 
- ALTER TABLE <table name> SET (autovacuum_vacuum_cost_limit = xx); 
+ ALTER TABLE <table name> SET (autovacuum_vacuum_scale_factor =xx);
+ ALTER TABLE <table name> SET (autovacuum_vacuum_threshold = xx);
+ ALTER TABLE <table name> SET (autovacuum_vacuum_cost_delay = xx);
+ ALTER TABLE <table name> SET (autovacuum_vacuum_cost_limit = xx);
```
-### Insert-only workloadsΓÇ»
+### Insert-only workloads
-In versions of PostgreSQL prior to 13, autovacuum will not run on tables with an insert-only workload, because if there are no updates or deletes, there are no dead tuples and no free space that needs to be reclaimed. However, autoanalyze will run for insert-only workloads since there is new data. The disadvantages of this are:
+In versions of PostgreSQL prior to 13, autovacuum won't run on tables with an insert-only workload, because if there are no updates or deletes, there are no dead tuples and no free space that needs to be reclaimed. However, autoanalyze will run for insert-only workloads since there's new data. The disadvantages of this are:
- The visibility map of the tables isn't updated, and thus query performance, especially where there are Index Only Scans, starts to suffer over time. - The database can run into transaction ID wraparound protection.-- Hint bits will not be set.
+- Hint bits won't be set.
-#### SolutionsΓÇ»
+#### Solutions
-##### Postgres versions prior to 13 
+##### Postgres versions prior to 13
-Using the **pg_cron** extension, a cron job can be set up to schedule a periodic vacuum analyze on the table. The frequency of the cron job depends on the workload.  
+Using the **pg_cron** extension, a cron job can be set up to schedule a periodic vacuum analyze on the table. The frequency of the cron job depends on the workload.
For step-by-step guidance using pg_cron, review [Extensions](./concepts-extensions.md).
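As a sketch, assuming pg_cron is already enabled on the server and using a placeholder table name, a nightly vacuum analyze could be scheduled like this:
```postgresql
SELECT cron.schedule('0 2 * * *', 'VACUUM ANALYZE myschema.mytable;');
```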
+##### Postgres 13 and higher versions
+
+Autovacuum will run on tables with an insert-only workload. Two new server parameters `autovacuum_vacuum_insert_threshold` and  `autovacuum_vacuum_insert_scale_factor` help control when autovacuum can be triggered on insert-only tables.
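These settings also have table-level storage parameter equivalents; a minimal sketch with a placeholder table name and illustrative values is:
```postgresql
ALTER TABLE <table name> SET (autovacuum_vacuum_insert_scale_factor = 0.05);
ALTER TABLE <table name> SET (autovacuum_vacuum_insert_threshold = 1000);
```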
-##### Postgres 13 and higher versions
+## Troubleshooting guides
-Autovacuum will run on tables with an insert-only workload. Two new server parameters `autovacuum_vacuum_insert_threshold` and  `autovacuum_vacuum_insert_scale_factor` help control when autovacuum can be triggered on insert-only tables. 
+Using the troubleshooting guides available in the Azure Database for PostgreSQL - Flexible Server portal, you can monitor bloat at the database or individual schema level and identify potential blockers to the autovacuum process. Two troubleshooting guides are available. The first, autovacuum monitoring, is used to monitor bloat at the database or individual schema level. The second, autovacuum blockers and wraparound, helps identify potential autovacuum blockers and shows how far the databases on the server are from a wraparound or emergency situation. The troubleshooting guides also share recommendations to mitigate potential issues. To set up the troubleshooting guides, follow [setup troubleshooting guides](how-to-troubleshooting-guides.md).
-## Next steps
+## Related content
-- Troubleshoot high CPU utilization [High CPU Utilization](./how-to-high-cpu-utilization.md).-- Troubleshoot high memory utilization [High Memory Utilization](./how-to-high-memory-utilization.md).-- Configure server parameters [Server Parameters](./howto-configure-server-parameters-using-portal.md).
+- [High CPU Utilization](how-to-high-cpu-utilization.md)
+- [High Memory Utilization](how-to-high-memory-utilization.md)
+- [Server Parameters](howto-configure-server-parameters-using-portal.md)
postgresql How To High Cpu Utilization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-high-cpu-utilization.md
Title: High CPU Utilization description: Troubleshooting guide for high cpu utilization in Azure Database for PostgreSQL - Flexible Server- + Last updated : 10/26/2023 Previously updated : 08/03/2022
-# Troubleshoot high CPU utilization in Azure Database for PostgreSQL - Flexible Server
+# Troubleshoot high CPU utilization in Azure Database for PostgreSQL - Flexible Server
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-This article shows you how to quickly identify the root cause of high CPU utilization, and possible remedial actions to control CPU utilization when using [Azure Database for PostgreSQL - Flexible Server](overview.md).
+This article shows you how to quickly identify the root cause of high CPU utilization, and possible remedial actions to control CPU utilization when using [Azure Database for PostgreSQL - Flexible Server](overview.md).
+
+In this article, you'll learn:
-In this article, you'll learn:
+- About troubleshooting guides that help identify root causes and provide recommendations to mitigate them.
+- About tools to identify high CPU utilization such as Azure Metrics, Query Store, and pg_stat_statements.
+- How to identify root causes, such as long running queries and total connections.
+- How to resolve high CPU utilization by using Explain Analyze, Connection Pooling, and Vacuuming tables.
-- About tools to identify high CPU utilization such as Azure Metrics, Query Store, and pg_stat_statements. -- How to identify root causes, such as long running queries and total connections. -- How to resolve high CPU utilization by using Explain Analyze, Connection Pooling, and Vacuuming tables.
+## Troubleshooting guides
-## Tools to identify high CPU utilization
+Using the troubleshooting guides available in the Azure Database for PostgreSQL - Flexible Server portal, you can find the probable root cause of a high CPU scenario and recommendations to mitigate it. To set up the troubleshooting guides, follow [setup troubleshooting guides](how-to-troubleshooting-guides.md).
-Consider these tools to identify high CPU utilization.
+## Tools to identify high CPU utilization
-### Azure Metrics
+Consider these tools to identify high CPU utilization.
+
+### Azure Metrics
Azure Metrics is a good starting point to check CPU utilization for a specific date and time period. Metrics show the time windows during which CPU utilization is high. Compare the graphs of Write IOPs, Read IOPs, Read Throughput, and Write Throughput with CPU utilization to find out when the workload caused high CPU. For proactive monitoring, you can configure alerts on the metrics. For step-by-step guidance, see [Azure Metrics](./howto-alert-on-metrics.md).
Query Store automatically captures the history of queries and runtime statistics
The pg_stat_statements extension helps identify queries that consume time on the server.
-#### Mean or average execution time
+#### Mean or average execution time
##### [Postgres v13 & above](#tab/postgres-13)
-For Postgres versions 13 and above, use the following statement to view the top five SQL statements by mean or average execution time:
+For Postgres versions 13 and above, use the following statement to view the top five SQL statements by mean or average execution time:
```postgresql
-SELECT userid::regrole, dbid, query, mean_exec_time
-FROM pg_stat_statements
-ORDER BY mean_exec_time
-DESC LIMIT 5;
+SELECT userid::regrole, dbid, query, mean_exec_time
+FROM pg_stat_statements
+ORDER BY mean_exec_time
+DESC LIMIT 5;
``` ##### [Postgres v9.6-12](#tab/postgres9-12)
-For Postgres versions 9.6, 10, 11, and 12, use the following statement to view the top five SQL statements by mean or average execution time:
+For Postgres versions 9.6, 10, 11, and 12, use the following statement to view the top five SQL statements by mean or average execution time:
```postgresql
-SELECT userid::regrole, dbid, query
-FROM pg_stat_statements
-ORDER BY mean_time
-DESC LIMIT 5;
+SELECT userid::regrole, dbid, query
+FROM pg_stat_statements
+ORDER BY mean_time
+DESC LIMIT 5;
``` #### Total execution time
-Execute the following statements to view the top five SQL statements by total execution time.
+Execute the following statements to view the top five SQL statements by total execution time.
##### [Postgres v13 & above](#tab/postgres-13)
-For Postgres versions 13 and above, use the following statement to view the top five SQL statements by total execution time:
+For Postgres versions 13 and above, use the following statement to view the top five SQL statements by total execution time:
```postgresql
-SELECT userid::regrole, dbid, query
-FROM pg_stat_statements
-ORDER BY total_exec_time
-DESC LIMIT 5;
+SELECT userid::regrole, dbid, query
+FROM pg_stat_statements
+ORDER BY total_exec_time
+DESC LIMIT 5;
``` ##### [Postgres v9.6-12](#tab/postgres9-12)
-For Postgres versions 9.6, 10, 11, and 12, use the following statement to view the top five SQL statements by total execution time:
+For Postgres versions 9.6, 10, 11, and 12, use the following statement to view the top five SQL statements by total execution time:
```postgresql
-SELECT userid::regrole, dbid, query,
-FROM pg_stat_statements
-ORDER BY total_time
-DESC LIMIT 5;
+SELECT userid::regrole, dbid, query,
+FROM pg_stat_statements
+ORDER BY total_time
+DESC LIMIT 5;
```
+## Identify root causes
-## Identify root causes
-
-If CPU consumption levels are high in general, the following could be possible root causes:
+If CPU consumption levels are high in general, the following could be possible root causes:
-
-### Long-running transactions
+### Long-running transactions
Long-running transactions can consume CPU resources that can lead to high CPU utilization.
-The following query helps identify connections running for the longest time:
+The following query helps identify connections running for the longest time:
```postgresql
-SELECT pid, usename, datname, query, now() - xact_start as duration
-FROM pg_stat_activity
-WHERE pid <> pg_backend_pid() and state IN ('idle in transaction', 'active')
-ORDER BY duration DESC;
+SELECT pid, usename, datname, query, now() - xact_start as duration
+FROM pg_stat_activity
+WHERE pid <> pg_backend_pid() and state IN ('idle in transaction', 'active')
+ORDER BY duration DESC;
```
-### Total number of connections and number connections by state
+### Total number of connections and number connections by state
A large number of connections to the database is another issue that might lead to increased CPU and memory utilization.
-The following query gives information about the number of connections by state:
+The following query gives information about the number of connections by state:
```postgresql
-SELECT state, count(*)
-FROM pg_stat_activity
-WHERE pid <> pg_backend_pid()
-GROUP BY 1 ORDER BY 1;
+SELECT state, count(*)
+FROM pg_stat_activity
+WHERE pid <> pg_backend_pid()
+GROUP BY 1 ORDER BY 1;
```
-
-## Resolve high CPU utilization
-Use Explain Analyze, PG Bouncer, connection pooling and terminate long running transactions to resolve high CPU utilization.
+## Resolve high CPU utilization
-### Using Explain Analyze
+Use Explain Analyze, PgBouncer connection pooling, and termination of long-running transactions to resolve high CPU utilization.
-Once you know the query that's running for a long time, use **EXPLAIN** to further investigate the query and tune it.
-For more information about the **EXPLAIN** command, review [Explain Plan](https://www.postgresql.org/docs/current/sql-explain.html).
+### Use Explain Analyze
+Once you know the query that's running for a long time, use **EXPLAIN** to further investigate the query and tune it.
+For more information about the **EXPLAIN** command, review [Explain Plan](https://www.postgresql.org/docs/current/sql-explain.html).
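A minimal sketch, where the table and filter are placeholders rather than objects from this article:
```postgresql
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM my_table WHERE my_column = 'some value';
```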
-### PGBouncer and connection pooling
+### PGBouncer and connection pooling
In situations where there are lots of idle connections, or many connections that are consuming CPU, consider using a connection pooler like PgBouncer.
-For more details about PgBouncer, review:
+For more details about PgBouncer, review:
[Connection Pooler](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/not-all-postgres-connection-pooling-is-equal/ba-p/825717)
For more details about PgBouncer, review:
Azure Database for Flexible Server offers PgBouncer as a built-in connection pooling solution. For more information, see [PgBouncer](./concepts-pgbouncer.md)
-### Terminating long running transactions
+### Terminate long running transactions
You could consider killing a long running transaction as an option.
-To terminate a session's PID, you'll need to detect the PID using the following query:
+To terminate a session's PID, you'll need to detect the PID using the following query:
```postgresql
-SELECT pid, usename, datname, query, now() - xact_start as duration
-FROM pg_stat_activity
-WHERE pid <> pg_backend_pid() and state IN ('idle in transaction', 'active')
-ORDER BY duration DESC;
+SELECT pid, usename, datname, query, now() - xact_start as duration
+FROM pg_stat_activity
+WHERE pid <> pg_backend_pid() and state IN ('idle in transaction', 'active')
+ORDER BY duration DESC;
```
-You can also filter by other properties like `usename` (username), `datname` (database name) etc.
+You can also filter by other properties like `usename` (username), `datname` (database name) etc.
Once you have the session's PID, you can terminate using the following query:
Once you have the session's PID, you can terminate using the following query:
SELECT pg_terminate_backend(pid); ```
-### Monitoring vacuum and table stats
+### Monitor vacuum and table stats
-Keeping table statistics up to date helps improve query performance. Monitor whether regular autovacuuming is being carried out.
+Keeping table statistics up to date helps improve query performance. Monitor whether regular autovacuuming is being carried out.
-The following query helps to identify the tables that need vacuuming:
+The following query helps to identify the tables that need vacuuming:
```postgresql
-select schemaname,relname,n_dead_tup,n_live_tup,last_vacuum,last_analyze,last_autovacuum,last_autoanalyze
-from pg_stat_all_tables where n_live_tup > 0;  
+select schemaname,relname,n_dead_tup,n_live_tup,last_vacuum,last_analyze,last_autovacuum,last_autoanalyze
+from pg_stat_all_tables where n_live_tup > 0;
``` `last_autovacuum` and `last_autoanalyze` columns give the date and time when the table was last autovacuumed or analyzed. If the tables aren't being vacuumed regularly, take steps to tune autovacuum. For more information about autovacuum troubleshooting and tuning, see [Autovacuum Troubleshooting](./how-to-autovacuum-tuning.md). - A short-term solution would be to do a manual vacuum analyze of the tables where slow queries are seen: ```postgresql vacuum analyze <table_name>; ```
-## Next steps
+## Related content
-- Troubleshoot and tune Autovacuum [Autovacuum Tuning](./how-to-high-cpu-utilization.md).-- Troubleshoot High Memory Utilization [High Memory Utilization](./how-to-high-memory-utilization.md).
+- [Autovacuum Tuning](how-to-autovacuum-tuning.md)
+- [High Memory Utilization](how-to-high-memory-utilization.md)
+- [Identify Slow Queries](how-to-identify-slow-queries.md)
postgresql How To High Io Utilization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-high-io-utilization.md
Title: High IOPS utilization for Azure Database for PostgreSQL - Flexible Server
-description: This article is a troubleshooting guide for high IOPS utilization in Azure Database for PostgreSQL - Flexible Server
+description: This article is a troubleshooting guide for high IOPS utilization in Azure Database for PostgreSQL - Flexible Server
+ Last updated : 10/26/2023 Previously updated : 08/16/2022-+
+ - template-how-to
# Troubleshoot high IOPS utilization for Azure Database for PostgreSQL - Flexible Server
-This article shows you how to quickly identify the root cause of high IOPS (input/output operations per second) utilization and provides remedial actions to control IOPS utilization when you're using [Azure Database for PostgreSQL - Flexible Server](overview.md).
+This article shows you how to quickly identify the root cause of high IOPS (input/output operations per second) utilization and provides remedial actions to control IOPS utilization when you're using [Azure Database for PostgreSQL - Flexible Server](overview.md).
In this article, you learn how to:
+- About troubleshooting guides that help identify root causes and provide recommendations to mitigate them.
- Use tools to identify high input/output (I/O) utilization, such as Azure Metrics, Query Store, and pg_stat_statements. - Identify root causes, such as long-running queries, checkpoint timings, a disruptive autovacuum daemon process, and high storage utilization. - Resolve high I/O utilization by using Explain Analyze, tune checkpoint-related server parameters, and tune the autovacuum daemon.
+## Troubleshooting guides
+
+Using the troubleshooting guides available in the Azure Database for PostgreSQL - Flexible Server portal, you can find the probable root cause of a high IOPS utilization scenario and recommendations to mitigate it. To set up the troubleshooting guides, follow [setup troubleshooting guides](how-to-troubleshooting-guides.md).
+ ## Tools to identify high I/O utilization Consider the following tools to identify high I/O utilization.
The Query Store feature automatically captures the history of queries and runtim
Use the following statement to view the top five SQL statements that consume I/O: ```sql
-select * from query_store.qs_view qv where is_system_query is FALSE
+select * from query_store.qs_view qv where is_system_query is FALSE
order by blk_read_time + blk_write_time desc limit 5; ```
Use the following statement to view the top five SQL statements that consume I/O
```sql SELECT userid::regrole, dbid, query
-FROM pg_stat_statements
-ORDER BY blk_read_time + blk_write_time desc
-LIMIT 5;
+FROM pg_stat_statements
+ORDER BY blk_read_time + blk_write_time desc
+LIMIT 5;
```
-> [!NOTE]
-> When using query store or pg_stat_statements for columns blk_read_time and blk_write_time to be populated, you need to enable server parameter `track_io_timing`. For more information about `track_io_timing`, review [Server parameters](https://www.postgresql.org/docs/current/runtime-config-statistics.html).
+> [!NOTE]
+> When using query store or pg_stat_statements for columns blk_read_time and blk_write_time to be populated, you need to enable server parameter `track_io_timing`. For more information about `track_io_timing`, review [Server parameters](https://www.postgresql.org/docs/current/runtime-config-statistics.html).
-## Identify root causes
+## Identify root causes
-If I/O consumption levels are high in general, the following could be the root causes:
+If I/O consumption levels are high in general, the following could be the root causes:
-### Long-running transactions
+### Long-running transactions
Long-running transactions can consume I/O, which can lead to high I/O utilization.
-The following query helps identify connections that are running for the longest time:
+The following query helps identify connections that are running for the longest time:
```sql
-SELECT pid, usename, datname, query, now() - xact_start as duration
-FROM pg_stat_activity
-WHERE pid <> pg_backend_pid() and state IN ('idle in transaction', 'active')
-ORDER BY duration DESC;
+SELECT pid, usename, datname, query, now() - xact_start as duration
+FROM pg_stat_activity
+WHERE pid <> pg_backend_pid() and state IN ('idle in transaction', 'active')
+ORDER BY duration DESC;
``` ### Checkpoint timings High I/O can also be seen in scenarios where a checkpoint is happening too frequently. One way to identify this is by checking the PostgreSQL log file for the following log text: "LOG: checkpoints are occurring too frequently."
-You could also investigate by using an approach where periodic snapshots of `pg_stat_bgwriter` with a time stamp are saved. By using the saved snapshots, you can calculate the average checkpoint interval, number of checkpoints requested, and number of checkpoints timed.
+You could also investigate by using an approach where periodic snapshots of `pg_stat_bgwriter` with a time stamp are saved. By using the saved snapshots, you can calculate the average checkpoint interval, number of checkpoints requested, and number of checkpoints timed.
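One possible shape for such a snapshot, using columns exposed by the standard `pg_stat_bgwriter` view; persist the output with its timestamp and compare successive runs:
```sql
SELECT now() AS snapshot_time, checkpoints_timed, checkpoints_req, checkpoint_write_time, checkpoint_sync_time
FROM pg_stat_bgwriter;
```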
### Disruptive autovacuum daemon process Run the following query to monitor autovacuum: ```sql
-SELECT schemaname, relname, n_dead_tup, n_live_tup, autovacuum_count, last_vacuum, last_autovacuum, last_autoanalyze, autovacuum_count, autoanalyze_count FROM pg_stat_all_tables WHERE n_live_tup > 0;
+SELECT schemaname, relname, n_dead_tup, n_live_tup, autovacuum_count, last_vacuum, last_autovacuum, last_autoanalyze, autovacuum_count, autoanalyze_count FROM pg_stat_all_tables WHERE n_live_tup > 0;
```
-The query is used to check how frequently the tables in the database are being vacuumed.
+The query is used to check how frequently the tables in the database are being vacuumed.
+
-* `last_autovacuum`: The date and time when the last autovacuum ran on the table.
-* `autovacuum_count`: The number of times the table was vacuumed.
-* `autoanalyze_count`: The number of times the table was analyzed.
+- `last_autovacuum`: The date and time when the last autovacuum ran on the table.
+- `autovacuum_count`: The number of times the table was vacuumed.
+- `autoanalyze_count`: The number of times the table was analyzed.
## Resolve high I/O utilization To resolve high I/O utilization, you can use any of the following three methods.
-### The `EXPLAIN ANALYZE` command
+### The `EXPLAIN ANALYZE` command
-After you've identified the query that's consuming high I/O, use `EXPLAIN ANALYZE` to further investigate the query and tune it. For more information about the `EXPLAIN ANALYZE` command, review the [EXPLAIN plan](https://www.postgresql.org/docs/current/sql-explain.html).
+After you've identified the query that's consuming high I/O, use `EXPLAIN ANALYZE` to further investigate the query and tune it. For more information about the `EXPLAIN ANALYZE` command, review the [EXPLAIN plan](https://www.postgresql.org/docs/current/sql-explain.html).
-### Terminate long-running transactions
+### Terminate long-running transactions
You could consider killing a long-running transaction as an option.
-To terminate a session's process ID (PID), you need to detect the PID by using the following query:
+To terminate a session's process ID (PID), you need to detect the PID by using the following query:
```sql
-SELECT pid, usename, datname, query, now() - xact_start as duration
-FROM pg_stat_activity
-WHERE pid <> pg_backend_pid() and state IN ('idle in transaction', 'active')
-ORDER BY duration DESC;
+SELECT pid, usename, datname, query, now() - xact_start as duration
+FROM pg_stat_activity
+WHERE pid <> pg_backend_pid() and state IN ('idle in transaction', 'active')
+ORDER BY duration DESC;
```
-You can also filter by other properties, such as `usename` (username) or `datname` (database name).
+You can also filter by other properties, such as `usename` (username) or `datname` (database name).
After you have the session's PID, you can terminate it by using the following query:
SELECT pg_terminate_backend(pid);
If you've observed that the checkpoint is happening too frequently, increase the `max_wal_size` server parameter until most checkpoints are time driven, instead of requested. Eventually, 90 percent or more should be time based, and the interval between two checkpoints should be close to the `checkpoint_timeout` value that's set on the server.
-* `max_wal_size`: Peak business hours are a good time to arrive at a `max_wal_size` value. To arrive at a value, do the following:
+- `max_wal_size`: Peak business hours are a good time to arrive at a `max_wal_size` value. To arrive at a value, do the following:
1. Run the following query to get the current WAL LSN, and then note the result:
If you've observed that the checkpoint is happening too frequently, increase the
1. Run the following query, which uses the two results, to check the difference, in gigabytes (GB):
- ```sql
+ ```sql
select round (pg_wal_lsn_diff ('LSN value when run second time', 'LSN value when run first time')/1024/1024/1024,2) WAL_CHANGE_GB;
- ```
+ ```
-* `checkpoint_completion_target`: A good practice would be to set the value to 0.9. As an example, a value of 0.9 for a `checkpoint_timeout` of 5 minutes indicates that the target to complete a checkpoint is 270 seconds (0.9\*300 seconds). A value of 0.9 provides a fairly consistent I/O load. An aggressive value of `checkpoint_completion_target` might result in an increased I/O load on the server.
+- `checkpoint_completion_target`: A good practice would be to set the value to 0.9. As an example, a value of 0.9 for a `checkpoint_timeout` of 5 minutes indicates that the target to complete a checkpoint is 270 seconds (0.9\*300 seconds). A value of 0.9 provides a fairly consistent I/O load. An aggressive value of `checkpoint_completion_target` might result in an increased I/O load on the server.
-* `checkpoint_timeout`: You can increase the `checkpoint_timeout` value from the default value that's set on the server. As you're increasing the value, take into consideration that increasing it would also increase the time for crash recovery.
+- `checkpoint_timeout`: You can increase the `checkpoint_timeout` value from the default value that's set on the server. As you're increasing the value, take into consideration that increasing it would also increase the time for crash recovery.
### Tune autovacuum to decrease disruptions For more information about monitoring and tuning in scenarios where autovacuum is too disruptive, review [Autovacuum tuning](./how-to-autovacuum-tuning.md).
-### Increase storage
+### Increase storage
Increasing storage helps when you're adding more IOPS to the server. For more information about storage and associated IOPS, review [Compute and storage options](./concepts-compute-storage.md).
-## Next steps
+## Related content
-- [Troubleshoot and tune autovacuum](./how-to-autovacuum-tuning.md)-- [Compute and storage options](./concepts-compute-storage.md)
-
+- [Troubleshoot and tune autovacuum](how-to-autovacuum-tuning.md)
+- [Compute and storage options](concepts-compute-storage.md)
postgresql How To High Memory Utilization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-high-memory-utilization.md
Title: High Memory Utilization description: Troubleshooting guide for high memory utilization in Azure Database for PostgreSQL - Flexible Server- + Last updated : 10/26/2023 Previously updated : 08/03/2022 # High memory utilization in Azure Database for PostgreSQL - Flexible Server
-This article introduces common scenarios and root causes that might lead to high memory utilization in [Azure Database for PostgreSQL - Flexible Server](overview.md).
+This article introduces common scenarios and root causes that might lead to high memory utilization in [Azure Database for PostgreSQL - Flexible Server](overview.md).
-In this article, you learn:
+In this article, you learn:
+- About troubleshooting guides that help identify root causes and provide recommendations to mitigate them.
- Tools to identify high memory utilization. - Reasons for high memory & remedial actions.
-## Tools to identify high memory utilization
+## Troubleshooting guides
+
+Using the troubleshooting guides available in the Azure Database for PostgreSQL - Flexible Server portal, you can find the probable root cause of a high memory scenario and recommendations to mitigate it. To set up the troubleshooting guides, follow [setup troubleshooting guides](how-to-troubleshooting-guides.md).
-Consider the following tools to identify high memory utilization.
+## Tools to identify high memory utilization
+
+Consider the following tools to identify high memory utilization.
### Azure Metrics
-Use Azure Metrics to monitor the percentage of memory in use for the definite date and time frame.
+Use Azure Metrics to monitor the percentage of memory in use for a specific date and time frame.
For proactive monitoring, configure alerts on the metrics. For step-by-step guidance, see [Azure Metrics](./howto-alert-on-metrics.md). - ### Query Store
-Query Store automatically captures the history of queries and their runtime statistics, and it retains them for your review.
+Query Store automatically captures the history of queries and their runtime statistics, and it retains them for your review.
-Query Store can correlate wait event information with query run time statistics. Use Query Store to identify queries that have high memory consumption during the period of interest.
+Query Store can correlate wait event information with query run time statistics. Use Query Store to identify queries that have high memory consumption during the period of interest.
For more information on setting up and using Query Store, review [Query Store](./concepts-query-store.md). ## Reasons and remedial actions
-Consider the following reasons and remedial actions for resolving high memory utilization.
+Consider the following reasons and remedial actions for resolving high memory utilization.
### Server parameters The following server parameters impact memory consumption and should be reviewed:
-#### Work_Mem
-
-The `work_mem` parameter specifies the amount of memory to be used by internal sort operations and hash tables before writing to temporary disk files. It isn't on a per-query basis rather, it's set based on the number of sort and hash operations.
+#### Work_Mem
+The `work_mem` parameter specifies the amount of memory to be used by internal sort operations and hash tables before writing to temporary disk files. It isn't applied on a per-query basis; rather, it's set based on the number of sort and hash operations.
-If the workload has many short-running queries with simple joins and minimal sort operations, it's advised to keep lower `work_mem`. If there are a few active queries with complex joins and sorts, then it's advised to set a higher value for work_mem.
+If the workload has many short-running queries with simple joins and minimal sort operations, it's advised to keep `work_mem` low. If there are a few active queries with complex joins and sorts, it's advised to set a higher value for `work_mem`.
-It's tough to get the value of `work_mem` right. If you notice high memory utilization or out-of-memory issues, consider decreasing `work_mem`.
+It's tough to get the value of `work_mem` right. If you notice high memory utilization or out-of-memory issues, consider decreasing `work_mem`.
A safer setting for `work_mem` is `work_mem = Total RAM / Max_Connections / 16 `
-The default value of `work_mem` = 4 MB. You can set the `work_mem` value on multiple levels including at the server level via the parameters page in the Azure portal.
+The default value of `work_mem` = 4 MB. You can set the `work_mem` value on multiple levels including at the server level via the parameters page in the Azure portal.
-A good strategy is to monitor memory consumption during peak times.
+A good strategy is to monitor memory consumption during peak times.
If disk sorts are happening during this time and there's plenty of unused memory, increase `work_mem` gradually until you're able to reach a good balance between available and used memory.
-Similarly, if the memory use looks high, reduce `work_mem`.
+Similarly, if the memory use looks high, reduce `work_mem`.
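+
+As a quick illustration (the 32 MB value below is only an assumption for the example, not a recommendation), you can check the current setting and try an override for a single session before changing it at the server level:
+
+```sql
+-- Check the value in effect for the current session
+SHOW work_mem;
+
+-- Hypothetical test: raise work_mem for this session only to see whether a heavy query stops spilling to disk
+SET work_mem = '32MB';
+```
+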
-#### Maintenance_Work_Mem
+#### Maintenance_Work_Mem
-`maintenance_work_mem` is for maintenance tasks like vacuuming, adding indexes or foreign keys. The usage of memory in this scenario is per session.
+`maintenance_work_mem` is for maintenance tasks like vacuuming, adding indexes or foreign keys. The usage of memory in this scenario is per session.
-For example, consider a scenario where there are three autovacuum workers running.
+For example, consider a scenario where there are three autovacuum workers running.
If `maintenance_work_mem` is set to 1 GB, then all sessions combined will use 3 GB of memory. A high `maintenance_work_mem` value along with multiple running sessions for vacuuming/index creation/adding foreign keys can cause high memory utilization. The maximum allowed value for the `maintenance_work_mem` server parameter in Azure Database for Flexible Server is 2 GB.
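+
+A quick way to verify the values behind this arithmetic is to query the parameters directly (standard PostgreSQL `SHOW` commands, included here only as a sanity check):
+
+```sql
+-- Per-session memory ceiling for maintenance tasks such as VACUUM and CREATE INDEX
+SHOW maintenance_work_mem;
+
+-- Upper bound on how many autovacuum workers can run, and therefore claim that memory, at the same time
+SHOW autovacuum_max_workers;
+```
+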
-#### Shared buffers
+#### Shared buffers
The `shared_buffers` parameter determines how much memory is dedicated to the server for caching data. The objective of shared buffers is to reduce DISK I/O.
-A reasonable setting for shared buffers is 25% of RAM. Setting a value of greater than 40% of RAM isn't recommended for most common workloads.
+A reasonable setting for shared buffers is 25% of RAM. Setting a value of greater than 40% of RAM isn't recommended for most common workloads.
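+
+To judge whether the current `shared_buffers` value is effective, a cache hit ratio query against the standard `pg_stat_database` view can help (an illustrative query, not part of the original guidance):
+
+```sql
+-- Rough buffer cache hit ratio per database; consistently low values can point to undersized shared buffers
+SELECT datname,
+       round(100.0 * blks_hit / NULLIF(blks_hit + blks_read, 0), 2) AS cache_hit_pct
+FROM pg_stat_database
+WHERE datname IS NOT NULL
+ORDER BY cache_hit_pct NULLS LAST;
+```
+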
-### Max connections
+### Max connections
-All new and idle connections on a Postgres database consume up to 2 MB of memory. One way to monitor connections is by using the following query:
+All new and idle connections on a Postgres database consume up to 2 MB of memory. One way to monitor connections is by using the following query:
```postgresql
select count(*) from pg_stat_activity;
```
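+
+To see where those connections come from, a breakdown by state can be more actionable than the raw count (an illustrative query; `idle` and `idle in transaction` sessions are the usual sources of wasted memory):
+
+```sql
+-- Count sessions by state to spot piles of idle or idle-in-transaction connections
+SELECT state, count(*)
+FROM pg_stat_activity
+WHERE pid <> pg_backend_pid()
+GROUP BY state
+ORDER BY count(*) DESC;
+```
+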
For more details on PgBouncer, review:
Azure Database for Flexible Server offers PgBouncer as a built-in connection pooling solution. For more information, see [PgBouncer](./concepts-pgbouncer.md).
-### Explain Analyze
+### Explain Analyze
Once high memory-consuming queries have been identified from Query Store, use **EXPLAIN** and **EXPLAIN ANALYZE** to investigate further and tune them. For more information on the **EXPLAIN** command, review [Explain Plan](https://www.postgresql.org/docs/current/sql-explain.html).
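+
+A minimal sketch of such an investigation (the table and query are placeholders for illustration, not objects from this article):
+
+```sql
+-- Hypothetical example: ANALYZE executes the query to get real timings; BUFFERS adds I/O detail.
+-- An "external merge Disk" line under a Sort node signals that work_mem was too small for the query.
+EXPLAIN (ANALYZE, BUFFERS)
+SELECT customer_id, SUM(amount) AS total
+FROM orders
+GROUP BY customer_id
+ORDER BY customer_id;
+```
+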
-## Next steps
+## Related content
-- Troubleshoot and tune Autovacuum [Autovacuum Tuning](./how-to-autovacuum-tuning.md).-- Troubleshoot High CPU Utilization [High CPU Utilization](./how-to-high-cpu-utilization.md).-- Configure server parameters [Server Parameters](./howto-configure-server-parameters-using-portal.md).
+- [Autovacuum Tuning](how-to-autovacuum-tuning.md)
+- [High CPU Utilization](how-to-high-cpu-utilization.md)
+- [Server Parameters](howto-configure-server-parameters-using-portal.md)
postgresql How To Identify Slow Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-identify-slow-queries.md
+
+ Title: Identify Slow Running Query for Azure Database for PostgreSQL - Flexible Server
+description: Troubleshooting guide for identifying slow running queries in Azure Database for PostgreSQL - Flexible Server
+++ Last updated : 10/26/2023++++
+# Troubleshoot and identify slow-running queries in Azure Database for PostgreSQL - Flexible Server
+
+This article shows you how to troubleshoot and identify slow-running queries using [Azure Database for PostgreSQL - Flexible Server](overview.md).
+
+In a high CPU utilization scenario, this article shows you how to:
+
+- Identify slow-running queries.
+
+- Identify a slow-running stored procedure, and identify the slow queries among the list of queries that belong to that stored procedure.
+
+## High CPU scenario - Identify slow query
+
+### Prerequisites
+
+You must enable the troubleshooting guides and the auto_explain extension on Azure Database for PostgreSQL - Flexible Server. To enable the troubleshooting guides, follow the steps mentioned [here](how-to-troubleshooting-guides.md).
+
+To enable the auto_explain extension, follow these steps:
+
+1. Add the auto_explain extension to the shared preload libraries from the server parameters page on the Flexible Server portal, as shown below.
+
+
+ :::image type="content" source="./media/how-to-identify-slow-queries/shared-preload-library.png" alt-text="Screenshot of server parameters page with shared preload libraries parameter." lightbox="./media/how-to-identify-slow-queries/shared-preload-library.png":::
+
+> [!NOTE]
+> Making this change will require a server restart.
+
+2. After the auto_explain extension is added to shared preload libraries and the server has restarted, change the highlighted auto_explain server parameters to `ON` on the server parameters page in the Flexible Server portal, and leave the remaining parameters at their default values, as shown below.
+
+ :::image type="content" source="./media/how-to-identify-slow-queries/auto-explain-parameters.png" alt-text="Screenshot of server parameters page with auto_explain parameters." lightbox="./media/how-to-identify-slow-queries/auto-explain-parameters.png":::
+
+> [!NOTE]
+> Setting the `auto_explain.log_min_duration` parameter to 0 starts logging all queries executed on the server, which can affect database performance. Exercise due diligence to arrive at a value that is considered slow on the server. For example, if 30 seconds is the threshold and all queries that run in under 30 seconds are acceptable for the application, it's advised to set the parameter to 30000 milliseconds. Any query that runs for more than 30 seconds on the server is then logged.
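+
+Once these parameters are in place, a quick check from any SQL client confirms the effective values (standard `SHOW` commands; the expected values in the comments are assumptions based on the configuration described above):
+
+```sql
+SHOW auto_explain.log_min_duration;  -- for example, 30000 (milliseconds) if 30 seconds is your threshold
+SHOW auto_explain.log_analyze;       -- expected: on
+```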
+
+### Scenario - Identify slow-running query
+
+With troubleshooting guides and auto_explain extension in place, we explain the scenario with the help of an example.
+
+We have a scenario where CPU utilization has spiked to 90%, and we would like to know the root cause of the spike. To debug the scenario, follow the steps below.
+
+1. As soon as you're alerted by a CPU scenario, go to the troubleshooting guides available under the Help tab on the Flexible server portal overview page.
+
+ :::image type="content" source="./media/how-to-identify-slow-queries/troubleshooting-guides-blade.png" alt-text="Screenshot of troubleshooting guides menu." lightbox="./media/how-to-identify-slow-queries/troubleshooting-guides-blade.png":::
+
+2. Select the High CPU Usage tab on the page that opens. The high CPU utilization troubleshooting guide opens.
+
+ :::image type="content" source="./media/how-to-identify-slow-queries/high-cpu-troubleshooting-guide.png" alt-text="Screenshot of troubleshooting guides menu - tabs. " lightbox="./media/how-to-identify-slow-queries/high-cpu-troubleshooting-guide.png":::
+
+3. Select the time range of the reported CPU spike using the time range dropdown list.
+
+ :::image type="content" source="./media/how-to-identify-slow-queries/high-cpu-timerange.png" alt-text="Screenshot of troubleshooting guides menu - CPU tab." lightbox="./media/how-to-identify-slow-queries/high-cpu-timerange.png":::
+
+4. Select the Top CPU Consuming Queries tab.
+
+   The tab shows details of all the queries that ran in the interval where 90% CPU utilization was seen. From the snapshot, the query with the slowest average execution time during the interval took ~2.6 minutes on average, and it ran 22 times during the interval. This query is most likely the cause of the CPU spikes.
+
+ :::image type="content" source="./media/how-to-identify-slow-queries/high-cpu-query.png" alt-text="Screenshot of troubleshooting guides menu - Top CPU consuming queries tab." lightbox="./media/how-to-identify-slow-queries/high-cpu-query.png":::
+
+5. Connect to the azure_sys database and execute the following script to retrieve the actual query text:
+
+```sql
+ psql -h ServerName.postgres.database.azure.com -U AdminUsername -d azure_sys
+
+ SELECT query_sql_text
+ FROM query_store.query_texts_view
+ WHERE query_text_id = <add query id identified>;
+```
+
+6. In the example considered, the query that was found slow was the following:
+
+```sql
+SELECT c_id, SUM(c_balance) AS total_balance
+FROM customer
+GROUP BY c_w_id,c_id
+order by c_w_id;
+```
+
+7. To understand the exact explain plan that was generated, use the Postgres logs. The auto_explain extension logs an entry every time a query execution completes during the interval. Select the Logs section under the `Monitoring` tab on the Flexible Server portal overview page.
+
+ :::image type="content" source="./media/how-to-identify-slow-queries/log-analytics-tab.png" alt-text="Screenshot of troubleshooting guides menu - Logs." lightbox="./media/how-to-identify-slow-queries/log-analytics-tab.png":::
+
+
+8. Select the time range where 90% CPU Utilization was found.
+
+ :::image type="content" source="./media/how-to-identify-slow-queries/log-analytics-timerange.png" alt-text="Screenshot of troubleshooting guides menu - Logs Timerange." lightbox="./media/how-to-identify-slow-queries/log-analytics-timerange.png":::
+
+9. Execute the below query to retrieve the explain analyze output of the query identified.
+
+```sql
+AzureDiagnostics
+| where Category contains 'PostgreSQLLogs'
+| where Message contains "<add snippet of SQL text identified or add table name involved in the query>"
+| project TimeGenerated, Message
+```
+
+The message column will store the execution plan as shown below:
+
+```sql
+2023-10-10 19:56:46 UTC-6525a8e7.2e3d-LOG: duration: 150692.864 ms plan:
+
+Query Text: SELECT c_id, SUM(c_balance) AS total_balance
+FROM customer
+GROUP BY c_w_id,c_id
+order by c_w_id;
+GroupAggregate (cost=1906820.83..2131820.83 rows=10000000 width=40) (actual time=70930.833..129231.950 rows=10000000 loops=1)
+Output: c_id, sum(c_balance), c_w_id
+Group Key: customer.c_w_id, customer.c_id
+Buffers: shared hit=44639 read=355362, temp read=77521 written=77701
+-> Sort (cost=1906820.83..1931820.83 rows=10000000 width=13) (actual time=70930.821..81898.430 rows=10000000 loops=1)
+Output: c_id, c_w_id, c_balance
+Sort Key: customer.c_w_id, customer.c_id
+Sort Method: external merge Disk: 225104kB
+Buffers: shared hit=44639 read=355362, temp read=77521 written=77701
+-> Seq Scan on public.customer (cost=0.00..500001.00 rows=10000000 width=13) (actual time=0.021..22997.697 rows=10000000 loops=1)
+Output: c_id, c_w_id, c_balance
+```
+
+The query ran for ~2.5 minutes, as shown in troubleshooting guides, and is confirmed by the `duration` value of 150692.864 ms from the execution plan output fetched. Use the explain analyze output to troubleshoot further and tune the query.
+
+> [!NOTE]
+> Note that the query ran 22 times during the interval; the log shown above is one such entry captured during the interval.
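+
+As one possible next step (an assumption for illustration, not a prescribed fix), the plan above spends most of its time in a sequential scan followed by an external merge sort on `(c_w_id, c_id)`. A covering index that matches the GROUP BY and ORDER BY keys can let the planner avoid the on-disk sort:
+
+```sql
+-- Hypothetical tuning step: build the index without blocking writes; INCLUDE keeps c_balance
+-- available to the aggregate without widening the sort keys (PostgreSQL 11 or later).
+CREATE INDEX CONCURRENTLY idx_customer_w_id_c_id
+    ON customer (c_w_id, c_id)
+    INCLUDE (c_balance);
+```
+
+Rerun `EXPLAIN (ANALYZE, BUFFERS)` afterward to confirm whether the sort node and its `external merge Disk` spill disappear from the plan.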
+
+## High CPU scenario - Identify slow-running procedure and slow queries associated with the procedure
+
+In the second scenario, a stored procedure execution time is found to be slow, and the goal is to identify and tune the slow-running query that is part of the stored procedure.
+
+### Prerequisites
+
+You must enable the troubleshooting guides and the auto_explain extension on Azure Database for PostgreSQL - Flexible Server as a prerequisite. To enable the troubleshooting guides, follow the steps mentioned [here](how-to-troubleshooting-guides.md).
+
+To enable the auto_explain extension, follow these steps:
+
+1. Add the auto_explain extension to the shared preload libraries from the server parameters page on the Flexible Server portal, as shown below.
+
+ :::image type="content" source="./media/how-to-identify-slow-queries/shared-preload-library.png" alt-text="Screenshot of server parameters page with shared preload libraries parameter - Procedure." lightbox="./media/how-to-identify-slow-queries/shared-preload-library.png":::
+
+> [!NOTE]
+> Making this change will require a server restart.
+
+2. After the auto_explain extension is added to shared preload libraries and the server has restarted, change the highlighted auto_explain server parameters to `ON` on the server parameters page in the Flexible Server portal, and leave the remaining parameters at their default values, as shown below.
+
+ :::image type="content" source="./media/how-to-identify-slow-queries/auto-explain-procedure-parameters.png" alt-text="Screenshot of server parameters blade with auto_explain parameters - Procedure." lightbox="./media/how-to-identify-slow-queries/auto-explain-procedure-parameters.png":::
+
+> [!NOTE]
+>- Setting the `auto_explain.log_min_duration` parameter to 0 starts logging all queries executed on the server, which can affect database performance. Exercise due diligence to arrive at a value that is considered slow on the server. For example, if 30 seconds is the threshold and all queries that run in under 30 seconds are acceptable for the application, it's advised to set the parameter to 30000 milliseconds. Any query that runs for more than 30 seconds on the server is then logged.
+>- The parameter `auto_explain.log_nested_statements` causes nested statements (statements executed inside a function or procedure) to be considered for logging. When it's off, only top-level query plans are logged.
+
+### Scenario - Identify slow-running query in a stored procedure
+
+With troubleshooting guides and auto_explain extension in place, we explain the scenario with the help of an example.
+
+We have a scenario where CPU utilization has spiked to 90%, and we would like to know the root cause of the spike. To debug the scenario, follow the steps below.
+
+1. As soon as you're alerted by a CPU scenario, go to the troubleshooting guides available under the Help tab on the Flexible server portal overview page.
+
+ :::image type="content" source="./media/how-to-identify-slow-queries/troubleshooting-guides-blade.png" alt-text="Screenshot of troubleshooting guides menu." lightbox="./media/how-to-identify-slow-queries/troubleshooting-guides-blade.png":::
+
+2. Select the High CPU Usage tab on the page that opens. The high CPU utilization troubleshooting guide opens.
+
+ :::image type="content" source="./media/how-to-identify-slow-queries/high-cpu-troubleshooting-guide.png" alt-text="Screenshot of troubleshooting guides tabs." lightbox="./media/how-to-identify-slow-queries/high-cpu-troubleshooting-guide.png":::
+
+3. Select the time range of the reported CPU spike using the time range dropdown list.
+
+ :::image type="content" source="./media/how-to-identify-slow-queries/high-cpu-procedure-timerange.png" alt-text="Screenshot of troubleshooting guides - CPU tab." lightbox="./media/how-to-identify-slow-queries/high-cpu-procedure-timerange.png":::
+
+4. Select the Top CPU Consuming Queries tab.
+
+   The tab shows details of all the queries that ran in the interval where 90% CPU utilization was seen. From the snapshot, the query with the slowest average execution time during the interval took ~6.3 minutes on average, and it ran 35 times during the interval. This query is most likely the cause of the CPU spikes.
+
+ :::image type="content" source="./media/how-to-identify-slow-queries/high-cpu-procedure.png" alt-text="Screenshot of troubleshooting guides - CPU tab - queries." lightbox="./media/how-to-identify-slow-queries/high-cpu-procedure.png":::
+
+   It's important to note from the snapshot that the query type highlighted is `Utility`. Generally, a utility can be a stored procedure or function running during the interval.
+
+5. Connect to the azure_sys database and execute the following script to retrieve the actual query text:
+
+```sql
+ psql -h ServerName.postgres.database.azure.com -U AdminUsername -d azure_sys
+
+ SELECT query_sql_text
+ FROM query_store.query_texts_view
+ WHERE query_text_id = <add query id identified>;
+```
+
+6. In the example considered, the query that was found slow was a stored procedure as mentioned below:
+
+```sql
+ call autoexplain_test ();
+```
+
+7. To understand the exact explain plans generated for the queries that are part of the stored procedure, use the Postgres logs. The auto_explain extension logs an entry every time a query execution completes during the interval. Select the Logs section under the `Monitoring` tab on the Flexible Server portal overview page.
+
+ :::image type="content" source="./media/how-to-identify-slow-queries/log-analytics-tab.png" alt-text="Screenshot of troubleshooting guides menu - Logs." lightbox="./media/how-to-identify-slow-queries/log-analytics-tab.png":::
+
+8. Select the time range where 90% CPU Utilization was found.
+
+ :::image type="content" source="./media/how-to-identify-slow-queries/log-analytics-timerange.png" alt-text="Screenshot of troubleshooting guides menu - Logs Time range." lightbox="./media/how-to-identify-slow-queries/log-analytics-timerange.png":::
+
+9. Execute the query below to retrieve the explain analyze output of the identified query.
+
+```sql
+AzureDiagnostics
+| where Category contains 'PostgreSQLLogs'
+| where Message contains "<add a snippet of SQL text identified or add table name involved in the queries related to stored procedure>"
+| project TimeGenerated, Message
+```
+
+The procedure has multiple queries, which are highlighted below. The explain analyze output of every query used in the stored procedure is logged so that you can analyze it further and troubleshoot. The execution times of the logged queries can be used to identify the slowest queries that are part of the stored procedure.
+
+```sql
+2023-10-11 17:52:45 UTC-6526d7f0.7f67-LOG: duration: 38459.176 ms plan:
+
+Query Text: insert into customer_balance SELECT c_id, SUM(c_balance) AS total_balance FROM customer GROUP BY c_w_id,c_id order by c_w_id
+Insert on public.customer_balance (cost=1906820.83..2231820.83 rows=0 width=0) (actual time=38459.173..38459.174 rows=0 loops=1)Buffers: shared hit=10108203 read=454055 dirtied=54058, temp read=77521 written=77701 WAL: records=10000000 fpi=1 bytes=640002197
+ -> Subquery Scan on "*SELECT*" (cost=1906820.83..2231820.83 rows=10000000 width=36) (actual time=20415.738..29514.742 rows=10000000 loops=1)
+ Output: "*SELECT*".c_id, "*SELECT*".total_balance Buffers: shared hit=1 read=400000, temp read=77521 written=77701
+ -> GroupAggregate (cost=1906820.83..2131820.83 rows=10000000 width=40) (actual time=20415.737..28574.266 rows=10000000 loops=1)
+ Output: customer.c_id, sum(customer.c_balance), customer.c_w_id Group Key: customer.c_w_id, customer.c_id Buffers: shared hit=1 read=400000, temp read=77521 written=77701
+ -> Sort (cost=1906820.83..1931820.83 rows=10000000 width=13) (actual time=20415.723..22023.515 rows=10000000 loops=1)
+ Output: customer.c_id, customer.c_w_id, customer.c_balance Sort Key: customer.c_w_id, customer.c_id Sort Method: external merge Disk: 225104kB Buffers: shared hit=1 read=400000, temp read=77521 written=77701
+ -> Seq Scan on public.customer (cost=0.00..500001.00 rows=10000000 width=13) (actual time=0.310..15061.471 rows=10000000 loops=1) Output: customer.c_id, customer.c_w_id, customer.c_balance Buffers: shared hit=1 read=400000
+
+2023-10-11 17:52:07 UTC-6526d7f0.7f67-LOG: duration: 61939.529 ms plan:
+Query Text: delete from customer_balance
+Delete on public.customer_balance (cost=0.00..799173.51 rows=0 width=0) (actual time=61939.525..61939.526 rows=0 loops=1) Buffers: shared hit=50027707 read=620942 dirtied=295462 written=71785 WAL: records=50026565 fpi=34 bytes=2711252160
+ -> Seq Scan on public.customer_balance (cost=0.00..799173.51 rows=15052451 width=6) (actual time=3185.519..35234.061 rows=50000000 loops=1)
+ Output: ctid Buffers: shared hit=27707 read=620942 dirtied=26565 written=71785 WAL: records=26565 fpi=34 bytes=11252160
++
+2023-10-11 17:51:05 UTC-6526d7f0.7f67-LOG: duration: 10387.322 ms plan:
+Query Text: select max(c_id) from customer
+Finalize Aggregate (cost=180185.84..180185.85 rows=1 width=4) (actual time=10387.220..10387.319 rows=1 loops=1) Output: max(c_id) Buffers: shared hit=37148 read=1204 written=69
+ -> Gather (cost=180185.63..180185.84 rows=2 width=4) (actual time=10387.214..10387.314 rows=1 loops=1)
+ Output: (PARTIAL max(c_id)) Workers Planned: 2 Workers Launched: 0 Buffers: shared hit=37148 read=1204 written=69
+ -> Partial Aggregate (cost=179185.63..179185.64 rows=1 width=4) (actual time=10387.012..10387.013 rows=1 loops=1) Output: PARTIAL max(c_id) Buffers: shared hit=37148 read=1204 written=69
+ -> Parallel Index Only Scan using customer_i1 on public.customer (cost=0.43..168768.96 rows=4166667 width=4) (actual time=0.446..7676.356 rows=10000000 loops=1)
+ Output: c_w_id, c_d_id, c_id Heap Fetches: 24 Buffers: shared hit=37148 read=1204 written=69
+```
+
+> [!NOTE]
+> For demonstration purposes, the explain analyze output of only a few queries used in the procedure is shared. The idea is that you can gather the explain analyze output of all the queries from the logs, identify the slowest of them, and try to tune them.
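+
+As an example of acting on this output (an illustration that assumes the table is fully rebuilt on each run, which the procedure suggests but doesn't guarantee), the `delete from customer_balance` statement alone took ~62 seconds. If the intent is simply to empty the table, `TRUNCATE` avoids row-by-row deletion and most of the WAL volume shown in that plan:
+
+```sql
+-- Hypothetical rewrite of the slowest step: TRUNCATE is transactional in PostgreSQL,
+-- but it takes an ACCESS EXCLUSIVE lock, so it only fits if nothing reads the table concurrently.
+TRUNCATE TABLE customer_balance;
+```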
+
+## Related content
+
+- [High CPU Utilization](how-to-high-cpu-utilization.md)
+- [Autovacuum Tuning](how-to-autovacuum-tuning.md)
postgresql How To Maintenance Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-maintenance-portal.md
Last updated 11/30/2021
You can specify maintenance options for each flexible server in your Azure subscription. Options include the maintenance schedule and notification settings for upcoming and finished maintenance events. ## Prerequisites+ To complete this how-to guide, you need: - An [Azure Database for PostgreSQL - Flexible Server](quickstart-create-server-portal.md)
postgresql How To Perform Fullvacuum Pg Repack https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-perform-fullvacuum-pg-repack.md
+
+ Title: Optimize Azure Database for PostgreSQL Flexible Server by using pg_repack
+description: Perform full vacuum using pg_Repack extension in Azure Database for PostgreSQL - Flexible Server
+++ Last updated : 10/26/2023+++++
+# Optimize Azure Database for PostgreSQL Flexible Server by using pg_repack
+
+In this article, you learn how to use pg_repack to remove bloat and improve the performance of your Azure Database for PostgreSQL Flexible Server. Bloat is unnecessary data that accumulates in tables and indexes due to frequent updates and deletes. Bloat can cause the database size to grow larger than expected and affects query performance. By using pg_repack, you can reclaim the wasted space and reorganize the data more efficiently.
+
+## What is pg_repack?
+
+pg_repack is a PostgreSQL extension that removes bloat from tables and indexes and reorganizes them more efficiently. pg_repack works by creating a new copy of the target table or index, applying any changes that occurred during the process, and then swapping the old and new versions atomically. pg_repack doesn't require any downtime or exclusive locks on the target table or index except for a brief period at the beginning and end of the operation. You can use pg_repack to optimize any table or index in your PostgreSQL database, except for the default PostgreSQL database.
+
+### How to use pg_repack?
+
+To use pg_repack, you need to install the extension in your PostgreSQL database and then run the pg_repack command, specifying the table name or index you want to optimize. The extension acquires locks on the table or index to prevent other operations from being performed while the optimization is in progress. It will then remove the bloat and reorganize the data more efficiently.
+
+### How full table repack works
+
+To perform a full table repack, pg_repack will follow these steps:
+
+1. Create a log table to record changes made to the original table.
+2. Add a trigger to the original table, logging INSERTs, UPDATEs, and DELETEs into the log table.
+3. Create a new table containing all the rows in the old table.
+4. Build indexes on the new table.
+5. Apply all changes recorded in the log table to the new table.
+6. Swap the tables, including indexes and toast tables, using the system catalogs.
+7. Drop the original table.
+
+During these steps, pg_repack will only hold an ACCESS EXCLUSIVE lock for a short period during the initial setup (steps 1 and 2) and the final swap-and-drop phase (steps 6 and 7). For the rest of the time, pg_repack will only need to hold an ACCESS SHARE lock on the original table, allowing INSERTs, UPDATEs, and DELETEs to proceed as usual.
+
+### Limitations
+
+pg_repack has some limitations that you should be aware of before using it:
+
+- The pg_repack extension can't be used to repack the default database named `postgres`. This is due to pg_repack not having the necessary permissions to operate against extensions installed by default on this database. The extension can be created in PostgreSQL, but it can't run.
+- The target table must have either a PRIMARY KEY or a UNIQUE index on a NOT NULL column for the operation to be successful; a query to spot tables that don't meet this requirement is sketched after this list.
+- While pg_repack is running, you won't be able to perform any DDL commands on the target table(s) except for VACUUM or ANALYZE. To ensure these restrictions are enforced, pg_repack will hold an ACCESS SHARE lock on the target table during a
+ full table repack.
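+
+As referenced above, a catalog query like the following can serve as a starting point for finding tables that would fail the PRIMARY KEY / UNIQUE index requirement (an illustrative query; it only checks for a primary key constraint, not for a UNIQUE index on NOT NULL columns):
+
+```sql
+-- List ordinary user tables that have no primary key constraint
+SELECT n.nspname AS schema_name, c.relname AS table_name
+FROM pg_class c
+JOIN pg_namespace n ON n.oid = c.relnamespace
+WHERE c.relkind = 'r'
+  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
+  AND NOT EXISTS (
+        SELECT 1
+        FROM pg_constraint con
+        WHERE con.conrelid = c.oid
+          AND con.contype = 'p'
+  );
+```
+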
+
+## Setup
+
+### Prerequisites
+
+To enable the pg_repack extension, follow the steps below:
+
+1. Add the pg_repack extension under Azure extensions from the server parameters blade on the Flexible Server portal, as shown below.
+
+ :::image type="content" source="./media/how-to-perform-fullvacuum-pg-repack/portal.png" alt-text="Screenshot of server parameters blade with Azure extensions parameter." lightbox="./media/how-to-perform-fullvacuum-pg-repack/portal.png":::
+
+> [!NOTE]
+> Making this change will not require a server restart.
+
+### Install the packages for Ubuntu virtual machine
+
+Using the extension requires a client with psql and pg_repack installed. All examples in this document use an Ubuntu VM with PostgreSQL 11 to 15.
+
+Run the following commands on the Ubuntu machine to install the pg_repack 1.4.7 client:
+
+```psql
+sudo sh -c 'echo "deb https://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list' && \
+wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add - && \
+sudo apt-get update && \
+sudo apt install -y postgresql-server-dev-14 && \
+sudo apt install -y unzip make gcc && \
+sudo apt-get install -y libssl-dev liblz4-dev zlib1g-dev libreadline-dev && \
+wget 'https://api.pgxn.org/dist/pg_repack/1.4.7/pg_repack-1.4.7.zip' && \
+unzip pg_repack-1.4.7.zip && \
+cd pg_repack-1.4.7 && \
+sudo make && \
+sudo cp bin/pg_repack /usr/local/bin && \
+pg_repack -V
+```
+
+## Use pg_repack
+
+The following example shows how to run pg_repack on a table named *info* in the public schema of the database *foo*, on a Flexible Server with endpoint pgserver.postgres.database.azure.com and username azureuser.
+
+1. Connect to the Flexible Server instance. This article uses psql for simplicity.
+
+ ```psql
+ psql "host=xxxxxxxxx.postgres.database.azure.com port=5432 dbname=foo user=xxxxxxxxxxxxx password=[my_password] sslmode=require"
+ ```
+2. Create the pg_repack extension in the databases intended to be repacked.
+
+ ```psql
+ foo=> create extension pg_repack;
+ CREATE EXTENSION
+   ```
+
+3. Find the pg_repack version installed on the server.
+
+   ```psql
+   foo=> \dx
+                     List of installed extensions
+      Name    | Version | Schema |                         Description
+   -----------+---------+--------+--------------------------------------------------------------
+    pg_repack | 1.4.7   | public | Reorganize tables in PostgreSQL databases with minimal locks
+   (1 row)
+   ```
+
+   This version should match the pg_repack version installed on the virtual machine. Check this by running the following:
+
+   ```psql
+   azureuser@azureuser:~$ pg_repack --version
+   pg_repack 1.4.7
+   ```
+
+4. Run pg_repack client against a table *info* within database *foo*.
+
+ ```psql
+ pg_repack --host=xxxxxxxxxxxx.postgres.database.azure.com --username=xxxxxxxxxx --dbname=foo --table=info --jobs=2 --no-kill-backend --no-superuser-check
+ ```
+
+### pg_repack options
+
+Useful pg_repack options for production workloads:
+
+- -k, --no-superuser-check
+ Skip the superuser checks in the client. This setting is helpful for using pg_repack on platforms that support running it as non-superusers, like Azure Database for PostgreSQL Flexible Servers.
+
+- -j, --jobs
+ Create the specified number of extra connections to PostgreSQL and use these extra connections to parallelize the rebuild of indexes on each table. Parallel index builds are only supported for full-table repacks.
+
+- --index or --only indexes options
+ If your PostgreSQL server has extra cores and disk I/O available, this can be a useful way to speed up pg_repack.
+
+- -D, --no-kill-backend
+  Skip repacking the table if the lock can't be acquired within the duration specified by `--wait-timeout` (default 60 seconds), instead of canceling the conflicting queries. The default is false.
+
+- -E LEVEL, --elevel=LEVEL
+ Choose the output message level from DEBUG, INFO, NOTICE, WARNING, ERROR, LOG, FATAL, and PANIC. The default is INFO.
+
+To understand all the options, refer to [pg_repack options](https://reorg.github.io/pg_repack/)
+
+## Related content
+
+> [!div class="nextstepaction"]
+> [Autovacuum Tuning](how-to-autovacuum-tuning.md)
resource-mover Manage Resources Created Move Process https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/manage-resources-created-move-process.md
Previously updated : 02/24/2021 Last updated : 10/31/2023
resource-mover Modify Target Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/modify-target-settings.md
Previously updated : 12/29/2021 Last updated : 10/31/2023 #Customer intent: As an Azure admin, I want to modify destination settings when moving resources to another region.
resource-mover Move Across Region Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/move-across-region-dashboard.md
Previously updated : 10/06/2021 Last updated : 10/31/2023 # Move across region dashboard+ This article describes how to monitor the resources you are moving across regions via the Move across region dashboard in Azure Resource Mover. + ## Monitor via the dashboard+ 1. In **Azure Resource Mover**, select **Overview** the left navigation pane. You can toggle between two pages, **Getting started** and **Move across region dashboard**. **Getting started** page provides options to move your resources across subscription, across resource group and across region. The **Move across region dashboard** page combines all monitoring information of your move across region in a single place. [![Move across region dashboard tab](media\move-across-region-dashboard\move-across-region-dashboard-tab.png)](media\move-across-region-dashboard\move-across-region-dashboard-tab.png)
The **Move across region dashboard** page combines all monitoring information of
[![Filters](media\move-across-region-dashboard\move-across-region-dashboard-filters.png)](media\move-across-region-dashboard\move-across-region-dashboard-filters.png) 4. Navigate to the details page by selecting on **View all resources** next to the source - destination. [![Details](media\move-across-region-dashboard\move-across-region-dashboard-details.png)](media\move-across-region-dashboard\move-across-region-dashboard-details.png)+ ## Next steps+ [Learn about](about-move-process.md) the move process.
resource-mover Remove Move Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/remove-move-resources.md
Previously updated : 02/22/2020 Last updated : 10/30/2023 #Customer intent: As an Azure admin, I want remove resources I've added to a move collection.
This article describes how to remove resources from a move collection, or remove a move collection/resource group, in [Azure Resource Mover](overview.md). Move collections are used when moving Azure resources between Azure regions.
-## Remove a resource (portal)
+## Remove a resource on portal
You can remove resources in a move collection, in the Resource Mover portal as follows:
-1. In **Across regions**, select all the resources you want to remove from the collection, and select **Remove**.
+1. On the **Azure Resource Mover** > **Across regions** pane, select all the resources you want to remove from the collection, and select **Remove**.
- ![Button to select to remove](./media/remove-move-resources/portal-select-resources.png)
+ :::image type="content" source="./media/remove-move-resources/across-region.png" alt-text="Screenshot of the **Across regions** pane." lightbox="./media/remove-move-resources/across-region.png" :::
-2. In **Remove resources**, click **Remove**.
+ :::image type="content" source="./media/remove-move-resources/portal-select-resources.png" alt-text="Screenshot of the Button to select to remove." :::
- ![Button to select to remove resources from a move collection](./media/remove-move-resources/remove-portal.png)
+2. In **Remove resources**, select **Remove**.
-## Remove a move collection/resource group (portal)
+ :::image type="content" source="./media/remove-move-resources/remove-portal.png" alt-text="Screenshot of the Button to select to remove resources from a move collection." :::
-You can remove a move collection/resource group in the portal.
+## Remove a move collection or a resource group on portal
-1. Follow the instructions in the procedure above to remove resources from the collection. If you're removing a resource group, make sure it doesn't contain any resources.
+You can remove a move collection/resource group in the portal. Removing a move collection/resource group deletes all the resources in the collection.
+
+To remove a move collection/resource group, follow these steps:
+
+1. Follow [these instructions](#remove-a-resource-on-portal) to remove resources from the collection. If you're removing a resource group, make sure it doesn't contain any resources.
2. Delete the move collection or resource group.
-## Remove a resource (PowerShell)
+## Remove a resource using PowerShell
Using PowerShell cmdlets you can remove a single resource from a MoveCollection, or remove multiple resources.
Remove-AzResourceMoverMoveResource -ResourceGroupName "RG-MoveCollection-demoRMS
``` **Output after running cmdlet**
-![Output text after removing a resource from a move collection](./media/remove-move-resources/powershell-remove-single-resource.png)
### Remove multiple resources
Remove multiple resources as follows:
**Output after running cmdlet**
- ![Output text after removing multiple resources from a move collection](./media/remove-move-resources/remove-multiple-validate-dependencies.png)
+ :::image type="content" source="./media/remove-move-resources/remove-multiple-validate-dependencies.png" alt-text="Screenshot of output text after removing multiple resources from a move collection." :::
2. Retrieve the dependent resources that need to be removed (along with our example virtual network psdemorm-vnet):
Remove multiple resources as follows:
**Output after running cmdlet**
- ![Output text after retrieving dependent resources that need to be removed](./media/remove-move-resources/remove-multiple-get-dependencies.png)
+ :::image type="content" source="./media/remove-move-resources/remove-multiple-get-dependencies.png" alt-text="Screenshot of output text after retrieving dependent resources that need to be removed." :::
3. Remove all resources, along with the virtual network:
Remove multiple resources as follows:
**Output after running cmdlet**
- ![Output text after removing all resources from a move collection](./media/remove-move-resources/remove-multiple-all.png)
+ :::image type="content" source="./media/remove-move-resources/remove-multiple-all.png" alt-text="Screenshot of output text after removing all resources from a move collection." :::
-## Remove a collection (PowerShell)
+## Remove a collection using PowerShell
Remove an entire move collection from the subscription, as follows:
-1. Follow the instructions above to remove resources in the collection using PowerShell.
-2. Remove a collection as follows:
+1. Follow [these instructions](#remove-a-resource-using-powershell) to remove resources in the collection using PowerShell.
+2. Then remove a collection as follows:
```azurepowershell-interactive Remove-AzResourceMoverMoveCollection -ResourceGroupName "RG-MoveCollection-demoRMS" -MoveCollectionName "PS-centralus-westcentralus-demoRMS"
Remove an entire move collection from the subscription, as follows:
**Output after running cmdlet**
- ![Output text after removing a move collection](./media/remove-move-resources/remove-collection.png)
+ :::image type="content" source="./media/remove-move-resources/remove-collection.png" alt-text="Screenshot of output text after removing a move collection." :::
+
+> [!NOTE]
+> For removing resources in bulk where the dependency tree is not identified, use [Invoke-AzResourceMoverBulkRemove (Az.ResourceMover)](/powershell/module/az.resourcemover/invoke-azresourcemoverbulkremove).
## VM resource state after removing

What happens when you remove a VM resource from a move collection depends on the resource state, as summarized in the table.

### Remove VM state

**Resource state** | **VM** | **Networking**
--- | --- | ---
**Added to move collection** | Delete from move collection. | Delete from move collection.
resource-mover Tutorial Move Region Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/tutorial-move-region-powershell.md
Previously updated : 10/12/2023 Last updated : 10/30/2023 #Customer intent: As an Azure admin, I want to move Azure VMs to a different Azure region using Azure Resource Mover with PowerShell
Verify the following requirements:
| Requirement | Description | | | |
-| **Subscription permissions** | Check you have *Owner* access on the subscription containing the resources that you want to move<br/><br/> **Why do I need Owner access?** The first time you add a resource for a specific source and destination pair in an Azure subscription, Resource Mover creates a [system-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) (formerly known as Managed Service Identify (MSI)) that's trusted by the subscription. To create the identity, and to assign it the required role (Contributor or User Access administrator in the source subscription), the account you use to add resources needs *Owner* permissions on the subscription. [Learn more](../role-based-access-control/rbac-and-directory-admin-roles.md#azure-roles) about Azure roles. |
+| **Subscription permissions** | Check you have *Owner* access on the subscription containing the resources that you want to move.<br/><br/> The first time you add a resource for a specific source and destination pair in an Azure subscription, a [system-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) (formerly known as Managed Service Identify (MSI)) that's trusted by the subscription is necessary. To create the identity, and to assign it the required role (Contributor or User Access administrator in the source subscription), the account you use to add resources needs *Owner* permissions on the subscription. [Learn more](../role-based-access-control/rbac-and-directory-admin-roles.md#azure-roles) about Azure roles. |
| **Resource Mover support** | [Review](common-questions.md) supported regions and other common questions.| | **VM support** | Check that any VMs you want to move are supported.<br/><br/> - [Verify](support-matrix-move-region-azure-vm.md#windows-vm-support) supported Windows VMs.<br/><br/> - [Verify](support-matrix-move-region-azure-vm.md#linux-vm-support) supported Linux VMs and kernel versions.<br/><br/> - Check supported [compute](support-matrix-move-region-azure-vm.md#supported-vm-compute-settings), [storage](support-matrix-move-region-azure-vm.md#supported-vm-storage-settings), and [networking](support-matrix-move-region-azure-vm.md#supported-vm-networking-settings) settings.| | **SQL support** | If you want to move SQL resources, review the [SQL requirements list](tutorial-move-region-sql.md#check-sql-requirements).|
New-AzResourceMoverMoveCollection -Name "PS-centralus-westcentralus-demoRMS" -R
**Output**:
-![Output text after creating move collection](./media/tutorial-move-region-powershell/output-move-collection.png)
### Grant access to the managed identity
Add resources as follows:
**Output**
- ![Output text after retrieving the resource ID](./media/tutorial-move-region-powershell/output-retrieve-resource.png)
+ :::image type="content" source="./media/tutorial-move-region-powershell/output-retrieve-resource.png" alt-text="Screenshot of the output text after retrieving the resource ID." :::
2. Create the target resource settings object per the resource you're moving. In our case, it's a VM.
Add resources as follows:
``` **Output**
- ![Output text after adding the resource](./media/tutorial-move-region-powershell/output-add-resource.png)
+
+ :::image type="content" source="./media/tutorial-move-region-powershell/output-add-resource.png" alt-text="Screenshot of the output text after adding the resource." :::
## Validate and add dependencies
Check whether the resources you added have any dependencies on other resources,
**Output (when dependencies exist)**
- ![Output text after validating dependencies](./media/tutorial-move-region-powershell/dependency-output.png)
+ :::image type="content" source="./media/tutorial-move-region-powershell/dependency-output.png" alt-text="Screenshot of the output text after validating dependencies." :::
2. Identify missing dependencies:
Check whether the resources you added have any dependencies on other resources,
**Output**
- ![Output text after retrieving a list of all dependencies](./media/tutorial-move-region-powershell/dependencies-list.png)
+ :::image type="content" source="./media/tutorial-move-region-powershell/dependencies-list.png" alt-text="Screenshot of the output text after retrieving a list of all dependencies." :::
- To retrieve only first-level dependencies (direct dependencies for the resource):
Check whether the resources you added have any dependencies on other resources,
``` **Output**
-
- ![Output text after retrieving a list of first-level dependencies](./media/tutorial-move-region-powershell/dependencies-list-direct.png)
+
+ :::image type="content" source="./media/tutorial-move-region-powershell/dependencies-list-direct.png" alt-text="Screenshot of the output text after retrieving a list of first-level dependencies." :::
3. To add any outstanding missing dependencies, repeat the instructions above to [add resources to the move collection](#add-resources-to-the-move-collection), and revalidate until there are no outstanding resources.
After preparing and moving the source resource group, we can prepare VM resource
**Output**
- ![Output text after validating the VM before preparing it](./media/tutorial-move-region-powershell/validate-vm-before-move.png)
+ :::image type="content" source="./media/tutorial-move-region-powershell/validate-vm-before-move.png" alt-text="Screenshot of the output text after validating the VM before preparing it." :::
2. Get the dependent resources that need to be prepared along with the VM.
After preparing and moving the source resource group, we can prepare VM resource
**Output**
- ![Output text after retrieving dependent VM resources](./media/tutorial-move-region-powershell/get-resources-before-prepare.png)
+ :::image type="content" source="./media/tutorial-move-region-powershell/get-resources-before-prepare.png" alt-text="Screenshot of the output text after retrieving dependent VM resources." :::
3. Initiate the prepare process for all dependent resources.
After preparing and moving the source resource group, we can prepare VM resource
**Output**
- ![Output text after initating prepare of all resources](./media/tutorial-move-region-powershell/initiate-prepare-all.png)
+ :::image type="content" source="./media/tutorial-move-region-powershell/initiate-prepare-all.png" alt-text="Screenshot of the output text after initiating prepare of all resources." :::
> [!NOTE] > You can provide the source resource ID instead of the resource name as the input parameters for the Prepare cmdlet, as well as in the Initiate Move and Commit cmdlets. To do this, run:
After preparing and moving the source resource group, we can prepare VM resource
**Output**
- ![Output text after checking initiate state](./media/tutorial-move-region-powershell/verify-initiate-move-pending.png)
+ :::image type="content" source="./media/tutorial-move-region-powershell/verify-initiate-move-pending.png" alt-text="Screenshot of the output text after checking initiate state." :::
2. Initiate the move:
After preparing and moving the source resource group, we can prepare VM resource
**Output**
- ![Output text after initiating the move of resources](./media/tutorial-move-region-powershell/initiate-resources-move.png)
+ :::image type="content" source="./media/tutorial-move-region-powershell/initiate-resources-move.png" alt-text="Screenshot of the output text after initiating the move of resources." :::
## Discard or commit?
Invoke-AzResourceMoverDiscard -ResourceGroupName "RG-MoveCollection-demoRMS" -Mo
**Output**
-![Output text after discarding the move](./media/tutorial-move-region-powershell/discard-move.png)
### Commit the move
Invoke-AzResourceMoverDiscard -ResourceGroupName "RG-MoveCollection-demoRMS" -Mo
**Output**
- ![Output text after committing the move](./media/tutorial-move-region-powershell/commit-move.png)
+ :::image type="content" source="./media/tutorial-move-region-powershell/commit-move.png" alt-text="Screenshot of the output text after committing the move." :::
2. Verify that all resources have moved to the target region:
sap Deploy S4hana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/deploy-s4hana.md
description: Learn how to deploy S/4HANA infrastructure with Azure Center for SA
Previously updated : 02/22/2023 Last updated : 10/30/2023 #Customer intent: As a developer, I want to deploy S/4HANA infrastructure using Azure Center for SAP solutions so that I can manage SAP workloads in the Azure portal.
There are three deployment options that you can select for your infrastructure,
## Supported software
-Azure Center for SAP solutions supports the following SAP software versions: S/4HANA 1909 SPS 03, S/4HANA 2020 SPS 03, and S/4HANA 2021 ISS 00.
+Azure Center for SAP solutions supports the following SAP software versions: S/4HANA 1909 SPS 03, S/4HANA 2020 SPS 03, S/4HANA 2021 ISS 00 and S/4HANA 2022 ISS 00.
The following operating system (OS) software versions are compatible with these SAP software versions:
The following operating system (OS) software versions are compatible with these
| SUSE | SUSE Linux Enterprise Server (SLES) for SAP Applications 12 SP5 - x64 Gen2 latest | S/4HANA 1909 SPS 03 | | SUSE | SUSE Linux Enterprise Server (SLES) for SAP Applications 12 SP4 - x64 Gen2 latest | S/4HANA 1909 SPS 03 | -- You can use `latest` if you want to use the latest image and not a specific older version. If the *latest* image version is newly released in marketplace and has an unforeseen issue, the deployment may fail. If you are using Portal for deployment, we recommend choosing a different image *sku train* (e.g. 12-SP4 instead of 15-SP3) till the issues are resolved. However, if deploying via API/CLI, you can provide any other *image version* which is available. To view and select the available image versions from a publisher, use below commands
+- You can use `latest` if you want to use the latest image and not a specific older version. If the *latest* image version is newly released in the marketplace and has an unforeseen issue, the deployment might fail. If you're using the portal for deployment, we recommend choosing a different image *sku train* (for example, 12-SP4 instead of 15-SP3) until the issues are resolved. However, if you're deploying via API/CLI, you can provide any other *image version* that is available. To view and select the available image versions from a publisher, use the commands below:
```Powershell
The following operating system (OS) software versions are compatible with these
$offerName="RHEL-SAP-HA" $skuName="82sapha-gen2" ```
+- Azure Center for SAP solutions now supports deployment of SAP system VMs with custom OS images along with the Azure Marketplace images. For deployment using custom OS images, follow the steps [here](deploy-s4hana.md#using-a-custom-os-image).
## Create deployment
The following operating system (OS) software versions are compatible with these
1. For **Application subnet** and **Database subnet**, map the IP address ranges as required. It's recommended to use a different subnet for each deployment. The names including AzureFirewallSubnet, AzureFirewallManagementSubnet, AzureBastionSubnet and GatewaySubnet are reserved names within Azure. Please do not use these as the subnet names.
-1. Under **Operating systems**, enter the OS details.
+1. Under **Operating systems**, select the source of the image.
- 1. For **Application OS image**, select the OS image for the application server.
+1. If you're using Azure Marketplace OS images, use these settings:
+
+ 1. For **Application OS image**, select the OS image for the application server.
+
1. For **Database OS image**, select the OS image for the database server.
+ 1. If you're using [custom OS images](deploy-s4hana.md#using-a-custom-os-image), use these settings:
+
+ 1. For **Application OS image**, select the image version from the Azure Compute Gallery.
+
+ 1. For **Database OS image**, select the image version from the Azure Compute Gallery.
1. Under **Administrator account**, enter your administrator account details.
The following operating system (OS) software versions are compatible with these
1. For **SAP Transport Options**, you can choose to **Create a new SAP transport Directory** or **Use an existing SAP transport Directory** or completely skip the creation of transport directory by choosing **Don't include SAP transport directory** option. Currently, only NFS on AFS storage account fileshares is supported.
- 1. If you choose to **Create a new SAP transport Directory**, this will create and mount a new transport fileshare on the SID. By Default, this option will create an NFS on AFS storage account and a transport fileshare in the resource group where SAP system will be deployed. However, you can choose to create this storage account in a different resource group by providing the resource group name in **Transport Resource Group**. You can also provide a custom name for the storage account to be created under **Storage account name** section. Leaving the **Storage account name** will create the storage account with service default name **""SIDname""nfs""random characters""** in the chosen transport resource group. Creating a new transport directory will create a ZRS based replication for zonal deployments and LRS based replication for non-zonal deployments. If your region doesn't support ZRS replication deploying a zonal VIS will lead to a failure. In such cases, you can deploy a transport fileshare outside ACSS with ZRS replication and then create a zonal VIS where you select **Use an existing SAP transport Directory** to mount the pre-created fileshare.
+ 1. If you choose to **Create a new SAP transport Directory**, this will create and mount a new transport fileshare on the SID. By Default, this option will create an NFS on AFS storage account and a transport fileshare in the resource group where SAP system will be deployed. However, you can choose to create this storage account in a different resource group by providing the resource group name in **Transport Resource Group**. You can also provide a custom name for the storage account to be created under **Storage account name** section. Leaving the **Storage account name** will create the storage account with service default name **""SIDname""nfs""random characters""** in the chosen transport resource group. Creating a new transport directory will create a ZRS based replication for zonal deployments and LRS based replication for non-zonal deployments. If your region doesn't support ZRS replication deploying a zonal VIS will lead to a failure. In such cases, you can deploy a transport fileshare outside Azure Center for SAP Solutions with ZRS replication and then create a zonal VIS where you select **Use an existing SAP transport Directory** to mount the pre-created fileshare.
1. If you choose to **Use an existing SAP transport Directory**, select the pre - existing NFS fileshare under **File share name** option. The existing transport fileshare will be only mounted on this SID. The selected fileshare shall be in the same region as that of SAP system being created. Currently, file shares existing in a different region cannot be selected. Provide the associated private endpoint of the storage account where the selected fileshare exists under **Private Endpoint** option.
The following operating system (OS) software versions are compatible with these
1. For **Managed identity source**, choose if you want the service to create a new managed identity or you can instead use an existing identity. If you wish to allow the service to create a managed identity, acknowledge the checkbox which asks for your consent for the identity to be created and the contributor role access to be added for all resource groups.
- 1. For **Managed identity name**, enter a name for a new identity you want to create or select an existing identity from the drop down menu. If you are selecting an existing identity, it should have **Contributor** role access on the Subscription or on Resource Groups related to this SAP system you are trying to deploy. That is, it requires Contributor access to the SAP application Resource Group, Virtual Network Resource Group and Resource Group which has the existing SSHKEY. If you wish to later install the SAP system using ACSS, we also recommend giving the **Storage Blob Data Reader and Reader** and **Data Access roles** on the Storage Account which has the SAP software media.
+ 1. For **Managed identity name**, enter a name for a new identity you want to create or select an existing identity from the drop down menu. If you are selecting an existing identity, it should have **Contributor** role access on the Subscription or on Resource Groups related to this SAP system you are trying to deploy. That is, it requires Contributor access to the SAP application Resource Group, Virtual Network Resource Group and Resource Group which has the existing SSHKEY. If you wish to later install the SAP system using Azure Center for SAP Solutions, we also recommend giving the **Storage Blob Data Reader and Reader** and **Data Access roles** on the Storage Account which has the SAP software media.
1. Select **Next: Virtual machines**.
The following operating system (OS) software versions are compatible with these
1. Optionally, click and drag resources or containers to move them around visually.
- 1. Click on **Reset** to reset the visualization to its default state. That is, revert any changes you may have made to the position of resources or containers.
+ 1. Click on **Reset** to reset the visualization to its default state. That is, revert any changes you might have made to the position of resources or containers.
1. Click on **Scale to fit** to reset the visualization to its default zoom level.
The following operating system (OS) software versions are compatible with these
1. Wait for the infrastructure deployment to complete. Numerous resources are deployed and configured. This process takes approximately 7 minutes.
+## Use a custom OS image
+
+You can use custom images for deployment in Azure Center for SAP Solutions from the [Azure Compute Gallery](../../virtual-machines/capture-image-portal.md#capture-a-vm-in-the-portal).
+
+### Custom image prerequisites
+
+- Make sure that you've met the [general SAP deployment prerequisites](#prerequisites), [downloaded the SAP media](../../sap/center-sap-solutions/get-sap-installation-media.md#prerequisites), and [installed the SAP software](../../sap/center-sap-solutions/install-software.md#install-sap-software).
+- Before you use an image from Azure Marketplace for customization, check the [list of supported OS image versions](#deployment-types) in Azure Center for SAP Solutions. Bring your own image (BYOI) is supported only for OS versions that Azure Center for SAP Solutions supports. Make sure that Azure Center for SAP Solutions supports the image, or the deployment fails with the following error:
+ *The resource ID provided consists of an OS image which is not supported in ACSS. Please ensure that the OS image version is supported in ACSS for a successful installation.*
+
+- Refer to SAP installation documentation to ensure the operating system prerequisites are met for the deployment to be successful.
+
+- Check that the user-assigned managed identity has the **Reader role** on the gallery of the custom OS image. Otherwise, the deployment will fail.
+
+- [Create and upload a VM to a gallery in Azure Compute Gallery](../../virtual-machines/capture-image-portal.md#capture-a-vm-in-the-portal)
+
+- Before beginning the deployment, make sure the image is available in Azure Compute Gallery.
+
+- Verify that the image is in the same subscription as the deployment.
+
+- Check that the image VM is of the **Standard** security type.
++
+### Deploy using a custom OS image
+
+- Select the **Use a custom image** option during deployment. Choose which image to use for the application and database OS.
+
+- Azure Center for SAP Solutions validates that the base operating system version of the custom OS image is listed in the Azure Center for SAP Solutions supportability matrix. If the version is unsupported, the deployment fails. To fix this problem, delete the VIS and infrastructure resources from the resource group, and then deploy again with a supported image.
++
+- Make sure the image version that you're using is [compatible with the SAP software version](#deployment-types).
+
## Confirm deployment To confirm a deployment is successful:
service-bus-messaging Monitor Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/monitor-service-bus.md
The diagnostic logging information is stored in tables named **AzureDiagnostics*
The metrics and logs you can collect are discussed in the following sections. ## Analyzing metrics
-You can analyze metrics for Azure Service Bus, along with metrics from other Azure services, by selecting **Metrics** from the **Azure Monitor** section on the home page for your Service Bus namespace. See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool. For a list of the platform metrics collected, see [Monitoring Azure Service Bus data reference metrics](monitor-service-bus-reference.md#metrics).
+You can analyze metrics for Azure Service Bus, along with metrics from other Azure services, by selecting **Metrics** from the **Azure Monitor** section on the home page for your Service Bus namespace. See [Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md) for details on using this tool. For a list of the platform metrics collected, see [Monitoring Azure Service Bus data reference metrics](monitor-service-bus-reference.md#metrics).
![Metrics Explorer with Service Bus namespace selected](./media/monitor-service-bus/metrics.png)
service-bus-messaging Service Bus Messaging Exceptions Latest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-messaging-exceptions-latest.md
+
+ Title: Azure Service Bus - messaging exceptions | Microsoft Docs
+description: This article provides a list of Azure Service Bus messaging exceptions and suggested actions to take when the exception occurs.
+ Last updated : 02/17/2023++
+# Service Bus messaging exceptions (.NET)
+The Service Bus .NET client library surfaces exceptions when a service operation or a client encounters an error. When possible, standard .NET exception types are used to convey error information. For scenarios specific to Service Bus, a [ServiceBusException](/dotnet/api/azure.messaging.servicebus.servicebusexception) is thrown.
+
+The Service Bus clients automatically retry exceptions that are considered transient, following the configured [retry options](/dotnet/api/azure.messaging.servicebus.servicebusretryoptions). When an exception is surfaced to the application, either all retries were applied unsuccessfully, or the exception was considered nontransient. More information on configuring retry options can be found in the [Customizing the retry options](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/servicebus/Azure.Messaging.ServiceBus/samples/Sample13_AdvancedConfiguration.md#customizing-the-retry-options) sample.
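+
+For example, here's a minimal sketch of tuning the retry behavior when creating the client. The connection string and queue name are placeholders, and the values shown are illustrative rather than recommended:
+
+```csharp
+using System;
+using Azure.Messaging.ServiceBus;
+
+// Configure how transient failures are retried before a ServiceBusException surfaces.
+ServiceBusClient client = new ServiceBusClient(
+    "<connection-string>",
+    new ServiceBusClientOptions
+    {
+        RetryOptions = new ServiceBusRetryOptions
+        {
+            Mode = ServiceBusRetryMode.Exponential, // exponential back-off between attempts
+            MaxRetries = 5,                         // retries applied after the initial attempt fails
+            Delay = TimeSpan.FromSeconds(0.8),      // initial delay between attempts
+            MaxDelay = TimeSpan.FromSeconds(30),    // upper bound on the delay between attempts
+            TryTimeout = TimeSpan.FromSeconds(60)   // timeout applied to each individual attempt
+        }
+    });
+
+ServiceBusSender sender = client.CreateSender("<queue-name>");
+await sender.SendMessageAsync(new ServiceBusMessage("Hello"));
+```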
+
+## ServiceBusException
+
+The exception includes some contextual information to help you understand the context of the error and its relative severity.
+
+- `EntityPath` : Identifies the Service Bus entity from which the exception occurred, if available.
+- `IsTransient` : Indicates whether or not the exception is considered recoverable. In the case where it was deemed transient, the appropriate retry policy has already been applied and all retries were unsuccessful.
+- `Message` : Provides a description of the error that occurred and relevant context.
+- `StackTrace` : Represents the immediate frames of the call stack, highlighting the location in the code where the error occurred.
+- `InnerException` : When an exception was the result of a service operation, it's often a `Microsoft.Azure.Amqp.AmqpException` instance describing the error, following the [OASIS Advanced Message Queuing Protocol (AMQP) 1.0 spec](https://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-types-v1.0-os.html).
+- `Reason` : Provides a set of well-known reasons for the failure that help to categorize and clarify the root cause. These values are intended to allow for applying exception filtering and other logic where inspecting the text of an exception message wouldn't be ideal. Some key failure reasons are:
+ - `ServiceTimeout`: Indicates that the Service Bus service didn't respond to an operation request within the expected amount of time. It might be due to a transient network issue or service problem. The Service Bus service might or might not have successfully completed the request; the status isn't known. In the context of the next available session, this exception indicates that there were no unlocked sessions available in the entity. These errors are transient errors that are automatically retried.
+ - `QuotaExceeded`: Typically indicates that there are too many active receive operations for a single entity. In order to avoid this error, reduce the number of potential concurrent receives. You can use batch receives to attempt to receive multiple messages per receive request. For more information, see [Service Bus quotas](service-bus-quotas.md).
+ - `MessageSizeExceeded`: Indicates that the max message size has been exceeded. The message size includes the body of the message, and any associated metadata. The best approach for resolving this error is to reduce the number of messages being sent in a batch or the size of the body included in the message. Because size limits are subject to change, see [Service Bus quotas](service-bus-quotas.md) for specifics.
+ - `MessageLockLost`: Indicates that the lock on the message is lost. Callers should attempt to receive and process the message again. This exception only applies to non-session entities. This error occurs if processing takes longer than the lock duration and the message lock isn't renewed. This error can also occur when the link is detached due to a transient network issue or when the link is idle for 10 minutes.
+
+ The Service Bus service uses the AMQP protocol, which is stateful. Due to the nature of the protocol, if the link that connects the client and the service is detached after a message is received, but before the message is settled, the message isn't able to be settled on reconnecting the link. Links can be detached due to a short-term transient network failure, a network outage, or due to the service enforced 10-minute idle timeout. The reconnection of the link happens automatically as a part of any operation that requires the link, that is, settling or receiving messages. Because of this behavior, you might encounter `ServiceBusException` with `Reason` of `MessageLockLost` or `SessionLockLost` even if the lock expiration time hasn't yet passed.
+ - `SessionLockLost`: Indicates that the lock on the session has expired. Callers should attempt to accept the session again. This exception applies only to session-enabled entities. This error occurs if processing takes longer than the lock duration and the session lock isn't renewed. This error can also occur when the link is detached due to a transient network issue or when the link is idle for 10 minutes. The Service Bus service uses the AMQP protocol, which is stateful. Due to the nature of the protocol, if the link that connects the client and the service is detached after a message is received, but before the message is settled, the message isn't able to be settled on reconnecting the link. Links can be detached due to a short-term transient network failure, a network outage, or due to the service enforced 10-minute idle timeout. The reconnection of the link happens automatically as a part of any operation that requires the link, that is, settling or receiving messages. Because of this behavior, you might encounter `ServiceBusException` with `Reason` of `MessageLockLost` or `SessionLockLost` even if the lock expiration time hasn't yet passed.
+ - `MessageNotFound`: This error occurs when attempting to receive a deferred message by sequence number for a message that either doesn't exist in the entity, or is currently locked.
+ - `SessionCannotBeLocked`: Indicates that the requested session can't be locked because the lock is already held elsewhere. Once the lock expires, the session can be accepted.
+ - `GeneralError`: Indicates that the Service Bus service encountered an error while processing the request. This error is often caused by service upgrades and restarts. These errors are transient errors that are automatically retried.
+ - `ServiceCommunicationProblem`: Indicates that there was an error communicating with the service. The issue might stem from a transient network problem, or a service problem. These errors are transient errors that will be automatically retried.
+ - `ServiceBusy`: Indicates that a request was throttled by the service. The details describing what can cause a request to be throttled and how to avoid being throttled can be found [here](service-bus-throttling.md). Throttled requests are retried, but the client library automatically applies a 10 second back off before attempting any more requests using the same `ServiceBusClient` (or any subtypes created from that client). It can cause issues if your entity's lock duration is less than 10 seconds, as message or session locks are likely to be lost for any unsettled messages or locked sessions. Because throttled requests are generally retried successfully, the exceptions generated would be logged as warnings rather than errors - the specific warning-level event source event is 43 (RunOperation encountered an exception and retry occurs.).
+ - `MessagingEntityAlreadyExists`: Indicates that an entity with the same name already exists under the same namespace.
+ - `MessagingEntityDisabled`: The messaging entity is disabled. Enable the entity again by using the Azure portal.
+ - `MessagingEntityNotFound`: The Service Bus service can't find the specified Service Bus resource.
+
+## Handle ServiceBusException - example
+Here's an example of how to handle a `ServiceBusException` and filter by the `Reason`.
+
+```csharp
+try
+{
+ // Receive messages using the receiver client
+}
+catch (ServiceBusException ex) when
+ (ex.Reason == ServiceBusFailureReason.ServiceTimeout)
+{
+ // Take action based on a service timeout
+}
+```
+
+### Other common exceptions
+
+- `ArgumentException`: The client throws this exception (or one deriving from `ArgumentException`) when a parameter provided while interacting with the client is invalid. Information about the specific parameter and the nature of the problem can be found in the `Message`.
+- `InvalidOperationException`: Occurs when attempting to perform an operation that isn't valid for its current configuration. This exception typically occurs when a client wasn't configured to support the operation. Often, it can be mitigated by adjusting the options passed to the client.
+- `NotSupportedException`: Occurs when a requested operation is valid for the client, but not supported by its current state. Information about the scenario can be found in the `Message`.
+- `AggregateException`: Occurs when an operation might encounter multiple exceptions and surfaces them as a single failure. This exception is most commonly encountered when starting or stopping the Service Bus processor or Service Bus session processor (see the sketch after this list).
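+
+As a minimal sketch (the connection string and queue name are placeholders), the processor's error handler is a convenient place to inspect a surfaced `ServiceBusException` and its `Reason`:
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Azure.Messaging.ServiceBus;
+
+ServiceBusClient client = new ServiceBusClient("<connection-string>");
+ServiceBusProcessor processor = client.CreateProcessor("<queue-name>");
+
+processor.ProcessMessageAsync += async args =>
+{
+    // Process the message, then settle it.
+    await args.CompleteMessageAsync(args.Message);
+};
+
+processor.ProcessErrorAsync += args =>
+{
+    // Service Bus specific failures carry a Reason and an IsTransient flag.
+    if (args.Exception is ServiceBusException sbException)
+    {
+        Console.WriteLine(
+            $"Entity: {args.EntityPath}, Reason: {sbException.Reason}, Transient: {sbException.IsTransient}");
+    }
+    else
+    {
+        Console.WriteLine($"Non-Service Bus error: {args.Exception.Message}");
+    }
+    return Task.CompletedTask;
+};
+
+await processor.StartProcessingAsync();
+```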
+
+## Reason: QuotaExceeded
+
+[ServiceBusException](/dotnet/api/azure.messaging.servicebus.servicebusexception) with reason set to `QuotaExceeded` indicates that a quota for a specific entity has been exceeded.
+
+> [!NOTE]
+> For Service Bus quotas, see [Quotas](service-bus-quotas.md).
+
+### Queues and topics
+
+For queues and topics, it's often the size of the queue. The error message property contains further details, as in the following example:
+
+```output
+Message: The maximum entity size has been reached or exceeded for Topic: 'xxx-xxx-xxx'.
+ Size of entity in bytes:1073742326, Max entity size in bytes:
+1073741824..TrackingId:xxxxxxxxxxxxxxxxxxxxxxxxxx, TimeStamp:3/15/2013 7:50:18 AM
+```
+
+The message states that the topic exceeded its size limit, in this case 1 GB (the default size limit).
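+
+To see how close an entity is to its size quota before this error occurs, you can read its runtime properties with the administration client. Here's a minimal sketch; the connection string and queue name are placeholders:
+
+```csharp
+using System;
+using Azure.Messaging.ServiceBus.Administration;
+
+ServiceBusAdministrationClient adminClient =
+    new ServiceBusAdministrationClient("<connection-string>");
+
+// Runtime properties report the current size and per-state message counts.
+QueueRuntimeProperties runtime = await adminClient.GetQueueRuntimePropertiesAsync("<queue-name>");
+QueueProperties properties = await adminClient.GetQueueAsync("<queue-name>");
+
+Console.WriteLine($"Current size: {runtime.SizeInBytes} bytes");
+Console.WriteLine($"Configured maximum: {properties.MaxSizeInMegabytes} MB");
+Console.WriteLine($"Dead-lettered messages: {runtime.DeadLetterMessageCount}");
+Console.WriteLine($"Active messages: {runtime.ActiveMessageCount}");
+```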
+
+### Namespaces
+
+For namespaces, a QuotaExceeded exception can indicate that an application has exceeded the maximum number of connections to a namespace. For example:
+
+```output
+<tracking-id-guid>_G12 >
+System.ServiceModel.FaultException`1[System.ServiceModel.ExceptionDetail]:
+ConnectionsQuotaExceeded for namespace xxx.
+```
+
+### Common causes
+
+There are two common causes for this error: the dead-letter queue, and nonfunctioning message receivers.
+
+- **[Dead-letter queue](service-bus-dead-letter-queues.md)**
+ A reader is failing to complete messages, and the messages are returned to the queue/topic when the lock expires. This situation can happen if the reader encounters an exception that prevents it from completing the message. After a message has been read 10 times, it moves to the dead-letter queue by default. This behavior is controlled by the [MaxDeliveryCount](/dotnet/api/azure.messaging.servicebus.administration.queueproperties.maxdeliverycount) property, which has a default value of 10. As messages pile up in the dead-letter queue, they take up space.
+
+ To resolve the issue, read and complete the messages from the dead-letter queue, as you would from any other queue (see the sketch after this list).
+- **Receiver stopped**. A receiver has stopped receiving messages from a queue or subscription. The way to identify the issue is to look at the [active message count](/dotnet/api/azure.messaging.servicebus.administration.queueruntimeproperties.activemessagecount). If the active message count is high or growing, then the messages aren't being read as fast as they're being written.
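+
+Here's a minimal sketch of draining a queue's dead-letter subqueue; the connection string and queue name are placeholders, and the same approach applies to a subscription's dead-letter subqueue:
+
+```csharp
+using Azure.Messaging.ServiceBus;
+
+ServiceBusClient client = new ServiceBusClient("<connection-string>");
+
+// Point a receiver at the dead-letter subqueue of the entity.
+ServiceBusReceiver dlqReceiver = client.CreateReceiver(
+    "<queue-name>",
+    new ServiceBusReceiverOptions { SubQueue = SubQueue.DeadLetter });
+
+foreach (ServiceBusReceivedMessage message in await dlqReceiver.ReceiveMessagesAsync(maxMessages: 10))
+{
+    // Inspect DeadLetterReason / DeadLetterErrorDescription, repair or archive the payload as needed,
+    // then complete the message to remove it from the dead-letter subqueue and free up space.
+    await dlqReceiver.CompleteMessageAsync(message);
+}
+```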
++
+## Reason: MessageLockLost
++
+### Cause
+
+[ServiceBusException](/dotnet/api/azure.messaging.servicebus.servicebusexception) with reason set to `MessageLockLost` indicates that a message is received using the [PeekLock](message-transfers-locks-settlement.md#peeklock) Receive mode and the lock held by the client expires on the service side.
+
+The lock on a message might expire due to various reasons:
+
+ * The lock timer has expired before it was renewed by the client application.
+ * The client application acquired the lock, saved it to a persistent store and then restarted. Once it restarted, the client application looked at the inflight messages and tried to complete the messages.
+
+You might also receive this exception in the following scenarios:
+
+* Service Update
+* OS update
+* Changing properties on the entity (queue, topic, subscription) while holding the lock.
+
+### Resolution
+
+When a client application receives **MessageLockLostException**, it can no longer process the message. The client application might consider logging the exception for analysis, but the client *must* dispose of the message.
+
+Since the lock on the message has expired, the message goes back to the queue (or subscription) and can be processed by the next client application that calls receive.
+
+If the **MaxDeliveryCount** has been exceeded, the message might be moved to the **DeadLetterQueue**.
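+
+If processing can legitimately run longer than the configured lock duration, one common mitigation is to renew the lock before it expires (the processor can also do this automatically through its `MaxAutoLockRenewalDuration` option). Here's a minimal sketch; the connection string and queue name are placeholders:
+
+```csharp
+using Azure.Messaging.ServiceBus;
+
+ServiceBusClient client = new ServiceBusClient("<connection-string>");
+ServiceBusReceiver receiver = client.CreateReceiver("<queue-name>");
+
+ServiceBusReceivedMessage message = await receiver.ReceiveMessageAsync();
+if (message != null)
+{
+    // Extend the lock before it expires when long-running work is expected.
+    await receiver.RenewMessageLockAsync(message);
+
+    // ... long-running processing ...
+
+    await receiver.CompleteMessageAsync(message);
+}
+```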
+
+## Reason: SessionLockLost
+
+### Cause
+
+[ServiceBusException](/dotnet/api/azure.messaging.servicebus.servicebusexception) with reason set to `SessionLockLost` is thrown when a session is accepted and the lock held by the client expires on the service side.
+
+The lock on a session might expire due to various reasons:
+
+ * The lock timer has expired before it was renewed by the client application.
+ * The client application acquired the lock, saved it to a persistent store and then restarted. Once it restarted, the client application looked at the inflight sessions and tried to process the messages in those sessions.
+
+You might also receive this exception in the following scenarios:
+
+* Service Update
+* OS update
+* Changing properties on the entity (queue, topic, subscription) while holding the lock.
+
+### Resolution
+
+When a client application receives **SessionLockLostException**, it can no longer process the messages on the session. The client application might consider logging the exception for analysis, but the client *must* dispose of the message.
+
+Since the lock on the session has expired, the session goes back to the queue (or subscription) and can be locked by the next client application that accepts the session. Because the session lock is held by a single client application at any given time, in-order processing is guaranteed.
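+
+Here's a minimal sketch of accepting the next available session again after the lock is lost; the connection string and queue name are placeholders:
+
+```csharp
+using Azure.Messaging.ServiceBus;
+
+ServiceBusClient client = new ServiceBusClient("<connection-string>");
+
+// Accept the next available session; the lost session can be picked up again
+// by whichever client accepts it next.
+ServiceBusSessionReceiver sessionReceiver = await client.AcceptNextSessionAsync("<queue-name>");
+
+ServiceBusReceivedMessage message = await sessionReceiver.ReceiveMessageAsync();
+if (message != null)
+{
+    await sessionReceiver.CompleteMessageAsync(message);
+}
+```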
+
+## TimeoutException
+
+A [TimeoutException](/dotnet/api/system.timeoutexception) indicates that a user-initiated operation is taking longer than the operation timeout.
+
+You should check the value of the [ServicePointManager.DefaultConnectionLimit](/dotnet/api/system.net.servicepointmanager.defaultconnectionlimit) property, as hitting this limit can also cause a [TimeoutException](/dotnet/api/system.timeoutexception).
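+
+As a sketch, you can raise the connection limit and lengthen the per-attempt timeout; the values and connection string shown here are illustrative placeholders, not recommendations:
+
+```csharp
+using System;
+using System.Net;
+using Azure.Messaging.ServiceBus;
+
+// Raise the outbound connection limit; the default is low for some .NET Framework applications.
+ServicePointManager.DefaultConnectionLimit = 50;
+
+// Lengthen the per-attempt timeout if operations legitimately take longer.
+ServiceBusClient client = new ServiceBusClient(
+    "<connection-string>",
+    new ServiceBusClientOptions
+    {
+        RetryOptions = new ServiceBusRetryOptions { TryTimeout = TimeSpan.FromSeconds(120) }
+    });
+```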
+
+Timeouts are expected to happen during or between maintenance operations such as Service Bus service updates or OS updates on resources that run the service. During OS updates, entities are moved around and nodes are updated or rebooted, which can cause timeouts. For service level agreement (SLA) details for the Azure Service Bus service, see [SLA for Service Bus](https://azure.microsoft.com/support/legal/sla/service-bus/).
++
+## SocketException
+
+### Cause
+
+A **SocketException** is thrown in the following cases:
+
+ * When a connection attempt fails because the host didn't properly respond after a specified time (TCP error code 10060).
+ * An established connection failed because connected host has failed to respond.
+ * There was an error processing the message, or the timeout was exceeded by the remote host.
+ * Underlying network resource issue.
+
+### Resolution
+
+The **SocketException** errors indicate that the VM hosting the applications is unable to convert the name `<mynamespace>.servicebus.windows.net` to the corresponding IP address.
+
+Check to see if the following command succeeds in mapping to an IP address.
+
+```powershell
+PS C:\> nslookup <mynamespace>.servicebus.windows.net
+```
+
+The command should return output like the following:
+
+```output
+Name: <cloudappinstance>.cloudapp.net
+Address: XX.XX.XXX.240
+Aliases: <mynamespace>.servicebus.windows.net
+```
+
+If the above name **does not resolve** to an IP address and the namespace alias, check with the network administrator to investigate further. Name resolution is done through a DNS server, typically a resource in the customer network. If the DNS resolution is done by Azure DNS, contact Azure support.
+
+If name resolution **works as expected**, check whether connections to Azure Service Bus are allowed. For more information, see [Connectivity, certificate, or timeout issues](service-bus-troubleshooting-guide.md#connectivity-certificate-or-timeout-issues).
++
+## Next steps
+
+For the complete Service Bus .NET API reference, see the [Azure .NET API reference](/dotnet/api/overview/azure/service-bus).
+For troubleshooting tips, see the [Troubleshooting guide](service-bus-troubleshooting-guide.md).
service-bus-messaging Service Bus Messaging Exceptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-messaging-exceptions.md
+
+ Title: Azure Service Bus - messaging exceptions (deprecated) | Microsoft Docs
+description: This article provides a list of Azure Service Bus messaging exceptions from the deprecated packages and suggested actions to take when the exception occurs.
+ Last updated : 02/17/2023++
+# Service Bus messaging exceptions (deprecated)
+
+This article lists the .NET exceptions generated by .NET Framework APIs.
++
+## Exception categories
+
+The messaging APIs generate exceptions that can fall into the following categories, along with the associated action you can take to try to fix them. The meaning and causes of an exception can vary depending on the type of messaging entity:
+
+1. User coding error ([System.ArgumentException](/dotnet/api/system.argumentexception), [System.InvalidOperationException](/dotnet/api/system.invalidoperationexception), [System.OperationCanceledException](/dotnet/api/system.operationcanceledexception), [System.Runtime.Serialization.SerializationException](/dotnet/api/system.runtime.serialization.serializationexception)). General action: try to fix the code before proceeding.
+2. Setup/configuration error ([Microsoft.ServiceBus.Messaging.MessagingEntityNotFoundException](/dotnet/api/microsoft.azure.servicebus.messagingentitynotfoundexception), [System.UnauthorizedAccessException](/dotnet/api/system.unauthorizedaccessexception)). General action: review your configuration and change if necessary.
+3. Transient exceptions ([Microsoft.ServiceBus.Messaging.MessagingException](/dotnet/api/microsoft.servicebus.messaging.messagingexception), [Microsoft.ServiceBus.Messaging.ServerBusyException](/dotnet/api/microsoft.azure.servicebus.serverbusyexception), [Microsoft.ServiceBus.Messaging.MessagingCommunicationException](/dotnet/api/microsoft.servicebus.messaging.messagingcommunicationexception)). General action: retry the operation or notify users. The `RetryPolicy` class in the client SDK can be configured to handle retries automatically. For more information, see [Retry guidance](/azure/architecture/best-practices/retry-service-specific#service-bus).
+4. Other exceptions ([System.Transactions.TransactionException](/dotnet/api/system.transactions.transactionexception), [System.TimeoutException](/dotnet/api/system.timeoutexception), [Microsoft.ServiceBus.Messaging.MessageLockLostException](/dotnet/api/microsoft.azure.servicebus.messagelocklostexception), [Microsoft.ServiceBus.Messaging.SessionLockLostException](/dotnet/api/microsoft.azure.servicebus.sessionlocklostexception)). General action: specific to the exception type; refer to the table in the following section:
+
+> [!IMPORTANT]
+> - Azure Service Bus doesn't retry an operation in case of an exception when the operation is in a transaction scope.
+> - For retry guidance specific to Azure Service Bus, see [Retry guidance for Service Bus](/azure/architecture/best-practices/retry-service-specific#service-bus).
++
+## Exception types
+
+The following table lists messaging exception types, and their causes, and notes suggested action you can take.
+
+| **Exception Type** | **Description/Cause/Examples** | **Suggested Action** | **Note on automatic/immediate retry** |
+| | | | |
+| [TimeoutException](/dotnet/api/system.timeoutexception) |The server didn't respond to the requested operation within the specified time, which is controlled by [OperationTimeout](/dotnet/api/microsoft.servicebus.messaging.messagingfactorysettings). The server might have completed the requested operation. It can happen because of network or other infrastructure delays. |Check the system state for consistency and retry if necessary. See [Timeout exceptions](#timeoutexception). |Retry might help in some cases; add retry logic to code. |
+| [InvalidOperationException](/dotnet/api/system.invalidoperationexception) |The requested user operation isn't allowed within the server or service. See the exception message for details. For example, [Complete()](/dotnet/api/microsoft.azure.servicebus.queueclient.completeasync) generates this exception if the message was received in [ReceiveAndDelete](/dotnet/api/microsoft.azure.servicebus.receivemode) mode. |Check the code and the documentation. Make sure the requested operation is valid. |Retry doesn't help. |
+| [OperationCanceledException](/dotnet/api/system.operationcanceledexception) |An attempt is made to invoke an operation on an object that has already been closed, aborted, or disposed. In rare cases, the ambient transaction is already disposed. |Check the code and make sure it doesn't invoke operations on a disposed object. |Retry doesn't help. |
+| [UnauthorizedAccessException](/dotnet/api/system.unauthorizedaccessexception) |The [TokenProvider](/dotnet/api/microsoft.servicebus.tokenprovider) object couldn't acquire a token, the token is invalid, or the token doesn't contain the claims required to do the operation. |Make sure the token provider is created with the correct values. Check the configuration of the Access Control Service. |Retry might help in some cases; add retry logic to code. |
+| [ArgumentException](/dotnet/api/system.argumentexception)<br /> [ArgumentNullException](/dotnet/api/system.argumentnullexception)<br />[ArgumentOutOfRangeException](/dotnet/api/system.argumentoutofrangeexception) |One or more arguments supplied to the method are invalid.<br /> The URI supplied to [NamespaceManager](/dotnet/api/microsoft.servicebus.namespacemanager) or [Create](/dotnet/api/microsoft.servicebus.messaging.messagingfactory) contains path segments.<br /> The URI scheme supplied to [NamespaceManager](/dotnet/api/microsoft.servicebus.namespacemanager) or [Create](/dotnet/api/microsoft.servicebus.messaging.messagingfactory) is invalid. <br />The property value is larger than 32 KB. |Check the calling code and make sure the arguments are correct. |Retry doesn't help. |
+| [MessagingEntityNotFoundException](/dotnet/api/microsoft.azure.servicebus.messagingentitynotfoundexception) |Entity associated with the operation doesn't exist or it has been deleted. |Make sure the entity exists. |Retry doesn't help. |
+| [MessageNotFoundException](/dotnet/api/microsoft.servicebus.messaging.messagenotfoundexception) |Attempt to receive a message with a particular sequence number. This message isn't found. |Make sure the message hasn't been received already. Check the deadletter queue to see if the message has been deadlettered. |Retry doesn't help. |
+| [MessagingCommunicationException](/dotnet/api/microsoft.servicebus.messaging.messagingcommunicationexception) |Client isn't able to establish a connection to Service Bus. |Make sure the supplied host name is correct and the host is reachable. <p>If your code runs in an environment with a firewall/proxy, ensure that the traffic to the Service Bus domain/IP address and ports isn't blocked.</p>|Retry might help if there are intermittent connectivity issues. |
+| [ServerBusyException](/dotnet/api/microsoft.azure.servicebus.serverbusyexception) |Service isn't able to process the request at this time. |Client can wait for a period of time, then retry the operation. |Client might retry after certain interval. If a retry results in a different exception, check retry behavior of that exception. |
+| [MessagingException](/dotnet/api/microsoft.servicebus.messaging.messagingexception) |Generic messaging exception that might be thrown in the following cases:<p>An attempt is made to create a [QueueClient](/dotnet/api/microsoft.azure.servicebus.queueclient) using a name or path that belongs to a different entity type (for example, a topic).</p><p>An attempt is made to send a message larger than 256 KB. </p>The server or service encountered an error during processing of the request. See the exception message for details. It's usually a transient exception.</p><p>The request was terminated because the entity is being throttled. Error code: 50001, 50002, 50008. </p> | Check the code and ensure that only serializable objects are used for the message body (or use a custom serializer). <p>Check the documentation for the supported value types of the properties and only use supported types.</p><p> Check the [IsTransient](/dotnet/api/microsoft.servicebus.messaging.messagingexception) property. If it's **true**, you can retry the operation. </p>| If the exception is due to throttling, wait for a few seconds and retry the operation again. Retry behavior is undefined and might not help in other scenarios.|
+| [MessagingEntityAlreadyExistsException](/dotnet/api/microsoft.servicebus.messaging.messagingentityalreadyexistsexception) |Attempt to create an entity with a name that is already used by another entity in that service namespace. |Delete the existing entity or choose a different name for the entity to be created. |Retry doesn't help. |
+| [QuotaExceededException](/dotnet/api/microsoft.azure.servicebus.quotaexceededexception) |The messaging entity has reached its maximum allowable size, or the maximum number of connections to a namespace has been exceeded. |Create space in the entity by receiving messages from the entity or its subqueues. See [QuotaExceededException](#quotaexceededexception). |Retry might help if messages have been removed in the meantime. |
+| [RuleActionException](/dotnet/api/microsoft.servicebus.messaging.ruleactionexception) |Service Bus returns this exception if you attempt to create an invalid rule action. Service Bus attaches this exception to a deadlettered message if an error occurs while processing the rule action for that message. |Check the rule action for correctness. |Retry doesn't help. |
+| [FilterException](/dotnet/api/microsoft.servicebus.messaging.filterexception) |Service Bus returns this exception if you attempt to create an invalid filter. Service Bus attaches this exception to a deadlettered message if an error occurred while processing the filter for that message. |Check the filter for correctness. |Retry doesn't help. |
+| [SessionCannotBeLockedException](/dotnet/api/microsoft.servicebus.messaging.sessioncannotbelockedexception) |Attempt to accept a session with a specific session ID, but the session is currently locked by another client. |Make sure the session is unlocked by other clients. |Retry might help if the session has been released in the interim. |
+| [TransactionSizeExceededException](/dotnet/api/microsoft.servicebus.messaging.transactionsizeexceededexception) |Too many operations are part of the transaction. |Reduce the number of operations that are part of this transaction. |Retry doesn't help. |
+| [MessagingEntityDisabledException](/dotnet/api/microsoft.azure.servicebus.messagingentitydisabledexception) |Request for a runtime operation on a disabled entity. |Activate the entity. |Retry might help if the entity has been activated in the interim. |
+| [NoMatchingSubscriptionException](/dotnet/api/microsoft.servicebus.messaging.nomatchingsubscriptionexception) |Service Bus returns this exception if you send a message to a topic that has prefiltering enabled and none of the filters match. |Make sure at least one filter matches. |Retry doesn't help. |
+| [MessageSizeExceededException](/dotnet/api/microsoft.servicebus.messaging.messagesizeexceededexception) |A message payload exceeds the 256-KB limit. The 256-KB limit is the total message size, which can include system properties and any .NET overhead. |Reduce the size of the message payload, then retry the operation. |Retry doesn't help. |
+| [TransactionException](/dotnet/api/system.transactions.transactionexception) |The ambient transaction (`Transaction.Current`) is invalid. It might have been completed or aborted. Inner exception might provide additional information. | |Retry doesn't help. |
+| [TransactionInDoubtException](/dotnet/api/system.transactions.transactionindoubtexception) |An operation is attempted on a transaction that is in doubt, or an attempt is made to commit the transaction and the transaction becomes in doubt. |Your application must handle this exception (as a special case), as the transaction might have already been committed. |- |
+
+## QuotaExceededException
+
+[QuotaExceededException](/dotnet/api/microsoft.azure.servicebus.quotaexceededexception) indicates that a quota for a specific entity has been exceeded.
+
+> [!NOTE]
+> For Service Bus quotas, see [Quotas](service-bus-quotas.md).
+
+### Queues and topics
+
+For queues and topics, it's often the size of the queue. The error message property contains further details, as in the following example:
+
+```output
+Microsoft.ServiceBus.Messaging.QuotaExceededException
+Message: The maximum entity size has been reached or exceeded for Topic: 'xxx-xxx-xxx'.
+ Size of entity in bytes:1073742326, Max entity size in bytes:
+1073741824..TrackingId:xxxxxxxxxxxxxxxxxxxxxxxxxx, TimeStamp:3/15/2013 7:50:18 AM
+```
+
+The message states that the topic exceeded its size limit, in this case 1 GB (the default size limit).
+
+### Namespaces
+
+For namespaces, [QuotaExceededException](/dotnet/api/microsoft.azure.servicebus.quotaexceededexception) can indicate that an application has exceeded the maximum number of connections to a namespace. For example:
+
+```output
+Microsoft.ServiceBus.Messaging.QuotaExceededException: ConnectionsQuotaExceeded for namespace xxx.
+<tracking-id-guid>_G12 >
+System.ServiceModel.FaultException`1[System.ServiceModel.ExceptionDetail]:
+ConnectionsQuotaExceeded for namespace xxx.
+```
+
+### Common causes
+
+There are two common causes for this error: the dead-letter queue, and nonfunctioning message receivers.
+
+1. **[Dead-letter queue](service-bus-dead-letter-queues.md)**
+ A reader is failing to complete messages and the messages are returned to the queue/topic when the lock expires. It can happen if the reader encounters an exception that prevents it from calling [BrokeredMessage.Complete](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.complete). After a message has been read 10 times, it moves to the dead-letter queue by default. This behavior is controlled by the [QueueDescription.MaxDeliveryCount](/dotnet/api/microsoft.servicebus.messaging.queuedescription.maxdeliverycount) property and has a default value of 10. As messages pile up in the dead letter queue, they take up space.
+
+ To resolve the issue, read and complete the messages from the dead-letter queue, as you would from any other queue. You can use the [FormatDeadLetterPath](/dotnet/api/microsoft.azure.servicebus.entitynamehelper.formatdeadletterpath) method to help format the dead-letter queue path.
+2. **Receiver stopped**. A receiver has stopped receiving messages from a queue or subscription. The way to identify this is to look at the [QueueDescription.MessageCountDetails](/dotnet/api/microsoft.servicebus.messaging.messagecountdetails) property, which shows the full breakdown of the messages. If the [ActiveMessageCount](/dotnet/api/microsoft.servicebus.messaging.messagecountdetails.activemessagecount) property is high or growing, then the messages aren't being read as fast as they're being written.
+
+## TimeoutException
+
+A [TimeoutException](/dotnet/api/system.timeoutexception) indicates that a user-initiated operation is taking longer than the operation timeout.
+
+You should check the value of the [ServicePointManager.DefaultConnectionLimit](/dotnet/api/system.net.servicepointmanager.defaultconnectionlimit) property, as hitting this limit can also cause a [TimeoutException](/dotnet/api/system.timeoutexception).
+
+Timeouts are expected to happen during or between maintenance operations such as Service Bus service updates or OS updates on resources that run the service. During OS updates, entities are moved around and nodes are updated or rebooted, which can cause timeouts. For service level agreement (SLA) details for the Azure Service Bus service, see [SLA for Service Bus](https://azure.microsoft.com/support/legal/sla/service-bus/).
+
+### Queues and topics
+
+For queues and topics, the timeout is specified either in the [MessagingFactorySettings.OperationTimeout](/dotnet/api/microsoft.servicebus.messaging.messagingfactorysettings) property, as part of the connection string, or through [ServiceBusConnectionStringBuilder](/dotnet/api/microsoft.azure.servicebus.servicebusconnectionstringbuilder). The error message itself might vary, but it always contains the timeout value specified for the current operation.
+
+## MessageLockLostException
+
+### Cause
+
+The **MessageLockLostException** is thrown when a message is received using the [PeekLock](message-transfers-locks-settlement.md#peeklock) Receive mode and the lock held by the client expires on the service side.
+
+The lock on a message might expire due to various reasons:
+
+ * The lock timer has expired before it was renewed by the client application.
+ * The client application acquired the lock, saved it to a persistent store and then restarted. Once it restarted, the client application looked at the inflight messages and tried to complete these.
+
+You might also receive this exception in the following scenarios:
+
+* Service Update
+* OS update
+* Changing properties on the entity (queue, topic, subscription) while holding the lock.
+
+### Resolution
+
+When a client application receives **MessageLockLostException**, it can no longer process the message. The client application might consider logging the exception for analysis, but the client *must* dispose of the message.
+
+Since the lock on the message has expired, the message goes back to the queue (or subscription) and can be processed by the next client application that calls receive.
+
+If the MaxDeliveryCount has been exceeded, the message might be moved to the **DeadLetterQueue**.
+
+## SessionLockLostException
+
+### Cause
+
+The **SessionLockLostException** is thrown when a session is accepted and the lock held by the client expires on the service side.
+
+The lock on a session might expire due to various reasons:
+
+ * The lock timer has expired before it was renewed by the client application.
+ * The client application acquired the lock, saved it to a persistent store and then restarted. Once it restarted, the client application looked at the inflight sessions and tried to process the messages in those sessions.
+
+You might also receive this exception in the following scenarios:
+
+* Service Update
+* OS update
+* Changing properties on the entity (queue, topic, subscription) while holding the lock.
+
+### Resolution
+
+When a client application receives **SessionLockLostException**, it can no longer process the messages on the session. The client application might consider logging the exception for analysis, but the client *must* dispose of the message.
+
+Since the lock on the session has expired, the session goes back to the queue (or subscription) and can be locked by the next client application that accepts the session. Because the session lock is held by a single client application at any given time, in-order processing is guaranteed.
+
+## SocketException
+
+### Cause
+
+A **SocketException** is thrown in the following cases:
+
+ * When a connection attempt fails because the host didn't properly respond after a specified time (TCP error code 10060).
+ * An established connection failed because connected host has failed to respond.
+ * There was an error processing the message, or the timeout was exceeded by the remote host.
+ * Underlying network resource issue.
+
+### Resolution
+
+The **SocketException** errors indicate that the VM hosting the applications is unable to convert the name `<mynamespace>.servicebus.windows.net` to the corresponding IP address.
+
+Check to see if the following command succeeds in mapping to an IP address.
+
+```powershell
+PS C:\> nslookup <mynamespace>.servicebus.windows.net
+```
+
+The command should return output like the following:
+
+```output
+Name: <cloudappinstance>.cloudapp.net
+Address: XX.XX.XXX.240
+Aliases: <mynamespace>.servicebus.windows.net
+```
+
+If the above name **does not resolve** to an IP address and the namespace alias, check with the network administrator to investigate further. Name resolution is done through a DNS server, typically a resource in the customer network. If the DNS resolution is done by Azure DNS, contact Azure support.
+
+If name resolution **works as expected**, check whether connections to Azure Service Bus are allowed. For more information, see [Connectivity, certificate, or timeout issues](service-bus-troubleshooting-guide.md#connectivity-certificate-or-timeout-issues).
+
+## MessagingException
+
+### Cause
+
+**MessagingException** is a generic exception that might be thrown for various reasons. Some of the reasons are:
+
+ * An attempt is made to create a **QueueClient** on a **Topic** or a **Subscription**.
+ * The size of the message sent is greater than the limit for the given tier. Read more about the Service Bus [quotas and limits](service-bus-quotas.md).
+ * Specific data plane request (send, receive, complete, abandon) was terminated due to throttling.
+ * Transient issues caused due to service upgrades and restarts.
+
+> [!NOTE]
+> The above list of exceptions is not exhaustive.
+
+### Resolution
+
+The resolution steps depend on what caused the **MessagingException** to be thrown.
+
+ * For **transient issues** (where **IsTransient** is set to **true**) or for **throttling issues**, retrying the operation might resolve the error. The default retry policy in the SDK can be used for this purpose (see the sketch after this list).
+ * For other issues, the details in the exception indicate the problem, and you can deduce the resolution steps from them.
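+
+For example, here's a minimal sketch with the deprecated WindowsAzure.ServiceBus (`Microsoft.ServiceBus.Messaging`) library that retries once after a transient failure; the connection string and queue name are placeholders:
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Microsoft.ServiceBus.Messaging;
+
+QueueClient queueClient = QueueClient.CreateFromConnectionString("<connection-string>", "<queue-name>");
+
+try
+{
+    await queueClient.SendAsync(new BrokeredMessage("payload"));
+}
+catch (MessagingException ex) when (ex.IsTransient)
+{
+    // Transient failure or throttling: back off briefly, then retry with a fresh message.
+    await Task.Delay(TimeSpan.FromSeconds(10));
+    await queueClient.SendAsync(new BrokeredMessage("payload"));
+}
+```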
+
+## StorageQuotaExceededException
+
+### Cause
+
+The **StorageQuotaExceededException** is generated when the total size of entities in a premium namespace exceeds the limit of 1 TB per [messaging unit](service-bus-premium-messaging.md).
+
+### Resolution
+
+- Increase the number of messaging units assigned to the premium namespace.
+- If you're already using maximum allowed messaging units for a namespace, create a separate namespace.
+
+## Next steps
+
+For the complete Service Bus .NET API reference, see the [Azure .NET API reference](/dotnet/api/overview/azure/service-bus).
+For troubleshooting tips, see the [Troubleshooting guide](service-bus-troubleshooting-guide.md).
service-connector How To Build Connections With Iac Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-build-connections-with-iac-tools.md
+
+ Title: Create connections with IaC tools
+description: Learn how to translate your infrastructure to an IaC template
+++ Last updated : 10/20/2023++
+# How to translate your infrastructure to an IaC template
+
+Service Connector helps users connect their compute services to target backing services in just a few clicks or commands. When moving from a getting-started to a production stage, users also need to make the transition from using manual configurations to using Infrastructure as Code (IaC) templates in their CI/CD pipelines. In this guide, we show how to translate your connected Azure services to IaC templates.
+
+## Prerequisites
+
+- This guide assumes that you're aware of the [Service Connector IaC limitations](./known-limitations.md).
+
+## Solution overview
+
+Translating the infrastructure to IaC templates usually involves two major parts: the logic to provision source and target services, and the logic to build connections. To implement the logic to provision source and target services, there are two options:
+
+* Authoring the template from scratch.
+* Exporting the template from Azure and polishing it.
+
+To implement the logics to build connections, there are also two options:
+
+* Using Service Connector in the template.
+* Using template logic to configure source and target services directly.
+
+Combinations of these different options can produce different solutions. Due to [IaC limitations](./known-limitations.md) in Service Connector, we recommend that you implement the following solutions in the order presented below. To apply these solutions, you must understand the IaC tools and the template authoring grammar.
+
+| Solution | Provision source and target | Build connection | Applicable scenario | Pros | Cons |
+| :: | :-: | :-: | :-: | - | - |
+| 1 | Authoring from scratch | Use Service Connector | Has liveness check on the cloud resources before allowing live traffics | - Template is simple and readable<br />- Service Connector brings extra values | - Cost to check cloud resources liveness |
+| 2 | Authoring from scratch | Configure source and target services directly in template | No liveness check on the cloud resources | - Template is simple and readable | - Service Connector features aren't available |
+| 3 | Export and polish | Use Service Connector | Has liveness check on the cloud resources before allowing live traffics | - Resources are exactly the same as in the cloud<br />- Service Connector brings extra values | - Cost to check cloud resources liveness<br />- Supports only ARM templates<br />- Efforts required to understand and polish the template |
+| 4 | Export and polish | Configure source and target services directly in template | No liveness check on the cloud resources | - Resources are exactly same as on the cloud | - Support only ARM template<br />- Efforts to understand and polish the template<br />- Service Connector features aren't available |
+
+## Authoring templates
+
+The following sections show how to create a web app and a storage account and connect them with a system-assigned identity using Bicep. It shows how to do this both using Service Connector and using template logics.
+
+### Provision source and target services
+
+**Authoring from scratch**
+
+Authoring the template from scratch is the preferred and recommended way to provision source and target services, as it's easy to get started and makes the template simple and readable. The following example uses a minimal set of parameters to create a web app and a storage account.
+
+```bicep
+// This template creates a webapp and a storage account.
+// In order to make it more readable, we use only the minimal set of parameters to create the resources.
+
+param location string = resourceGroup().location
+// App Service plan parameters
+param planName string = 'plan_${uniqueString(resourceGroup().id)}'
+param kind string = 'linux'
+param reserved bool = true
+param sku string = 'B1'
+// Webapp parameters
+param webAppName string = 'webapp-${uniqueString(resourceGroup().id)}'
+param linuxFxVersion string = 'PYTHON|3.8'
+param identityType string = 'SystemAssigned'
+param appSettings array = []
+// Storage account parameters
+param storageAccountName string = 'account${uniqueString(resourceGroup().id)}'
++
+// Create an app service plan
+resource appServicePlan 'Microsoft.Web/serverfarms@2022-09-01' = {
+ name: planName
+ location: location
+ kind: kind
+ sku: {
+ name: sku
+ }
+ properties: {
+ reserved: reserved
+ }
+}
++
+// Create a web app
+resource appService 'Microsoft.Web/sites@2022-09-01' = {
+ name: webAppName
+ location: location
+ properties: {
+ serverFarmId: appServicePlan.id
+ siteConfig: {
+ linuxFxVersion: linuxFxVersion
+ appSettings: appSettings
+ }
+ }
+ identity: {
+ type: identityType
+ }
+}
++
+// Create a storage account
+resource storageAccount 'Microsoft.Storage/storageAccounts@2023-01-01' = {
+ name: storageAccountName
+ location: location
+ sku: {
+ name: 'Standard_LRS'
+ }
+ kind: 'StorageV2'
+}
+```
+
+**Export and polish**
+
+If the resources you're provisioning are exactly the same as the ones you have in the cloud, exporting the template from Azure might be another option. The two premises of this approach are that the resources already exist in Azure and that you're using ARM templates for your IaC. The `Export template` button is usually at the bottom of the sidebar in the Azure portal. The exported ARM template reflects the resource's current state, including the settings configured by Service Connector. You usually need to know about the resource properties to polish the exported template.
++
+### Build connection logics
+
+**Using Service Connector**
+
+Creating connections between the source and target service using Service Connector is the preferred and recommended way if the [Service Connector IaC limitations](./known-limitations.md) don't matter for your scenario. Service Connector makes the template simpler and also provides additional elements, such as connection health validation, which you won't have if you build connections through template logic directly.
+
+```bicep
+// The template builds a connection between a webapp and a storage account
+// with a system-assigned identity using Service Connector
+
+param webAppName string = 'webapp-${uniqueString(resourceGroup().id)}'
+param storageAccountName string = 'account${uniqueString(resourceGroup().id)}'
+param connectorName string = 'connector_${uniqueString(resourceGroup().id)}'
+
+// Get an existing webapp
+resource webApp 'Microsoft.Web/sites@2022-09-01' existing = {
+ name: webAppName
+}
+
+// Get an existing storage account
+resource storageAccount 'Microsoft.Storage/storageAccounts@2023-01-01' existing = {
+ name: storageAccountName
+}
+
+// Create a Service Connector resource for the webapp
+// to connect to a storage account using system identity
+resource serviceConnector 'Microsoft.ServiceLinker/linkers@2022-05-01' = {
+ name: connectorName
+ scope: webApp
+ properties: {
+ clientType: 'python'
+ targetService: {
+ type: 'AzureResource'
+ id: storageAccount.id
+ }
+ authInfo: {
+ authType: 'systemAssignedIdentity'
+ }
+ }
+}
+```
+
+For the formats of properties and values needed when creating a Service Connector resource, check [how to provide correct parameters](./how-to-provide-correct-parameters.md). You can also preview and download an ARM template for reference when creating a Service Connector resource in the Azure portal.
++
+**Using template logics**
+
+For the scenarios where the Service Connector [IaC limitation](./known-limitations.md) matters, consider building connections using the template logic directly. The following template is an example showing how to connect a storage account to a web app using a system-assigned identity.
+
+```bicep
+// The template builds a connection between a webapp and a storage account
+// with a system-assigned identity without using Service Connector
+
+param webAppName string = 'webapp-${uniqueString(resourceGroup().id)}'
+param storageAccountName string = 'account${uniqueString(resourceGroup().id)}'
+param storageBlobDataContributorRole string = 'ba92f5b4-2d11-453d-a403-e96b0029c9fe'
+
+// Get an existing webapp
+resource webApp 'Microsoft.Web/sites@2022-09-01' existing = {
+ name: webAppName
+}
+
+// Get an existing storage account
+resource storageAccount 'Microsoft.Storage/storageAccounts@2023-01-01' existing = {
+ name: storageAccountName
+}
+
+// Operation: Enable system-assigned identity on the source service
+// No action needed as this is enabled when creating the webapp
+
+// Operation: Configure the target service's endpoint on the source service's app settings
+resource appSettings 'Microsoft.Web/sites/config@2022-09-01' = {
+ name: 'appsettings'
+ parent: webApp
+ properties: {
+ AZURE_STORAGEBLOB_RESOURCEENDPOINT: storageAccount.properties.primaryEndpoints.blob
+ }
+}
+
+// Operation: Configure firewall on the target service to allow the source service's outbound IPs
+// No action needed as storage account allows all IPs by default
+
+// Operation: Create role assignment for the source service's identity on the target service
+resource roleAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
+ scope: storageAccount
+ name: guid(resourceGroup().id, storageBlobDataContributorRole)
+ properties: {
+ roleDefinitionId: resourceId('Microsoft.Authorization/roleDefinitions', storageBlobDataContributorRole)
+ principalId: webApp.identity.principalId
+ }
+}
+```
+
+When building connections using template logic directly, it's crucial to understand what Service Connector does for each authentication type, because the template logic is equivalent to the Service Connector backend operations. The following table shows the operation details that you need to translate into template logic for each authentication type.
+
+| Auth type | Service Connector operations |
+| -- | - |
+| Secret / Connection string | - Configure the target service's connection string on the source service's app settings<br />- Configure firewall on the target service to allow the source service's outbound IPs |
| System-assigned managed identity | - Configure the target service's endpoint on the source service's app settings<br />- Configure firewall on the target service to allow the source service's outbound IPs<br />- Enable system-assigned identity on the source service<br />- Create role assignment for the source service's identity on the target service |
| User-assigned managed identity | - Configure the target service's endpoint on the source service's app settings<br />- Configure firewall on the target service to allow the source service's outbound IPs<br />- Bind user-assigned identity to the source service<br />- Create role assignment for the user-assigned identity on the target service |
+| Service principal | - Configure the target service's endpoint on the source service's app settings<br />- Configure the service principal's appId and secret on the source service's app settings<br />- Configure firewall on the target service to allow the source service's outbound IPs<br />- Create role assignment for the service principal on the target service |
service-connector Known Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/known-limitations.md
Last updated 03/02/2023
- # Known limitations of Service Connector In this article, learn about Service Connector's existing limitations and how to mitigate them.
Service Connector has been designed to bring the benefits of easy, secure, and c
Unfortunately, there are some limitations with IaC support because Service Connector modifies infrastructure on users' behalf. In this scenario, users begin by using Azure Resource Manager (ARM), Bicep, Terraform, or other IaC templates to create resources. Afterwards, they use Service Connector to set up resource connections. During this step, Service Connector modifies resource configurations on behalf of the user. If the user reruns their IaC template at a later time, modifications made by Service Connector disappear because they weren't reflected in the original IaC templates. For example, Azure Container Apps deployed with ARM templates usually have managed identity (MI) disabled by default; Service Connector enables MI when setting up connections on users' behalf. If users rerun the same ARM templates without updating the MI settings, the redeployed container apps have MI disabled again.
-If you run into any issues when using Service Connector, [file an issue with us](https://github.com/Azure/ServiceConnector/issues/new).
+If you run into any issues when using Service Connector, [file an issue with us](https://github.com/Azure/ServiceConnector/issues/new).
## Solutions
-We suggest the following solutions:
--- Use Service Connector in Azure portal or Azure CLI to set up connections between compute and backing services, export ARM template from these existing resources via Azure portal or Azure CLI. Then use the exported ARM template as basis to craft automation ARM templates. This way, the exported ARM templates contain configurations added by Service Connector, reapplying the ARM templates doesn't affect existing application. -- If CI/CD pipelines contain ARM templates of source compute or backing services, suggested flow is: reapplying the ARM templates, adding sanity check or smoke tests to make sure the application is up and running, then allowing live traffic to the application. The flow adds verification step before allowing live traffic.
+We suggest the following solutions:
+- Reference [how to build connections with IaC tools](how-to-build-connections-with-iac-tools.md) to build your infrastructure or translate your existing infrastructure to IaC templates.
+- If your CI/CD pipelines contain templates of source compute or backing services, the suggested flow is: reapply the templates, add sanity checks or smoke tests to make sure the application is up and running, and then allow live traffic to the application. The flow adds a verification step before allowing live traffic.
- When automating Azure Container App code deployments with Service Connector, we recommend the use of [multiple revision mode](../container-apps/revisions.md#revision-modes) to avoid routing traffic to a temporarily nonfunctional app before Service connector can reapply connections.--- The order in which automation operations are performed matters greatly. Ensure your connection endpoints are there before the connection itself is created. Ideally, create the backing service, then the compute service, and then the connection between the two. So Service Connector can configure both the compute service and the backing service appropriately. -
+- The order in which automation operations are performed matters greatly. Ensure your connection endpoints are there before the connection itself is created. Ideally, create the backing service, then the compute service, and then the connection between the two. This way, Service Connector can configure both the compute service and the backing service appropriately.
## Next steps
spring-apps Concept Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/concept-metrics.md
The following table applies to the Tanzu Spring Cloud Gateway in Enterprise plan
## Next steps * [Quickstart: Monitoring Azure Spring Apps apps with logs, metrics, and tracing](./quickstart-logs-metrics-tracing.md)
-* [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md)
+* [Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md)
* [Analyze logs and metrics with diagnostics settings](./diagnostic-services.md) * [Tutorial: Monitor Spring app resources using alerts and action groups](./tutorial-alerts-action-groups.md) * [Quotas and Service Plans for Azure Spring Apps](./quotas.md)
storage Anonymous Read Access Prevent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/anonymous-read-access-prevent.md
To understand how disallowing anonymous access may affect client applications, w
### Monitor anonymous requests with Metrics Explorer
-To track anonymous requests to a storage account, use Azure Metrics Explorer in the Azure portal. For more information about Metrics Explorer, see [Getting started with Azure Metrics Explorer](../../azure-monitor/essentials/metrics-getting-started.md).
+To track anonymous requests to a storage account, use Azure Metrics Explorer in the Azure portal. For more information about Metrics Explorer, see [Analyze metrics with Azure Monitor metrics explorer](../../azure-monitor/essentials/analyze-metrics.md).
Follow these steps to create a metric that tracks anonymous requests:
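The portal steps follow in the source article. As a programmatic alternative, this is a minimal Az PowerShell sketch, assuming a placeholder storage account resource ID; on the storage **Transactions** metric, the `Authentication` dimension value `Anonymous` identifies anonymous requests.

```PowerShell
# Placeholder storage account resource ID.
$accountId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<account-name>"

# Total transactions over the last 7 days, filtered to anonymous requests.
Get-AzMetric -ResourceId $accountId -MetricName "Transactions" `
  -StartTime (Get-Date).AddDays(-7) -EndTime (Get-Date) `
  -TimeGrain 01:00:00 -AggregationType Total `
  -MetricFilter "Authentication eq 'Anonymous'"
```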
storage Monitor Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/monitor-blob-storage.md
For a list of all Azure Monitor support metrics, which includes Azure Blob Stora
### [Azure portal](#tab/azure-portal)
-You can analyze metrics for Azure Storage with metrics from other Azure services by using Metrics Explorer. Open Metrics Explorer by choosing **Metrics** from the **Azure Monitor** menu. For details on using this tool, see [Getting started with Azure Metrics Explorer](../../azure-monitor/essentials/metrics-getting-started.md).
+You can analyze metrics for Azure Storage with metrics from other Azure services by using Metrics Explorer. Open Metrics Explorer by choosing **Metrics** from the **Azure Monitor** menu. For details on using this tool, see [Analyze metrics with Azure Monitor metrics explorer](../../azure-monitor/essentials/analyze-metrics.md).
This example shows how to view **Transactions** at the account level.
Get started with any of these guides.
| [Monitor, diagnose, and troubleshoot your Azure Storage](/training/modules/monitor-diagnose-and-troubleshoot-azure-storage/) | Troubleshoot storage account issues (contains step-by-step guidance). | | [Monitor storage with Azure Monitor Storage insights](../common/storage-insights-overview.md) | A unified view of storage performance, capacity, and availability | | [Best practices for monitoring Azure Blob Storage](blob-storage-monitoring-scenarios.md) | Guidance for common monitoring and troubleshooting scenarios. |
-| [Getting started with Azure Metrics Explorer](../../azure-monitor/essentials/metrics-getting-started.md) | A tour of Metrics Explorer.
+| [Analyze metrics with Azure Monitor metrics explorer](../../azure-monitor/essentials/analyze-metrics.md) | A tour of Metrics Explorer.
| [Overview of Log Analytics in Azure Monitor](../../azure-monitor/logs/log-analytics-overview.md) | A tour of Log Analytics. | | [Azure Monitor Metrics overview](../../azure-monitor/essentials/data-platform-metrics.md) | The basics of metrics and metric dimensions | | [Azure Monitor Logs overview](../../azure-monitor/logs/data-platform-logs.md)| The basics of logs and how to collect and analyze them |
storage Secure File Transfer Protocol Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support.md
You can use many different SFTP clients to securely connect and then transfer fi
| Key exchange |ecdh-sha2-nistp384<br>ecdh-sha2-nistp256<br>diffie-hellman-group14-sha256<br>diffie-hellman-group16-sha512<br>diffie-hellman-group-exchange-sha256| | Ciphers/encryption |aes128-gcm@openssh.com<br>aes256-gcm@openssh.com<br>aes128-ctr<br>aes192-ctr<br>aes256-ctr| | Integrity/MAC |hmac-sha2-256<br>hmac-sha2-512<br>hmac-sha2-256-etm@openssh.com<br>hmac-sha2-512-etm@openssh.com|
-| Public key |ssh-rsa <sup>2</sup><br>rsa-sha2-256<br>rsa-sha2-512<br>ecdsa-sha2-nistp256<br>ecdsa-sha2-nistp384|
+| Public key |ssh-rsa <sup>2</sup><br>rsa-sha2-256<br>rsa-sha2-512<br>ecdsa-sha2-nistp256<br>ecdsa-sha2-nistp384<br>ecdsa-sha2-nistp521|
<sup>1</sup> Host keys are published [here](secure-file-transfer-protocol-host-keys.md). <sup>2</sup> RSA keys must be minimum 2048 bits in length.
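With `ecdsa-sha2-nistp521` now accepted, a P-521 key pair can be used for authentication. This is a minimal sketch using the OpenSSH client tools (it runs from PowerShell or any shell with OpenSSH installed); the account, container, and local user names are placeholders, and the exact connection username format depends on how the local user's home directory is configured.

```PowerShell
# Generate an ECDSA key pair on the NIST P-521 curve.
ssh-keygen -t ecdsa -b 521 -f $HOME/.ssh/azure_sftp_ecdsa

# Connect to the storage account's SFTP endpoint with that key.
sftp -i $HOME/.ssh/azure_sftp_ecdsa mystorageaccount.mycontainer.myuser@mystorageaccount.blob.core.windows.net
```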
storage Versioning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/versioning-overview.md
When blob soft delete is enabled, all soft-deleted entities are billed at full c
## Feature support +
+Versioning is not supported for blobs that are uploaded by using [Data Lake Storage Gen2](/rest/api/storageservices/data-lake-storage-gen2) APIs.
## See also
When blob soft delete is enabled, all soft-deleted entities are billed at full c
- [Creating a snapshot of a blob](/rest/api/storageservices/creating-a-snapshot-of-a-blob) - [Soft delete for blobs](./soft-delete-blob-overview.md) - [Soft delete for containers](soft-delete-container-overview.md)+
storage Shared Key Authorization Prevent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/shared-key-authorization-prevent.md
A SAS may be authorized with either Shared Key or Microsoft Entra ID. For more i
### Determine the number and frequency of requests authorized with Shared Key
-To track how requests to a storage account are being authorized, use Azure Metrics Explorer in the Azure portal. For more information about Metrics Explorer, see [Getting started with Azure Metrics Explorer](../../azure-monitor/essentials/metrics-getting-started.md).
+To track how requests to a storage account are being authorized, use Azure Metrics Explorer in the Azure portal. For more information about Metrics Explorer, see [Analyze metrics with Azure Monitor metrics explorer](../../azure-monitor/essentials/analyze-metrics.md).
Follow these steps to create a metric that tracks requests made with Shared Key or SAS:
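The portal steps follow in the source article. As a programmatic alternative, this is a minimal Az PowerShell sketch, assuming a placeholder storage account resource ID and the `Authentication` dimension values `AccountKey` and `SAS` to identify Shared Key and SAS traffic.

```PowerShell
# Placeholder storage account resource ID.
$accountId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<account-name>"

# Total transactions over the last 7 days that were authorized with the account key or a SAS token.
Get-AzMetric -ResourceId $accountId -MetricName "Transactions" `
  -StartTime (Get-Date).AddDays(-7) -EndTime (Get-Date) `
  -TimeGrain 01:00:00 -AggregationType Total `
  -MetricFilter "Authentication eq 'AccountKey' or Authentication eq 'SAS'"
```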
storage File Sync Monitor Cloud Tiering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-monitor-cloud-tiering.md
To be more specific on what you want your graphs to display, consider using **Ad
For details on the different types of metrics for Azure File Sync and how to use them, see [Monitor Azure File Sync](file-sync-monitoring.md).
-For details on how to use metrics, see [Getting started with Azure Metrics Explorer.](../../azure-monitor/essentials/metrics-getting-started.md).
+For details on how to use metrics, see [Analyze metrics with Azure Monitor metrics explorer](../../azure-monitor/essentials/analyze-metrics.md).
To change your cloud tiering policy, see [Choose cloud tiering policies](file-sync-choose-cloud-tiering-policies.md).
storage Analyze Files Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/analyze-files-metrics.md
For a list of all Azure Monitor support metrics, which includes Azure Files, see
### [Azure portal](#tab/azure-portal)
-You can analyze metrics for Azure Storage with metrics from other Azure services by using Metrics Explorer. Open Metrics Explorer by choosing **Metrics** from the **Azure Monitor** menu. For details on using this tool, see [Getting started with Azure Metrics Explorer](../../azure-monitor/essentials/metrics-getting-started.md).
+You can analyze metrics for Azure Storage with metrics from other Azure services by using Metrics Explorer. Open Metrics Explorer by choosing **Metrics** from the **Azure Monitor** menu. For details on using this tool, see [Analyze metrics with Azure Monitor metrics explorer](../../azure-monitor/essentials/analyze-metrics.md).
For metrics that support dimensions, you can filter the metric with the desired dimension value. For a complete list of the dimensions that Azure Storage supports, see [Metrics dimensions](storage-files-monitoring-reference.md#metrics-dimensions). Metrics for Azure Files are in these namespaces:
storage Storage Files Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-faq.md
Title: Frequently asked questions (FAQ) for Azure Files
description: Get answers to Azure Files frequently asked questions. You can mount Azure file shares concurrently on cloud or on-premises Windows, Linux, or macOS deployments. Previously updated : 07/12/2023 Last updated : 10/30/2023
* <a id="ad-sid-to-upn"></a> **Is it possible to view the userPrincipalName (UPN) of a file/directory owner in File Explorer instead of the security identifier (SID)?**
- Windows Explorer calls an RPC API directly to the server (Azure Files) to translate the SID to a UPN. This API is something that Azure Files does not support. In File Explorer, the SID of a file/directory owner is displayed instead of the UPN for files and directories hosted on Azure Files. However, you can use the following PowerShell command to view all items in a directory and their owner, including UPN:
+ File Explorer calls an RPC API directly to the server (Azure Files) to translate the SID to a UPN. Azure Files doesn't support this API, so in File Explorer, the SID of a file/directory owner is displayed instead of the UPN for files and directories hosted on Azure Files. However, you can use the following PowerShell command to view all items in a directory and their owner, including UPN:
```PowerShell Get-ChildItem <Path> | Get-ACL | Select Path, Owner
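# If you only have the SID (for example, as shown in File Explorer) and the client is
# joined to, or has line of sight to, the same AD DS domain, the SID can usually be
# translated locally. This is a minimal sketch using .NET types; the SID value is a
# placeholder, and the result is the DOMAIN\username account name rather than the UPN itself.
$sid = New-Object System.Security.Principal.SecurityIdentifier("S-1-5-21-0000000000-0000000000-0000000000-1234")
$sid.Translate([System.Security.Principal.NTAccount]).Value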
storage Storage Files Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-overview.md
description: Learn how to migrate to Azure file shares and find your migration g
Previously updated : 07/13/2023 Last updated : 10/30/2023
Unlike object storage in Azure blobs, an Azure file share can natively store fil
> [!IMPORTANT] > If you're migrating on-premises file servers to Azure File Sync, set the ACLs for the root directory of the file share **before** copying a large number of files, as changes to permissions for root ACLs can take up to a day to propagate if done after a large file migration.
-A user of Active Directory, which is their on-premises domain controller, can natively access an Azure file share. So can a user of Microsoft Entra Domain Services. Each uses their current identity to get access based on share permissions and on file and folder ACLs. This behavior is similar to a user connecting to an on-premises file share.
+Users whose identities come from Active Directory Domain Services (AD DS), their on-premises domain controller, can natively access an Azure file share. So can users of Microsoft Entra Domain Services. Each uses their current identity to get access based on share permissions and on file and folder ACLs. This behavior is similar to a user connecting to an on-premises file share.
The alternative data stream is the primary aspect of file fidelity that currently can't be stored on a file in an Azure file share. It's preserved on-premises when Azure File Sync is used.
A scenario without a link doesn't yet have a published migration guide. Check th
### File-copy tools
-There are several file-copy tools available from Microsoft and others. To select the right tool for your migration scenario, you must consider these fundamental questions:
+There are several file-copy tools available from Microsoft and others. To select the right tool for your migration scenario, consider these fundamental questions:
* Does the tool support the source and target locations for your file copy?
There are several file-copy tools available from Microsoft and others. To select
The first time you run the tool, it copies the bulk of the data. This initial run might last a while, often longer than the time you can afford to keep the data source offline for your business processes.
- By mirroring a source to a target (as with **robocopy /MIR**), you can run the tool again on that same source and target. The run is much faster because it needs to transport only source changes that occur after the previous run. Rerunning a copy tool this way can reduce downtime significantly.
+ By mirroring a source to a target (as with **robocopy /MIR**), you can run the tool again on that same source and target. This second run is much faster because it needs to transport only source changes that happened after the previous run. Rerunning a copy tool this way can reduce downtime significantly.
The following table classifies Microsoft tools and their current suitability for Azure file shares:
The following table classifies Microsoft tools and their current suitability for
| :-: | :-- | :- | :- | |![Yes, recommended](media/storage-files-migration-overview/circle-green-checkmark.png)| RoboCopy | Supported. Azure file shares can be mounted as network drives. | Full fidelity.* | |![Yes, recommended](media/storage-files-migration-overview/circle-green-checkmark.png)| Azure File Sync | Natively integrated into Azure file shares. | Full fidelity.* |
-|![Yes, recommended](medi) | Supported. | Full fidelity.* |
+|![Yes, recommended](medi) | Supported. | Full fidelity.* |
|![Yes, recommended](media/storage-files-migration-overview/circle-green-checkmark.png)| Storage Migration Service | Indirectly supported. Azure file shares can be mounted as network drives on SMS target servers. | Full fidelity.* |
-|![Yes, recommended](medi) to load files onto the device)| Supported. </br>(Data Box Disks does not support large file shares) | Data Box and Data Box Heavy fully support metadata. </br>Data Box Disks does not preserve file metadata. |
+|![Yes, recommended](medi) to load files onto the device)| Supported. </br>(Data Box Disks doesn't support large file shares) | Data Box and Data Box Heavy fully support metadata. </br>Data Box Disks does not preserve file metadata. |
|![Not fully recommended](medi) | |![Not fully recommended](media/storage-files-migration-overview/triangle-yellow-exclamation.png)| Azure Storage Explorer </br>latest version | Supported but not recommended. | Loses most file fidelity, like ACLs. Supports timestamps. | |![Not recommended](media/storage-files-migration-overview/circle-red-x.png)| Azure Data Factory | Supported. | Doesn't copy metadata. | |||||
-*\* Full fidelity: meets or exceeds Azure file-share capabilities.*
+*\* Full fidelity: meets or exceeds Azure file share capabilities.*
### Migration helper tools This section describes tools that help you plan and run migrations.
-#### RoboCopy from Microsoft Corporation
+#### RoboCopy
-RoboCopy is one of the tools most applicable to file migrations. It comes as part of Windows. The main [RoboCopy documentation](/windows-server/administration/windows-commands/robocopy) is a helpful resource for this tool's many options.
+Included in Windows, RoboCopy is one of the tools most applicable to file migrations. The main [RoboCopy documentation](/windows-server/administration/windows-commands/robocopy) is a helpful resource for this tool's many options.
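As a concrete illustration, a mirroring run into a mounted Azure file share might look like the following sketch; the paths and switches are examples rather than a prescription, so review the RoboCopy documentation for the options that match your fidelity and throughput needs.

```PowerShell
# Mirror the source share into the Azure file share mounted as Z:, preserving
# data, attributes, timestamps, security (ACLs), owner, and auditing information.
robocopy "D:\Shares\Finance" "Z:\Finance" /MIR /COPY:DATSOU /DCOPY:DAT /MT:16 /R:2 /W:1 /LOG+:C:\Temp\robocopy-finance.log
```

Rerunning the same command later copies only the changes made since the previous run, which is what keeps the final cutover window short.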
+
+#### Azure Storage Migration Program
+
+Understanding your data is the first step in selecting the appropriate Azure storage service and migration strategy. Azure Storage Migration Program provides different tools that can analyze your data and storage infrastructure to provide valuable insights. These tools can help you understand the size and type of data, file and folder count, and access patterns. They provide a consolidated view of your data and enable the creation of various customized reports.
+
+This information can help:
+
+- Identify duplicate and redundant data sets
+- Identify colder data that can be moved to less expensive storage
+
+To learn more, see [Comparison Matrix for Azure Storage Migration Program participants](../solution-integration/validated-partners/data-management/azure-file-migration-program-solutions.md).
#### TreeSize from JAM Software GmbH
The tested version of the tool is version 4.4.1. It's compatible with cloud-tier
## Next steps 1. Create a plan for which deployment of Azure file shares (cloud-only or hybrid) you want.
-1. Review the list of available migration guides to find the detailed guide that matches your source and deployment of Azure file shares.
+1. Review the list of available migration guides to find the guide that matches your source and deployment of Azure file shares.
More information about the Azure Files technologies mentioned in this article:
-* [Azure file share overview](storage-files-introduction.md)
-* [Planning for an Azure File Sync deployment](../file-sync/file-sync-planning.md)
-* [Azure File Sync: Cloud tiering](../file-sync/file-sync-cloud-tiering-overview.md)
+- [Azure file share overview](storage-files-introduction.md)
+- [Planning for an Azure File Sync deployment](../file-sync/file-sync-planning.md)
+- [Azure File Sync: Cloud tiering](../file-sync/file-sync-cloud-tiering-overview.md)
storage Monitor Queue Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/monitor-queue-storage.md
For a list of all Azure Monitor support metrics, which includes Azure Queue Stor
### [Azure portal](#tab/azure-portal)
-You can analyze metrics for Azure Storage with metrics from other Azure services by using Azure Metrics Explorer. Open Metrics Explorer by choosing **Metrics** from the **Azure Monitor** menu. For details on using this tool, see [Getting started with Azure Metrics Explorer](../../azure-monitor/essentials/metrics-getting-started.md).
+You can analyze metrics for Azure Storage with metrics from other Azure services by using Azure Metrics Explorer. Open Metrics Explorer by choosing **Metrics** from the **Azure Monitor** menu. For details on using this tool, see [Analyze metrics with Azure Monitor metrics explorer](../../azure-monitor/essentials/analyze-metrics.md).
This example shows how to view **Transactions** at the account level.
Get started with any of these guides.
||| | [Monitor, diagnose, and troubleshoot your Azure Storage](/training/modules/monitor-diagnose-and-troubleshoot-azure-storage/) | Troubleshoot storage account issues (contains step-by-step guidance). | | [Monitor storage with Azure Monitor Storage insights](../common/storage-insights-overview.md) | A unified view of storage performance, capacity, and availability |
-| [Getting started with Azure Metrics Explorer](../../azure-monitor/essentials/metrics-getting-started.md) | A tour of Metrics Explorer.
+| [Analyze metrics with Azure Monitor metrics explorer](../../azure-monitor/essentials/analyze-metrics.md) | A tour of Metrics Explorer.
| [Overview of Log Analytics in Azure Monitor](../../azure-monitor/logs/log-analytics-overview.md) | A tour of Log Analytics. | | [Azure Monitor Metrics overview](../../azure-monitor/essentials/data-platform-metrics.md) | The basics of metrics and metric dimensions | | [Azure Monitor Logs overview](../../azure-monitor/logs/data-platform-logs.md)| The basics of logs and how to collect and analyze them |
storage Monitor Table Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/monitor-table-storage.md
For a list of all Azure Monitor support metrics, which includes Azure Table stor
### [Azure portal](#tab/azure-portal)
-You can analyze metrics for Azure Storage with metrics from other Azure services by using Metrics Explorer. Open Metrics Explorer by choosing **Metrics** from the **Azure Monitor** menu. For details on using this tool, see [Getting started with Azure Metrics Explorer](../../azure-monitor/essentials/metrics-getting-started.md).
+You can analyze metrics for Azure Storage with metrics from other Azure services by using Metrics Explorer. Open Metrics Explorer by choosing **Metrics** from the **Azure Monitor** menu. For details on using this tool, see [Analyze metrics with Azure Monitor metrics explorer](../../azure-monitor/essentials/analyze-metrics.md).
This example shows how to view **Transactions** at the account level.
stream-analytics Monitor Azure Stream Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/monitor-azure-stream-analytics.md
See [Create diagnostic setting to collect platform logs and metrics in Azure](/a
The metrics and logs you can collect are discussed in the following sections. ## Analyzing metrics
-You can analyze metrics for **Azure Stream Analytics** with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](/azure/azure-monitor/essentials/metrics-getting-started) for details on using this tool.
+You can analyze metrics for **Azure Stream Analytics** with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md) for details on using this tool.
For a list of the platform metrics collected for Azure Stream Analytics, see [Monitoring Azure Stream Analytics data reference metrics](monitor-azure-stream-analytics-reference.md#metrics)
synapse-analytics Get Started Analyze Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-analyze-sql-pool.md
A dedicated SQL pool consumes billable resources as long as it's active. You can
GROUP BY passenger_count; SELECT * FROM dbo.PassengerCountStats
- ORDER BY passenger_count;
+ ORDER BY PassengerCount;
``` This query creates a table `dbo.PassengerCountStats` with aggregate data from the `trip_distance` field, then queries the new table. The data shows how the total trip distances and average trip distance relate to the number of passengers.
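Related to the billing note at the start of this section: one way to stop a dedicated SQL pool from accruing compute charges between experiments is to pause it. This is a minimal sketch with the Az.Synapse PowerShell module, assuming placeholder resource names; pausing and resuming can also be done from Synapse Studio or the Azure portal.

```PowerShell
# Pause the dedicated SQL pool to stop compute billing.
Suspend-AzSynapseSqlPool -ResourceGroupName "my-rg" -WorkspaceName "my-workspace" -Name "mySqlPool"

# Resume it when you're ready to run queries again.
Resume-AzSynapseSqlPool -ResourceGroupName "my-rg" -WorkspaceName "my-workspace" -Name "mySqlPool"
```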
synapse-analytics Connectivity Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/connectivity-settings.md
You can use the public network access feature to allow incoming public network c
- When public network access is **enabled**, you can connect to your workspace also from public networks. You can manage this feature both during and after your workspace creation. > [!IMPORTANT]
-> This feature is only available to Azure Synapse workspaces associated with [Azure Synapse Analytics Managed Virtual Network](synapse-workspace-managed-vnet.md). However, you can still open your Synapse workspaces to the public network regardless of its association with managed VNet.
+> This feature is only available to Azure Synapse workspaces associated with [Azure Synapse Analytics Managed Virtual Network](synapse-workspace-managed-vnet.md). However, you can still open your Synapse workspaces to the public network regardless of its association with managed VNet.
+>
+> When public network access is disabled, access to Git mode in Synapse Studio and committing changes aren't blocked, as long as the user has sufficient permissions to access the integrated Git repository or the corresponding Git branch. However, the **Publish** button won't work, because access to Live mode is blocked by the firewall settings.
Selecting the **Disable** option will not apply any firewall rules that you may configure. Additionally, your firewall rules will appear greyed out in the Network setting in Synapse portal. Your firewall configurations will be reapplied when you enable public network access again.
synapse-analytics Sql Data Warehouse Workload Management Portal Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-workload-management-portal-monitor.md
# Azure Synapse Analytics ΓÇô Workload Management Portal Monitoring This article explains how to monitor [workload group](sql-data-warehouse-workload-isolation.md#workload-groups) resource utilization and query activity.
-For details on how to configure the Azure Metrics Explorer see the [Getting started with Azure Metrics Explorer](../../azure-monitor/essentials/metrics-getting-started.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) article. See the [Resource utilization](sql-data-warehouse-concept-resource-utilization-query-activity.md#resource-utilization) section in Azure Synapse Analytics Monitoring documentation for details on how to monitor system resource consumption.
+For details on how to configure the Azure Metrics Explorer, see the [Analyze metrics with Azure Monitor metrics explorer](../../azure-monitor/essentials/analyze-metrics.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) article. See the [Resource utilization](sql-data-warehouse-concept-resource-utilization-query-activity.md#resource-utilization) section in the Azure Synapse Analytics monitoring documentation for details on how to monitor system resource consumption.
There are two categories of workload group metrics provided for monitoring workload management: resource allocation and query activity. These metrics can be split and filtered by workload group, and by whether the workload group is system defined (resource class workload groups) or user-defined (created with [CREATE WORKLOAD GROUP](/sql/t-sql/statements/create-workload-group-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) syntax). ## Workload management metric definitions
time-series-insights How To Monitor Tsi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-monitor-tsi.md
You can collect logs from the following categories for Azure Time Series Insight
## Analyzing metrics
-You can analyze metrics for Azure Time Series Insights, along with metrics from other Azure services, by opening Metrics from the Azure Monitor menu. See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool.
+You can analyze metrics for Azure Time Series Insights, along with metrics from other Azure services, by opening Metrics from the Azure Monitor menu. See [Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md) for details on using this tool.
For a list of the platform metrics collected, see [Monitoring Azure Time Series Insights data reference](how-to-monitor-tsi-reference.md#metrics)
traffic-manager Traffic Manager Metrics Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-metrics-alerts.md
For more information about probes and monitoring, see [Traffic Manager endpoint
## Next steps - Learn more about [Azure Monitor service](../azure-monitor/essentials/metrics-supported.md)-- Learn how to [create a chart using Azure Monitor](../azure-monitor/essentials/metrics-getting-started.md#create-your-first-metric-chart)
+- Learn how to [create a chart in Azure Monitor](../azure-monitor/essentials/analyze-metrics.md#create-a-metric-chart)
virtual-machines Premium Storage Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/premium-storage-performance.md
Latency is the time it takes an application to receive a single request, send it
Optimizing your application for higher IOPS and throughput can affect its latency. After tuning application performance, always evaluate the latency of the application to avoid unexpected high-latency behavior.
-The following control plane operations on Managed Disks may involve movement of the Disk from one Storage location to another. This is orchestrated via background copy of data that can take several hours to complete, typically less than 24 hours depending on the amount of data in the disks. During that time your application can experience higher than usual read latency as some reads can get redirected to the original location and can take longer to complete. There is no impact on write latency during this period.
+The following control plane operations on managed disks may involve moving the disk from one storage location to another. This movement is orchestrated via a background copy of data that can take several hours to complete, typically less than 24 hours depending on the amount of data in the disks. During that time, your application can experience higher than usual read latency, because some reads can get redirected to the original location and take longer to complete. There's no impact on write latency during this period. For Premium SSD v2 and Ultra disks, a disk with a 4K sector size experiences higher read latency, while a disk with a 512E sector size experiences both higher read and write latency.
- Update the storage type. - Detach and attach a disk from one VM to another.
virtual-network Monitor Public Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/monitor-public-ip.md
The metrics and logs you can collect are discussed in the following sections.
## Analyzing metrics
-You can analyze metrics for *Public IP Addresses* with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](../../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool.
+You can analyze metrics for *Public IP Addresses* with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Analyze metrics with Azure Monitor metrics explorer](../../azure-monitor/essentials/analyze-metrics.md) for details on using this tool.
For a list of the platform metrics collected for Public IP Address, see [Monitoring *Public IP Addresses* data reference](monitor-public-ip-reference.md#metrics) .
virtual-network Troubleshoot Outbound Smtp Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/troubleshoot-outbound-smtp-connectivity.md
Previously updated : 04/28/2021 Last updated : 10/30/2023
Outbound email messages that are sent directly to external domains (such as outlook.com and gmail.com) from a virtual machine (VM) are made available only to certain subscription types in Microsoft Azure. > [!IMPORTANT]
-> For all examples below, the process applies mainly to Virtual Machines & VM Scale Sets resources (`Microsoft.Compute/virtualMachines` & `Microsoft.Compute/virtualMachineScaleSets`). It is possible to use port 25 for outbound communication on [Azure App Service](https://azure.microsoft.com/services/app-service) and [Azure Functions](https://azure.microsoft.com/services/functions) through the [virtual network integration feature](../app-service/environment/networking.md#network-routing) or when using [App Service Environment v3](../app-service/environment/networking.md#network-routing). However, the subscription limitations described below still apply. Sending email on Port 25 is unsupported for all other Azure Platform-as-a-Service (PaaS) resources.
+> For the following examples, the process applies mainly to Virtual Machines and Virtual Machine Scale Sets resources (`Microsoft.Compute/virtualMachines` and `Microsoft.Compute/virtualMachineScaleSets`). It's possible to use port 25 for outbound communication on [Azure App Service](https://azure.microsoft.com/services/app-service) and [Azure Functions](https://azure.microsoft.com/services/functions) through the [virtual network integration feature](/azure/app-service/overview-vnet-integration#application-routing) or when using [App Service Environment v3](../app-service/environment/networking.md#network-routing). However, the subscription limitations described in the following sections still apply. Sending email on port 25 is unsupported for all other Azure Platform-as-a-Service (PaaS) resources.
## Recommended method of sending email+ We recommend you use authenticated SMTP relay services to send email from Azure VMs or from Azure App Service. (These relay services typically connect through TCP port 587, but they support other ports.) These services are used to maintain IP and domain reputation to minimize the possibility that external domains reject your messages or put them to the SPAM folder. [SendGrid](https://sendgrid.com/partners/azure/) is one such SMTP relay service, but there are others. You might also have an authenticated SMTP relay service on your on-premises servers. Using these email delivery services isn't restricted in Azure, regardless of the subscription type. ## Enterprise Agreement
-For VMs that are deployed in standard Enterprise Agreement subscriptions, the outbound SMTP connections on TCP port 25 will not be blocked. However, there is no guarantee that external domains will accept the incoming emails from the VMs. If your emails are rejected or filtered by the external domains, you should contact the email service providers of the external domains to resolve the problems. These problems are not covered by Azure support.
+
+For VMs that are deployed in standard Enterprise Agreement subscriptions, the outbound SMTP connections on TCP port 25 aren't blocked. However, there's no guarantee that external domains accept the incoming emails from the VMs. For emails rejected or filtered by the external domains, contact the email service providers of the external domains to resolve the problems. These problems aren't covered by Azure support.
For Enterprise Dev/Test subscriptions, port 25 is blocked by default.
-It is possible to have this block removed. To request to have the block removed, go to the **Cannot send email (SMTP-Port 25)** section of the **Diagnose and Solve** blade in the Azure Virtual Network resource in the Azure portal and run the diagnostic. This will exempt the qualified enterprise dev/test subscriptions automatically.
+
+It's possible to have this block removed. To request removal, go to the **Cannot send email (SMTP-Port 25)** section of **Diagnose and Solve** in the Azure Virtual Network resource in the Azure portal and run the diagnostic. This process exempts qualified Enterprise Dev/Test subscriptions automatically.
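Once the exemption is in place (and the VMs have been stopped and restarted), you can check from inside a Windows VM whether an outbound connection on TCP port 25 succeeds. A minimal sketch; the SMTP host is a placeholder.

```PowerShell
# From inside the VM: test outbound reachability of an SMTP endpoint on TCP port 25.
Test-NetConnection -ComputerName "smtp.example.com" -Port 25
```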
After the subscription is exempted from this block and the VMs are stopped and restarted, all VMs in that subscription are exempted going forward. The exemption applies only to the subscription requested and only to VM traffic that is routed directly to the internet. ## All Other Subscription Types
-The Azure platform will block outbound SMTP connections on TCP port 25 for deployed VMs. This is to ensure better security for Microsoft partners and customers, protect MicrosoftΓÇÖs Azure platform, and conform to industry standards.
+The Azure platform blocks outbound SMTP connections on TCP port 25 for deployed VMs. This block is to ensure better security for Microsoft partners and customers, protect MicrosoftΓÇÖs Azure platform, and conform to industry standards.
-If you're using a non-enterprise subscription type, we encourage you to use an authenticated SMTP relay service, as outlined earlier in this article.
+If you're using a nonenterprise subscription type, we encourage you to use an authenticated SMTP relay service, as outlined earlier in this article.
## Changing subscription type
-If you change your subscription type from Enterprise Agreement to another type of subscription, changes to your deployments may result in outbound SMTP being blocked.
+If you change your subscription type from Enterprise Agreement to another type of subscription, changes to your deployments might result in outbound SMTP being blocked.
## Need help? Contact support
-If you are using an Enterprise Agreement subscription and still need help, [contact support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) to get your problem resolved quickly. Use this issue type: **Technical** > **Virtual Network** > **Cannot send email (SMTP/Port 25)**.
+If you're using an Enterprise Agreement subscription and still need help, [contact support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) to get your problem resolved quickly. Use this issue type: **Technical** > **Virtual Network** > **Cannot send email (SMTP/Port 25)**.
virtual-wan Monitor Virtual Wan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/monitor-virtual-wan.md
Virtual WAN uses Network Insights to provide users and operators with the abilit
* See [Monitoring Virtual WAN data reference](monitor-virtual-wan-reference.md) for a data reference of the metrics, logs, and other important values created by Virtual WAN. * See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
-* See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for additional details on **Azure Monitor Metrics**.
+* See [Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md) for additional details on **Azure Monitor Metrics**.
* See [All resource metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md) for a list of all supported metrics. * See [Create diagnostic settings in Azure Monitor](../azure-monitor/essentials/diagnostic-settings.md) for more information and troubleshooting on creating diagnostic settings via Azure portal, CLI, PowerShell, etc., you can visit
virtual-wan Monitoring Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/monitoring-best-practices.md
This section of the article focuses on metric-based alerts. Azure Firewall offer
* See [Monitoring Virtual WAN data reference](monitor-virtual-wan-reference.md) for a data reference of the metrics, logs, and other important values created by Virtual WAN. * See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
-* See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for more details about **Azure Monitor Metrics**.
+* See [Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md) for more details about **Azure Monitor Metrics**.
* See [All resource metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md) for a list of all supported metrics. * See [Create diagnostic settings in Azure Monitor](../azure-monitor/essentials/diagnostic-settings.md) for more information and troubleshooting when creating diagnostic settings via the Azure portal, CLI, PowerShell, etc.
virtual-wan Scenario Bgp Peering Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/scenario-bgp-peering-hub.md
description: Learn about BGP peering with an Azure Virtual WAN virtual hub.
Previously updated : 09/06/2022 Last updated : 10/30/2023
The virtual hub router now also exposes the ability to peer with it, thereby exc
* You can't peer a virtual hub router with Azure Route Server provisioned in a virtual network. * The virtual hub router only supports 16-bit (2 bytes) ASN.
-* The virtual network connection that has the NVA BGP connection endpoint must always be associated and propagating to defaultRouteTable. Custom route tables are not supported at this time.
+* The virtual network connection that has the NVA BGP connection endpoint must always be associated and propagating to defaultRouteTable. Custom route tables aren't supported at this time.
* The virtual hub router supports transit connectivity between virtual networks connected to virtual hubs. This is independent of the BGP peering capability, because Virtual WAN already supports transit connectivity. Examples: * VNET1: NVA1 connected to Virtual Hub 1 -> (transit connectivity) -> VNET2: NVA2 connected to Virtual Hub 1. * VNET1: NVA1 connected to Virtual Hub 1 -> (transit connectivity) -> VNET2: NVA2 connected to Virtual Hub 2.
The virtual hub router now also exposes the ability to peer with it, thereby exc
| Resource | Limit | ||| | Number of routes each BGP peer can advertise to the virtual hub.| The hub can only accept a maximum number of 10,000 routes (total) from its connected resources. For example, if a virtual hub has a total of 6000 routes from the connected virtual networks, branches, virtual hubs etc., then when a new BGP peering is configured with an NVA, the NVA can only advertise up to 4000 routes. |
-* Routes from NVA in a virtual network that are more specific than the virtual network address space, when advertised to the virtual hub through BGP are not propagated further to on-premises.
+* Routes from NVA in a virtual network that are more specific than the virtual network address space, when advertised to the virtual hub through BGP aren't propagated further to on-premises.
* Currently we only support 4,000 routes from the NVA to the virtual hub.
-* Traffic destined for addresses in the virtual network directly connected to the virtual hub cannot be configured to route through the NVA using BGP peering between the hub and NVA. This is because the virtual hub automatically learns about system routes associated with addresses in the spoke virtual network when the spoke virtual network connection is created. These automatically learned system routes are preferred over routes learned by the hub through BGP.
-* BGP peering between an NVA in a spoke VNet and a secured virtual hub (hub with an integrated security solution) is supported if Routing Intent **is** configured on the hub. BGP peering feature is not supported for secured virtual hubs where routing intent is **not** configured.
+* Traffic destined for addresses in the virtual network directly connected to the virtual hub can't be configured to route through the NVA using BGP peering between the hub and NVA. This is because the virtual hub automatically learns about system routes associated with addresses in the spoke virtual network when the spoke virtual network connection is created. These automatically learned system routes are preferred over routes learned by the hub through BGP.
+* BGP peering between an NVA in a spoke VNet and a secured virtual hub (hub with an integrated security solution) is supported if Routing Intent **is** configured on the hub. BGP peering feature isn't supported for secured virtual hubs where routing intent is **not** configured.
* In order for the NVA to exchange routes with VPN and ER connected sites, branch to branch routing must be turned on.
-* When configuring BGP peering with the hub, you will see two IP addresses. Peering with both these addresses is required. Not peering with both addresses can cause routing issues. The same routes must be advertised to both of these addresses. Advertising different routes will cause routing issues.
+* When configuring BGP peering with the hub, you'll see two IP addresses. Peering with both these addresses is required. Not peering with both addresses can cause routing issues. The same routes must be advertised to both of these addresses. Advertising different routes will cause routing issues.
* The next hop IP address on the routes advertised from the NVA to the virtual hub route server must be the same as the IP address of the NVA, that is, the IP address configured on the BGP peer. Advertising a different IP address as the next hop isn't supported on Virtual WAN at the moment. ## BGP peering scenarios
In this scenario, the virtual hub named "Hub 1" is connected to several virtual
### Configuration steps without BGP peering
-The following steps are required when BGP peering is not used on the virtual hub:
+The following steps are required when BGP peering isn't used on the virtual hub:
Virtual hub configuration
Virtual network configuration
#### Effective routes
-The table below shows few entries from Hub 1's effective routes in the defaultRouteTable. Notice that the route for VNET5 (subnet 10.2.1.0/24) and this confirms VNET1 and VNET5 will be able to communicate with each other.
+The following table shows a few entries from Hub 1's effective routes in the defaultRouteTable. Notice the route for VNET5 (subnet 10.2.1.0/24), which confirms that VNET1 and VNET5 can communicate with each other.
| Destination prefix | Next hop| Origin | ASN path| | | | | |
In this scenario, the on-premises site named "NVA Branch 1" has a VPN configured
### Configuration steps without BGP peering
-The following steps are required when BGP peering is not used on the virtual hub:
+The following steps are required when BGP peering isn't used on the virtual hub:
Virtual hub configuration
virtual-wan Virtual Wan Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-faq.md
description: See answers to frequently asked questions about Azure Virtual WAN n
Previously updated : 09/29/2023 Last updated : 10/30/2023 # Customer intent: As someone with a networking background, I want to read more details about Virtual WAN in a FAQ format.
Yes, BGP communities generated by on-premises will be preserved in Virtual WAN.
* Private ASNs: 65515, 65517, 65518, 65519, 65520 * ASNs reserved by IANA: 23456, 64496-64511, 65535-65551
+### Is there a way to change the ASN for a VPN gateway?
+
+No. Virtual WAN does not support ASN changes for VPN gateways.
+ ### In Virtual WAN, what are the estimated performances by ExpressRoute gateway SKU? [!INCLUDE [ExpressRoute Performance](../../includes/virtual-wan-expressroute-performance.md)]
vpn-gateway Monitor Vpn Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/monitor-vpn-gateway.md
The metrics and logs you can collect are discussed in the following sections.
## Analyzing metrics
-You can analyze metrics for VPN Gateway with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool.
+You can analyze metrics for VPN Gateway with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md) for details on using this tool.
For a list of the platform metrics collected for VPN Gateway, see [Monitoring VPN Gateway data reference metrics](monitor-vpn-gateway-reference.md#metrics).