Updates from: 03/04/2024 02:07:40
Service Microsoft Docs article Related commit history on GitHub Change details
aks Tutorial Kubernetes Deploy Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-deploy-cluster.md
AZD Environments in a codespace automatically download all dependencies found in
+## Create an AKS cluster
+
+AKS clusters can use [Kubernetes role-based access control (Kubernetes RBAC)][k8s-rbac], which allows you to define access to resources based on roles assigned to users. Permissions are combined when users are assigned multiple roles. Permissions can be scoped to either a single namespace or across the whole cluster. For more information, see [Control access to cluster resources using Kubernetes RBAC and Microsoft Entra ID in AKS][aks-k8s-rbac].
+
+For information about AKS resource limits and region availability, see [Quotas, virtual machine size restrictions, and region availability in AKS][quotas-skus-regions].
+
+> [!NOTE]
+> To ensure your cluster operates reliably, you should run at least two nodes.
+
+### [Azure CLI](#tab/azure-cli)
+
+To allow an AKS cluster to interact with other Azure resources, the Azure platform automatically creates a cluster identity. In this example, the cluster identity is [granted the right to pull images][container-registry-integration] from the ACR instance you created in the previous tutorial. To execute the command successfully, you need to have an **Owner** or **Azure account administrator** role in your Azure subscription.
+
+* Create an AKS cluster using the [`az aks create`][az aks create] command. The following example creates a cluster named *myAKSCluster* in the resource group named *myResourceGroup*. This resource group was created in the [previous tutorial][aks-tutorial-prepare-acr] in the *eastus* region.
+
+ ```azurecli-interactive
+ az aks create \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --node-count 2 \
+ --generate-ssh-keys \
+ --attach-acr <acrName>
+ ```
+
+ > [!NOTE]
+ > If you already generated SSH keys, you may encounter an error similar to `linuxProfile.ssh.publicKeys.keyData is invalid`. To proceed, retry the command without the `--generate-ssh-keys` parameter.
+
+To avoid needing an **Owner** or **Azure account administrator** role, you can also manually configure a service principal to pull images from ACR. For more information, see [ACR authentication with service principals](../container-registry/container-registry-auth-service-principal.md) or [Authenticate from Kubernetes with a pull secret](../container-registry/container-registry-auth-kubernetes.md). Alternatively, you can use a [managed identity](use-managed-identity.md) instead of a service principal for easier management.
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+To allow an AKS cluster to interact with other Azure resources, the Azure platform automatically creates a cluster identity. In this example, the cluster identity is [granted the right to pull images][container-registry-integration] from the ACR instance you created in the previous tutorial. To execute the command successfully, you need to have an **Owner** or **Azure account administrator** role in your Azure subscription.
+
+* Create an AKS cluster using the [`New-AzAksCluster`][new-azakscluster] cmdlet. The following example creates a cluster named *myAKSCluster* in the resource group named *myResourceGroup*. This resource group was created in the [previous tutorial][aks-tutorial-prepare-acr] in the *eastus* region.
+
+ ```azurepowershell-interactive
+ New-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -NodeCount 2 -GenerateSshKey -AcrNameToAttach <acrName>
+ ```
+
+ > [!NOTE]
+ > If you already generated SSH keys, you may encounter an error similar to `linuxProfile.ssh.publicKeys.keyData is invalid`. To proceed, retry the command without the `-GenerateSshKey` parameter.
+
+To avoid needing an **Owner** or **Azure account administrator** role, you can also manually configure a service principal to pull images from ACR. For more information, see [ACR authentication with service principals](../container-registry/container-registry-auth-service-principal.md) or [Authenticate from Kubernetes with a pull secret](../container-registry/container-registry-auth-kubernetes.md). Alternatively, you can use a [managed identity](use-managed-identity.md) instead of a service principal for easier management.
+
+### [Azure Developer CLI](#tab/azure-azd)
+
+AZD packages the deployment of clusters with the application itself using `azd up`. This command is covered in the next tutorial.
+++
## Connect to cluster using kubectl

### [Azure CLI](#tab/azure-cli)
Signing in to your Azure account through AZD configures your credentials.
-## Create an AKS cluster
-
-AKS clusters can use [Kubernetes role-based access control (Kubernetes RBAC)][k8s-rbac], which allows you to define access to resources based on roles assigned to users. Permissions are combined when users are assigned multiple roles. Permissions can be scoped to either a single namespace or across the whole cluster. For more information, see [Control access to cluster resources using Kubernetes RBAC and Microsoft Entra ID in AKS][aks-k8s-rbac].
-
-For information about AKS resource limits and region availability, see [Quotas, virtual machine size restrictions, and region availability in AKS][quotas-skus-regions].
-
-> [!NOTE]
-> To ensure your cluster operates reliably, you should run at least two nodes.
-
-### [Azure CLI](#tab/azure-cli)
-
-To allow an AKS cluster to interact with other Azure resources, the Azure platform automatically creates a cluster identity. In this example, the cluster identity is [granted the right to pull images][container-registry-integration] from the ACR instance you created in the previous tutorial. To execute the command successfully, you need to have an **Owner** or **Azure account administrator** role in your Azure subscription.
-
-* Create an AKS cluster using the [`az aks create`][az aks create] command. The following example creates a cluster named *myAKSCluster* in the resource group named *myResourceGroup*. This resource group was created in the [previous tutorial][aks-tutorial-prepare-acr] in the *eastus* region.
-
- ```azurecli-interactive
- az aks create \
- --resource-group myResourceGroup \
- --name myAKSCluster \
- --node-count 2 \
- --generate-ssh-keys \
- --attach-acr <acrName>
- ```
-
- > [!NOTE]
- > If you already generated SSH keys, you may encounter an error similar to `linuxProfile.ssh.publicKeys.keyData is invalid`. To proceed, retry the command without the `--generate-ssh-keys` parameter.
-
-To avoid needing an **Owner** or **Azure account administrator** role, you can also manually configure a service principal to pull images from ACR. For more information, see [ACR authentication with service principals](../container-registry/container-registry-auth-service-principal.md) or [Authenticate from Kubernetes with a pull secret](../container-registry/container-registry-auth-kubernetes.md). Alternatively, you can use a [managed identity](use-managed-identity.md) instead of a service principal for easier management.
-
-### [Azure PowerShell](#tab/azure-powershell)
-
-To allow an AKS cluster to interact with other Azure resources, the Azure platform automatically creates a cluster identity. In this example, the cluster identity is [granted the right to pull images][container-registry-integration] from the ACR instance you created in the previous tutorial. To execute the command successfully, you need to have an **Owner** or **Azure account administrator** role in your Azure subscription.
-
-* Create an AKS cluster using the [`New-AzAksCluster`][new-azakscluster] cmdlet. The following example creates a cluster named *myAKSCluster* in the resource group named *myResourceGroup*. This resource group was created in the [previous tutorial][aks-tutorial-prepare-acr] in the *eastus* region.
-
- ```azurepowershell-interactive
- New-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -NodeCount 2 -GenerateSshKey -AcrNameToAttach <acrName>
- ```
-
- > [!NOTE]
- > If you already generated SSH keys, you may encounter an error similar to `linuxProfile.ssh.publicKeys.keyData is invalid`. To proceed, retry the command without the `-GenerateSshKey` parameter.
-
-To avoid needing an **Owner** or **Azure account administrator** role, you can also manually configure a service principal to pull images from ACR. For more information, see [ACR authentication with service principals](../container-registry/container-registry-auth-service-principal.md) or [Authenticate from Kubernetes with a pull secret](../container-registry/container-registry-auth-kubernetes.md). Alternatively, you can use a [managed identity](use-managed-identity.md) instead of a service principal for easier management.
-
-### [Azure Developer CLI](#tab/azure-azd)
-
-AZD packages the deployment of clusters with the application itself using `azd up`. This command is covered in the next tutorial.
---
## Next steps

In this tutorial, you deployed a Kubernetes cluster in AKS and configured `kubectl` to connect to the cluster. You learned how to:
automation Move Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/how-to/move-account.md
When the move is complete, verify that the capabilities listed below are enabled
## Next steps
+
+To learn how to move Automation to a new region, see [Move Automation account to another region](../../operational-excellence/relocation-automation.md).
+
To learn about moving resources in Azure, see [Move resources in Azure](../../azure-resource-manager/management/move-support-resources.md).
azure-monitor Monitor Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/monitor-functions.md
For a list of supported autoinstrumentation scenarios, see [Supported environmen
> [!Note]
> This feature used to have an 8- to 9-second cold startup implication, which has been reduced to less than 1 second. If you were an early adopter of this feature (for example, prior to February 2023), review the "Troubleshooting" section to update to the current version and benefit from the new faster startup.
-To view more data from your Java-based Azure Functions applications than is [collected by default](../../azure-functions/functions-monitoring.md?tabs=cmd), enable the [Application Insights Java 3.x agent](./java-in-process-agent.md). This agent allows Application Insights to automatically collect and correlate dependencies, logs, and metrics from popular libraries and Azure SDKs. This telemetry is in addition to the request telemetry already captured by Functions.
+To view more data from your Java-based Azure Functions applications than is [collected by default](../../azure-functions/functions-monitoring.md?tabs=cmd), enable the [Application Insights Java 3.x agent](./java-in-process-agent.md). This agent allows Application Insights to automatically collect and correlate dependencies, logs, and metrics from popular libraries and Azure Software Development Kits (SDKs). This telemetry is in addition to the request telemetry already captured by Functions.
By using the application map and having a more complete view of end-to-end transactions, you can better diagnose issues. You have a topological view of how systems interact along with data on average performance and error rates. You also have more data for end-to-end diagnostics. You can use the application map to easily find the root cause of reliability issues and performance bottlenecks on a per-request basis.
Your Java functions might have slow startup times if you adopted this feature be
#### Duplicate logs
-If you're using log4j or logback for console logging, distributed tracing for Java Functions creates duplicate logs. These duplicate logs are then sent to Application Insights. To avoid this behavior, use the following workarounds.
+If you're using `log4j` or `logback` for console logging, distributed tracing for Java Functions creates duplicate logs. These duplicate logs are then sent to Application Insights. To avoid this behavior, use the following workarounds.
##### Log4j
Example:
</configuration> ```
+## Distributed tracing for Node.js function apps
+
+To view more data from your Node Azure Functions applications than is [collected by default](../../azure-functions/functions-monitoring.md#collecting-telemetry-data), instrument your Function using the [Azure Monitor OpenTelemetry Distro](./opentelemetry-enable.md?tabs=nodejs).
+
## Distributed tracing for Python function apps

To collect custom telemetry from services such as Redis, Memcached, and MongoDB, use the [OpenCensus Python extension](https://github.com/census-ecosystem/opencensus-python-extensions-azure) and [log your telemetry](../../azure-functions/functions-reference-python.md?tabs=azurecli-linux%2capplication-level#log-custom-telemetry). You can find the list of supported services in this [GitHub folder](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib).
azure-monitor Vminsights Dependency Agent Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-dependency-agent-maintenance.md
To uninstall Dependency Agent:
Since the Dependency agent works at the kernel level, support is also dependent on the kernel version. As of Dependency agent version 9.10.*, the agent supports * kernels. The following table lists the major and minor Linux OS releases and supported kernel versions for the Dependency agent.
->[!NOTE]
-> With Dependency agent 9.10.15 and above, installation is not blocked for unsupported kernel versions, but the agent will run in degraded mode. In this mode, connection and port data stored in VMConnection and VMBoundport tables is not collected. The VMProcess table may have some data, but it will be minimal.
-
-| Distribution | OS version | Kernel version |
-|:|:|:|
-| Red Hat Linux 8 | 8.6 | 4.18.0-372.\*el8.x86_64, 4.18.0-372.*el8_6.x86_64 |
-| | 8.5 | 4.18.0-348.\*el8_5.x86_64<br>4.18.0-348.\*el8.x86_64 |
-| | 8.4 | 4.18.0-305.\*el8.x86_64, 4.18.0-305.\*el8_4.x86_64 |
-| | 8.3 | 4.18.0-240.\*el8_3.x86_64 |
-| | 8.2 | 4.18.0-193.\*el8_2.x86_64 |
-| | 8.1 | 4.18.0-147.\*el8_1.x86_64 |
-| | 8.0 | 4.18.0-80.\*el8.x86_64<br>4.18.0-80.\*el8_0.x86_64 |
-| Red Hat Linux 7 | 7.9 | 3.10.0-1160 |
-| | 7.8 | 3.10.0-1136 |
-| | 7.7 | 3.10.0-1062 |
-| | 7.6 | 3.10.0-957 |
-| | 7.5 | 3.10.0-862 |
-| | 7.4 | 3.10.0-693 |
-| Red Hat Linux 6 | 6.10 | 2.6.32-754 |
-| | 6.9 | 2.6.32-696 |
-| CentOS Linux 8 | 8.6 | 4.18.0-372.\*el8.x86_64, 4.18.0-372.*el8_6.x86_64 |
-| | 8.5 | 4.18.0-348.\*el8_5.x86_64<br>4.18.0-348.\*el8.x86_64 |
-| | 8.4 | 4.18.0-305.\*el8.x86_64, 4.18.0-305.\*el8_4.x86_64 |
-| | 8.3 | 4.18.0-240.\*el8_3.x86_64 |
-| | 8.2 | 4.18.0-193.\*el8_2.x86_64 |
-| | 8.1 | 4.18.0-147.\*el8_1.x86_64 |
-| | 8.0 | 4.18.0-80.\*el8.x86_64<br>4.18.0-80.\*el8_0.x86_64 |
-| CentOS Linux 7 | 7.9 | 3.10.0-1160 |
-| | 7.8 | 3.10.0-1136 |
-| | 7.7 | 3.10.0-1062 |
-| CentOS Linux 6 | 6.10 | 2.6.32-754.3.5<br>2.6.32-696.30.1 |
-| | 6.9 | 2.6.32-696.30.1<br>2.6.32-696.18.7 |
-| Ubuntu Server | 20.04 | 5.8<br>5.4\* |
-| | 18.04 | 5.3.0-1020<br>5.0 (includes Azure-tuned kernel)<br>4.18*<br>4.15* |
-| | 16.04.3 | 4.15.\* |
-| | 16.04 | 4.13.\*<br>4.11.\*<br>4.10.\*<br>4.8.\*<br>4.4.\* |
-| | 14.04 | 3.13.\*-generic<br>4.4.\*-generic|
-| SUSE Linux 12 Enterprise Server | 12 SP5 | 4.12.14-122.\*-default, 4.12.14-16.\*-azure|
-| | 12 SP4 | 4.12.\* (includes Azure-tuned kernel) |
-| | 12 SP3 | 4.4.\* |
-| | 12 SP2 | 4.4.\* |
-| SUSE Linux 15 Enterprise Server | 15 SP1 | 4.12.14-197.\*-default, 4.12.14-8.\*-azure |
-| | 15 | 4.12.14-150.\*-default |
-| Debian | 9 | 4.9 |
-
->[!NOTE]
-> Dependency agent is not supported for Azure Virtual Machines with Ampere Altra ARM-based processors.
## Next steps
azure-portal Azure Portal Add Remove Sort Favorites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-add-remove-sort-favorites.md
In this example, we'll add **Cost Management + Billing** to the **Favorites** li
:::image type="content" source="media/azure-portal-add-remove-sort-favorites/azure-portal-favorites-new-all-services.png" alt-text="Screenshot showing All services in the Azure portal menu.":::
-1. Enter the word "cost" in the **Filter services** field near the top of the **All services** page. Services that have "cost" in the title or that have "cost" as a keyword are shown.
+1. Enter the word "cost" in the **Filter services** field near the top of the **All services** pane. Services that have "cost" in the title or that have "cost" as a keyword are shown.
:::image type="content" source="media/azure-portal-add-remove-sort-favorites/azure-portal-favorites-find-service.png" alt-text="Screenshot showing a search in All services in the Azure portal.":::
azure-portal Azure Portal Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-overview.md
As noted earlier, you can [set your startup page to Dashboard](set-preferences.m
## Getting around the portal
-The portal menu and page header are global elements that are always present in the Azure portal. These persistent features are the "shell" for the user interface associated with each individual service or feature. The header provides access to global controls. The configuration page (sometimes referred to as a "blade") for a resource or service may also have a resource menu specific to that area.
+The portal menu and page header are global elements that are always present in the Azure portal. These persistent features are the "shell" for the user interface associated with each individual service or feature. The header provides access to global controls. The working pane for a resource or service may also have a resource menu specific to that area.
The figure below labels the basic elements of the Azure portal, each of which is described in the following table. In this example, the current focus is a virtual machine, but the same elements apply no matter what type of resource or service you're working with.
azure-portal Set Preferences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/set-preferences.md
You can change the default settings of the Azure portal to meet your own preferences.
-To view and manage your settings, select the **Settings** menu icon in the top right section of the global page header to open the **Portal settings** page.
+To view and manage your settings, select the **Settings** menu icon in the top right section of the global page header to open **Portal settings**.
:::image type="content" source="media/set-preferences/settings-top-header.png" alt-text="Screenshot showing the settings icon in the global page header.":::
Within **Portal settings**, you'll see different sections. This article describe
## Directories + subscriptions
-The **Directories + subscriptions** page lets you manage directories and set subscription filters.
+**Directories + subscriptions** lets you manage directories and set subscription filters.
### Switch and manage directories

In the **Directories** section, you'll see your **Current directory** (the directory that you're currently signed in to).
-The **Startup directory** shows the default directory when you sign in to the Azure portal (or **Last visited** if you've chosen that option). To choose a different startup directory, select **change** to open the [Appearance + startup views](#appearance--startup-views) page, where you can change your selection.
+The **Startup directory** shows the default directory when you sign in to the Azure portal (or **Last visited** if you've chosen that option). To choose a different startup directory, select **change** to open [Appearance + startup views](#appearance--startup-views), where you can change your selection.
To see a full list of directories to which you have access, select **All Directories**.
To switch to a different directory, find the directory that you want to work in,
You can choose the subscriptions that are filtered by default when you sign in to the Azure portal. This can be helpful if you have a primary list of subscriptions you work with but use others occasionally.

> [!IMPORTANT]
-> After you apply a subscription filter in the Azure portal settings page, you will only see subscriptions that match the filter across all portal experiences. You won't be able to work with other subscriptions that are excluded from the selected filter. Any new subscriptions that are created after the filter was applied may not be shown if the filter criteria don't match. To see them, you must update the filter criteria to include other subscriptions in the portal, or select **Advanced filters** and use the **Default** filter to always show all subscriptions.
+> After you apply a subscription filter, you'll only see subscriptions that match that filter, across all portal experiences. You won't be able to work with other subscriptions that are excluded from the selected filter. Any new subscriptions that are created after the filter was applied may not be shown if the filter criteria don't match. To see them, you must update the filter criteria to include other subscriptions in the portal, or select **Advanced filters** and use the **Default** filter to always show all subscriptions.
> > Certain features, such as **Management groups** or **Security Center**, may show subscriptions that don't match your filter criteria. However, you won't be able to perform operations on those subscriptions (such as moving a subscription between management groups) unless you adjust your filters to include the subscriptions that you want to work with.
To use customized filters, select **Advanced filters**. You'll be prompted to co
:::image type="content" source="media/set-preferences/settings-advanced-filters-enable.png" alt-text="Screenshot showing the confirmation dialog box for Advanced filters.":::
-After you continue, the **Advanced filters** page appears in the left navigation menu of **Portal settings**. You can create and manage multiple subscription filters on this page. Your currently selected subscriptions are saved as an imported filter that you can use again. You'll see this filter selected on the **Directories + subscriptions** page.
+After you continue, **Advanced filters** appears in the left navigation menu of **Portal settings**. You can create and manage multiple subscription filters here. Your currently selected subscriptions are saved as an imported filter that you can use again. You'll see this filter selected in **Directories + subscriptions**.
If you want to stop using advanced filters, select the toggle again to restore the default subscription view. Any custom filters you've created are saved and will be available to use if you enable **Advanced filters** in the future.
If you want to stop using advanced filters, select the toggle again to restore t
## Advanced filters
-After enabling the **Advanced filters** page, you can create, modify, or delete subscription filters.
+After enabling **Advanced filters**, you can create, modify, or delete subscription filters.
:::image type="content" source="media/set-preferences/settings-advanced-filters.png" lightbox="media/set-preferences/settings-advanced-filters.png" alt-text="Screenshot showing the Advanced filters screen.":::
Once you have made the desired changes to your language and regional format sett
## My information
-The **My information** page lets you update the email address that is used for updates on Azure services, billing, support, or security issues. You can also opt in or out from additional emails about Microsoft Azure and other products and services.
+**My information** lets you update the email address that is used for updates on Azure services, billing, support, or security issues. You can also opt in or out from additional emails about Microsoft Azure and other products and services.
-Near the top of the **My information** page, you'll see options to export, restore, or delete settings.
+Near the top of **My information**, you'll see options to export, restore, or delete settings.
### Export user settings
It's a good idea to export and review your settings before you delete them, as d
[!INCLUDE [GDPR-related guidance](../../includes/gdpr-intro-sentence.md)]
-To delete your portal settings, select **Delete all settings and private dashboards** from the top of the **My information** page. You'll be prompted to confirm the deletion. When you do so, all settings customizations will return to the default settings, and all of your private dashboards will be lost.
+To delete your portal settings, select **Delete all settings and private dashboards** from the top of **My information**. You'll be prompted to confirm the deletion. When you do so, all settings customizations will return to the default settings, and all of your private dashboards will be lost.
## Signing out + notifications
azure-portal How To Create Azure Support Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/supportability/how-to-create-azure-support-request.md
Follow these links to learn more:
- [How to manage an Azure support request](how-to-manage-azure-support-request.md)
- [Azure support ticket REST API](/rest/api/support)
-- Get help from your peers in the [Microsoft Q&A question page](/answers/products/azure)
+- Get help from your peers in [Microsoft Q&A](/answers/products/azure)
- Learn more in [Azure Support FAQ](https://azure.microsoft.com/support/faq)
- [Azure Quotas overview](../../quotas/quotas-overview.md)
azure-portal How To Manage Azure Support Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/supportability/how-to-manage-azure-support-request.md
View the details and status of support requests by going to **Help + support** >
:::image type="content" source="media/how-to-manage-azure-support-request/all-requests-lower.png" alt-text="All support requests":::
-On this page, you can search, filter, and sort support requests. By default, you might only see recent open requests. Change the filter options to select a longer period of time or to include support requests that were closed.
+You can search, filter, and sort support requests. By default, you might only see recent open requests. Change the filter options to select a longer period of time or to include support requests that were closed.
To view details about a support request, including its severity and any messages associated with the request, select it from the list.

## Send a message
-1. On the **All support requests** page, select the support request.
+1. From **All support requests**, select the support request.
-1. On the **Support Request** page, select **New message**.
+1. In the **Support Request**, select **New message**.
1. Enter your message and select **Submit**.
To view details about a support request, including its severity
> [!NOTE]
> The maximum severity level depends on your [support plan](https://azure.microsoft.com/support/plans).
-1. On the **All support requests** page, select the support request.
+1. From **All support requests**, select the support request.
-1. On the **Support Request** page, select **Change severity**.
+1. In the **Support Request**, select **Change severity**.
1. The Azure portal shows one of two screens, depending on whether your request is already assigned to a support engineer:
When you create a support request, you can select **Yes** or **No** in the **Adv
To change your **Advanced diagnostic information** selection after the request is created:
-1. On the **All support requests** page, select the support request.
+1. From **All support requests**, select the support request.
-1. On the **Support Request** page, select **Advanced diagnostic information** near the top of the screen.
+1. In the **Support Request**, select **Advanced diagnostic information** near the top of the screen.
1. Select **Yes** or **No**, then select **Submit**.
To change your **Advanced diagnostic information** selection after the request i
You can use the file upload option to upload a diagnostic file, such as a [browser trace](../capture-browser-trace.md) or any other files that you think are relevant to a support request.
-1. On the **All support requests** page, select the support request.
+1. From **All support requests**, select the support request.
-1. On the **Support Request** page, select the **Upload file** box, then browse to find your file and select **Upload**.
+1. In the **Support Request**, select the **Upload file** box, then browse to find your file and select **Upload**.
### File upload guidelines
azure-resource-manager Request Limits And Throttling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/request-limits-and-throttling.md
Title: Request limits and throttling
-description: Describes how to use throttling with Azure Resource Manager requests when subscription limits have been reached.
+description: Describes how to use throttling with Azure Resource Manager requests when subscription limits are reached.
Previously updated : 10/05/2023 Last updated : 03/01/2024
-# Throttling Resource Manager requests
-This article describes how Azure Resource Manager throttles requests. It shows you how to track the number of requests that remain before reaching the limit, and how to respond when you've reached the limit.
+# Understand how Azure Resource Manager throttles requests
+
+This article describes how Azure Resource Manager throttles requests. It shows you how to track the number of requests that remain before reaching the limit, and how to respond when you reach the limit.
Throttling happens at two levels. Azure Resource Manager throttles requests for the subscription and tenant. If the request is under the throttling limits for the subscription and tenant, Resource Manager routes the request to the resource provider. The resource provider applies throttling limits that are tailored to its operations.
The following image shows how throttling is applied as a request goes from the u
## Subscription and tenant limits
-Every subscription-level and tenant-level operation is subject to throttling limits. Subscription requests are ones that involve passing your subscription ID, such as retrieving the resource groups in your subscription. Tenant requests don't include your subscription ID, such as retrieving valid Azure locations.
+Every subscription-level and tenant-level operation is subject to throttling limits. Subscription requests are ones that involve passing your subscription ID, such as retrieving the resource groups in your subscription. For example, sending a request to `https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups?api-version=2022-01-01` is a subscription-level operation. Tenant requests don't include your subscription ID, such as retrieving valid Azure locations. For example, sending a request to `https://management.azure.com/tenants?api-version=2022-01-01` is a tenant-level operation.
The default throttling limits per hour are shown in the following table.
These limits are scoped to the security principal (user or application) making the requests and the subscription ID or tenant ID. If your requests come from more than one security principal, your limit across the subscription or tenant is greater than 12,000 and 1,200 per hour.
-These limits apply to each Azure Resource Manager instance. There are multiple instances in every Azure region, and Azure Resource Manager is deployed to all Azure regions. So, in practice, the limits are higher than these limits. The requests from a user are usually handled by different instances of Azure Resource Manager.
+These limits apply to each Azure Resource Manager instance. There are multiple instances in every Azure region, and Azure Resource Manager is deployed to all Azure regions. So, in practice, your effective limits are higher than these values, because the requests from a user are usually handled by different instances of Azure Resource Manager.
The remaining requests are returned in the [response header values](#remaining-requests).
+## Migrating to regional throttling and token bucket algorithm
+
+Starting in 2024, Microsoft is migrating Azure subscriptions to a new throttling architecture. With this change, you'll experience new throttling limits. The new throttling limits are applied per region rather than per instance of Azure Resource Manager. The new architecture uses a [token bucket algorithm](https://en.wikipedia.org/wiki/Token_bucket) to manage API throttling.
+
+The token bucket represents the maximum number of requests that you can send for each second. When you reach the maximum number of requests, the refill rate determines how quickly tokens become available in the bucket.
+
+These updated limits make it easier for you to refresh and manage your quota.
+
+The new limits are:
+
+| Scope | Operations | Bucket size | Refill rate per sec |
+| -- | - | -- | - |
+| Subscription | reads | 250 | 25 |
+| Subscription | deletes | 200 | 10 |
+| Subscription | writes | 200 | 10 |
+| Tenant | reads | 250 | 25 |
+| Tenant | deletes | 200 | 10 |
+| Tenant | writes | 200 | 10 |
+
+The subscription limits apply per subscription, per service principal, and per operation type. There are also global subscription limits that are equivalent to 15 times the individual service principal limits for each operation type. The global limits apply across all service principals. Requests are throttled if the global, service-principal, or tenant-specific limits are exceeded.
+
+The limits may be smaller for free or trial customers.
+
+For example, suppose you have a bucket size of 250 tokens for read requests and refill rate of 25 tokens per second. If you send 250 read requests in a second, the bucket is empty and your requests are throttled. Each second, 25 tokens become available until the bucket reaches its maximum capacity of 250 tokens. You can use tokens as they become available.
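+
+To make this behavior concrete, here's a minimal bash sketch of the token bucket logic described above. It's illustrative only: the variable names and the one-second refill cadence are assumptions, not details of the Azure implementation.
+
+```bash
+capacity=250   # bucket size for read requests
+rate=25        # tokens refilled per second
+tokens=$capacity
+
+# Consume one token per request; report throttling when the bucket is empty.
+consume() {
+  if [ "$tokens" -gt 0 ]; then
+    tokens=$((tokens - 1))   # request allowed
+    return 0
+  fi
+  return 1                   # request throttled (HTTP 429)
+}
+
+# Run once per second: refill up to the bucket's maximum capacity.
+refill() {
+  tokens=$((tokens + rate))
+  [ "$tokens" -gt "$capacity" ] && tokens=$capacity
+}
+```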
+
+### How do I know if my subscription uses the new throttling experience?
+
+After your subscription is migrated to the new throttling experience, the response header shows the remaining requests per minute instead of per hour. Also, your `Retry-After` value shows one minute or less, instead of five minutes. For more information, see [Error code](#error-code).
+
+### Why is throttling changing to per region rather than per instance?
+
+Since different regions have a different number of Resource Manager instances, throttling per instance causes inconsistent throttling performance. Throttling per region makes throttling consistent and predictable.
+
+### How does the new throttling experience affect my limits?
+
+You can send more requests. Write requests increase by 30 times. Delete requests increase by 2.4 times. Read requests increase by 7.5 times.
+
+### Can I prevent my subscription from migrating to the new throttling experience?
+
+No, all subscriptions will eventually be migrated.
+
## Resource provider limits

Resource providers apply their own throttling limits. Within each subscription, the resource provider throttles per region of the resource in the request. Because Resource Manager throttles by instance of Resource Manager, and there are several instances of Resource Manager in each region, the resource provider might receive more requests than the default limits in the previous section.
For information about throttling in other resource providers, see:
## Error code
-When you reach the limit, you receive the HTTP status code **429 Too many requests**. The response includes a **Retry-After** value, which specifies the number of seconds your application should wait (or sleep) before sending the next request. If you send a request before the retry value has elapsed, your request isn't processed and a new retry value is returned.
-
-After waiting for specified time, you can also close and reopen your connection to Azure. By resetting the connection, you may connect to a different instance of Azure Resource Manager.
+When you reach the limit, you receive the HTTP status code **429 Too many requests**. The response includes a **Retry-After** value, which specifies the number of seconds your application should wait (or sleep) before sending the next request. If you send a request before the retry value elapses, your request isn't processed and a new retry value is returned.
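+
+For example, here's a minimal bash sketch that honors the **Retry-After** header. It assumes a valid bearer token in `TOKEN` and a subscription ID in `SUB_ID`; the file names are placeholders.
+
+```bash
+while true; do
+  # Capture the HTTP status code; write headers and body to temporary files.
+  status=$(curl -s -D headers.txt -o body.json -w '%{http_code}' \
+    -H "Authorization: Bearer $TOKEN" \
+    "https://management.azure.com/subscriptions/$SUB_ID/resourceGroups?api-version=2022-01-01")
+  [ "$status" != "429" ] && break
+  # Sleep for the number of seconds the service requests (default to 5 if absent).
+  retry_after=$(awk 'tolower($1) == "retry-after:" {print $2}' headers.txt | tr -d '\r')
+  sleep "${retry_after:-5}"
+done
+```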
If you're using an Azure SDK, the SDK may have an auto retry configuration. For more information, see [Retry guidance for Azure services](/azure/architecture/best-practices/retry-service-specific).
You can determine the number of remaining requests by examining response headers
| x-ms-ratelimit-remaining-subscription-writes |Subscription scoped writes remaining. This value is returned on write operations. |
| x-ms-ratelimit-remaining-tenant-reads |Tenant scoped reads remaining |
| x-ms-ratelimit-remaining-tenant-writes |Tenant scoped writes remaining |
-| x-ms-ratelimit-remaining-subscription-resource-requests |Subscription scoped resource type requests remaining.<br /><br />This header value is only returned if a service has overridden the default limit. Resource Manager adds this value instead of the subscription reads or writes. |
-| x-ms-ratelimit-remaining-subscription-resource-entities-read |Subscription scoped resource type collection requests remaining.<br /><br />This header value is only returned if a service has overridden the default limit. This value provides the number of remaining collection requests (list resources). |
-| x-ms-ratelimit-remaining-tenant-resource-requests |Tenant scoped resource type requests remaining.<br /><br />This header is only added for requests at tenant level, and only if a service has overridden the default limit. Resource Manager adds this value instead of the tenant reads or writes. |
-| x-ms-ratelimit-remaining-tenant-resource-entities-read |Tenant scoped resource type collection requests remaining.<br /><br />This header is only added for requests at tenant level, and only if a service has overridden the default limit. |
+| x-ms-ratelimit-remaining-subscription-resource-requests |Subscription scoped resource type requests remaining.<br /><br />This header value is only returned if a service overrides the default limit. Resource Manager adds this value instead of the subscription reads or writes. |
+| x-ms-ratelimit-remaining-subscription-resource-entities-read |Subscription scoped resource type collection requests remaining.<br /><br />This header value is only returned if a service overrides the default limit. This value provides the number of remaining collection requests (list resources). |
+| x-ms-ratelimit-remaining-tenant-resource-requests |Tenant scoped resource type requests remaining.<br /><br />This header is only added for requests at tenant level, and only if a service overrides the default limit. Resource Manager adds this value instead of the tenant reads or writes. |
+| x-ms-ratelimit-remaining-tenant-resource-entities-read |Tenant scoped resource type collection requests remaining.<br /><br />This header is only added for requests at tenant level, and only if a service overrides the default limit. |
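+
+For example, a quick way to inspect these headers is to call a subscription-scoped endpoint with `curl`; a sketch, assuming a valid bearer token in `TOKEN` and a subscription ID in `SUB_ID`:
+
+```bash
+# Dump response headers to stdout, discard the body, and filter for rate-limit headers.
+curl -s -D - -o /dev/null \
+  -H "Authorization: Bearer $TOKEN" \
+  "https://management.azure.com/subscriptions/$SUB_ID/resourceGroups?api-version=2022-01-01" \
+  | grep -i '^x-ms-ratelimit'
+```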
The resource provider can also return response headers with information about remaining requests. For information about response headers returned by the Compute resource provider, see [Call rate informational response headers](/troubleshoot/azure/virtual-machines/troubleshooting-throttling-errors#call-rate-informational-response-headers).
container-apps Deploy Artifact https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/deploy-artifact.md
Title: 'Deploy an artifact file to Azure Container Apps'
+ Title: Deploy an artifact file to Azure Container Apps
description: Use a prebuilt artifact file to deploy to Azure Container Apps. Previously updated : 11/15/2023 Last updated : 02/27/2024

# Quickstart: Deploy an artifact file to Azure Container Apps
-This article demonstrates how to deploy a container app from a prebuilt artifact file.
-
-The following example deploys a Java application using a JAR file, which includes a Java-specific manifest file.
-
-In this quickstart, you create a backend web API service that returns a static collection of music albums. After completing this quickstart, you can continue to [Tutorial: Communication between microservices in Azure Container Apps](communicate-between-microservices.md) to learn how to deploy a front end application that calls the API.
+In this quickstart, you learn to deploy a container app from a prebuilt artifact file. The example in this article deploys a Java application using a JAR file, which includes a Java-specific manifest file. Your job is to create a backend web API service that returns a static collection of music albums. After completing this quickstart, you can continue to [Communication between microservices](communicate-between-microservices.md) to learn how to deploy a front end application that calls the API.
The following screenshot shows the output from the album API service you deploy.
The following screenshot shows the output from the album API service you deploy.
| GitHub Account | Get one for [free](https://github.com/join). |
| git | [Install git](https://git-scm.com/downloads) |
| Azure CLI | Install the [Azure CLI](/cli/azure/install-azure-cli).|
-| Java | Install the [JDK](/java/openjdk/install), recommend 17 or later|
+| Java | Install the [JDK](/java/openjdk/install); version 17 or later is recommended.|
| Maven | Install [Maven](https://maven.apache.org/download.cgi).|

## Setup
az upgrade
-Next, install or update the Azure Container Apps extension for the CLI.
+Next, install or update the Azure Container Apps extension for the CLI.
# [Bash](#tab/bash)
This command:
- Creates the Container Apps environment with a Log Analytics workspace
- Creates and deploys the container app using a public container image
-The `up` command uses the Docker file in the root of the repository to build the container image. The `EXPOSE` instruction in the Docker file defines the target port. A Docker file, however, isn't required to build a container app.
+The `up` command uses the Docker file in the root of the repository to build the container image. The `EXPOSE` instruction in the Docker file defines the target port. A Docker file, however, isn't required to build a container app.
> [!NOTE]
> When using `containerapp up` in combination with a Docker-less code base, use the `--location` parameter so that the application runs in a location other than US East.
az containerapp up `
## Verify deployment
-Copy the FQDN to a web browser. From your web browser, go to the `/albums` endpoint of the FQDN.
+Copy the FQDN to a web browser. From your web browser, go to the `/albums` endpoint of the FQDN.
:::image type="content" source="media/quickstart-code-to-cloud/azure-container-apps-album-api.png" alt-text="Screenshot of response from albums API endpoint.":::
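
You can also check the endpoint from the command line; a minimal sketch, assuming you replace `<FQDN>` with the value returned during deployment:

```bash
# Request the album list from the deployed API (placeholder FQDN).
curl "https://<FQDN>/albums"
```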
+## Deploy a WAR file
+
+You can also deploy your container app from a [WAR file](java-deploy-war-file.md).
+
## Clean up resources

If you're not going to continue to use this application, you can delete the Azure Container Apps instance and all the associated services by removing the resource group.
container-apps Java Build Environment Variables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/java-build-environment-variables.md
+
+ Title: Build environment variables for Java in Azure Container Apps
+description: Learn how to build Java images from source code by using environment variables.
++++ Last updated : 02/27/2024+++
+# Build environment variables for Java in Azure Container Apps
+
+Azure Container Apps uses [Buildpacks](https://buildpacks.io/) to automatically create a container image that allows you to deploy from your source code directly to the cloud. To take control of your build configuration, you can use environment variables to customize parts of your build like the JDK, Maven, and Tomcat. This article shows you how to configure the environment variables that control these automated builds.
+
+## Supported Java build environment variables
+
+### Configure JDK
+
+Container Apps uses [Microsoft Build of OpenJDK](https://www.microsoft.com/openjdk) to build source code and as the runtime environment. Four LTS JDK versions are supported: 8, 11, 17, and 21.
+
+- For source code build, the default version is JDK 17.
+
+- For a JAR file build, the JDK version is read from the JAR's `META-INF/MANIFEST.MF` file; the build falls back to the default JDK version 17 if the specified version isn't available. (A sketch for inspecting the manifest follows this list.)
+
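+A quick way to check which JDK version a JAR file declares is to print its manifest with standard tools; this is a sketch, and the exact attribute name (such as `Build-Jdk-Spec`) depends on your build tool:
+
+```bash
+# Print the manifest from a built JAR and filter for JDK-related attributes.
+# "target/app.jar" is a placeholder path.
+unzip -p target/app.jar META-INF/MANIFEST.MF | grep -i jdk
+```
+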
+Here's a listing of the environment variables used to configure JDK:
+
+| Environment variable | Description | Default |
+|--|--|--|
+| `BP_JVM_VERSION` | Controls the JVM version. | `17` |
+
+### Configure Maven
+
+Container Apps supports building Maven-based applications from source.
+
+Here's a listing of the environment variables used to configure Maven:
+
+| Build environment variable | Description | Default |
+|--|--|--|
+| `BP_MAVEN_VERSION` | Sets the major Maven version. Since Buildpacks only ships a single version of each supported line, updates to the buildpack can change the exact version of Maven installed. If you require a specific minor/patch version of Maven, use the Maven wrapper instead. | `3` |
+| `BP_MAVEN_BUILD_ARGUMENTS` | Defines the arguments passed to Maven. `--batch-mode` is prepended to the argument list in environments without a TTY. | `-Dmaven.test.skip=true --no-transfer-progress package` |
+| `BP_MAVEN_ADDITIONAL_BUILD_ARGUMENTS` | Defines extra arguments (for example, `-DskipJavadoc`) appended to `BP_MAVEN_BUILD_ARGUMENTS` and passed to Maven. | |
+| `BP_MAVEN_ACTIVE_PROFILES` | Comma-separated list of active profiles passed to Maven. | |
+| `BP_MAVEN_BUILT_MODULE` | Designates the module that contains the application artifact. By default, the build looks in the root module. | |
+| `BP_MAVEN_BUILT_ARTIFACT` | Location of the built application artifact. This value supersedes the `BP_MAVEN_BUILT_MODULE` variable. You can match a single file, multiple files, or a directory through one or more space-separated patterns. | `target/*.[ejw]ar` |
+| `BP_MAVEN_POM_FILE` | Specifies a custom location for the project's *pom.xml* file. This value is relative to the root of the project (for example, */workspace*). | `pom.xml` |
+| `BP_MAVEN_DAEMON_ENABLED` | Triggers the installation and configuration of Apache `maven-mvnd` instead of Maven. Set this value to `true` if you want to use the Maven Daemon. | `false` |
+| `BP_MAVEN_SETTINGS_PATH` | Specifies a custom location for Maven's *settings.xml* file. | |
+| `BP_INCLUDE_FILES` | Colon-separated list of glob patterns to match source files. Any matched file is retained in the final image. | |
+| `BP_EXCLUDE_FILES` | Colon-separated list of glob patterns to match source files. Any matched file is removed from the final image. Include patterns are applied first, and you can use exclude patterns to reduce the files included in the build. | |
+| `BP_JAVA_INSTALL_NODE` | Controls whether a separate Buildpack installs Yarn and Node.js. If set to `true`, the Buildpack checks the app root or the path set by `BP_NODE_PROJECT_PATH` for either a *yarn.lock* file, which requires the installation of both Yarn and Node.js, or a *package.json* file, which requires only Node.js. | `false` |
+| `BP_NODE_PROJECT_PATH` | Directs the build to a project subdirectory in which to look for *package.json* and *yarn.lock* files. | |
+
+### Configure Tomcat
+
+Container Apps supports running WAR files in the Tomcat application server.
+
+Here's a listing of the environment variables used to configure Tomcat:
+
+| Build environment variable | Description | Default |
+|--|--|--|
+| `BP_TOMCAT_CONTEXT_PATH` | The context path where the application is mounted. | Defaults to empty (`ROOT`) |
+| `BP_TOMCAT_EXT_CONF_SHA256` | The SHA256 hash of the external configuration package. | |
+| `BP_TOMCAT_ENV_PROPERTY_SOURCE_DISABLED` | When set to `true`, the Buildpack doesn't configure `org.apache.tomcat.util.digester.EnvironmentPropertySource`. This configuration option is added to support loading configuration from environment variables and referencing them in Tomcat configuration files. | |
+| `BP_TOMCAT_EXT_CONF_STRIP` | The number of directory levels to strip from the external configuration package. | `0` |
+| `BP_TOMCAT_EXT_CONF_URI` | The download URI of the external configuration package. | |
+| `BP_TOMCAT_EXT_CONF_VERSION` | The version of the external configuration package. | |
+| `BP_TOMCAT_VERSION` | Used to configure a specific Tomcat version. Supported Tomcat versions include 8, 9, and 10. | `9.*` |
+
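+For example, a hypothetical deployment that pins Tomcat 10 and mounts the app at a custom context path might look like the following sketch (all names are placeholders):
+
+```azurecli
+az containerapp up \
+  --name <CONTAINER_APP_NAME> \
+  --source <SOURCE_DIRECTORY> \
+  --build-env-vars BP_TOMCAT_VERSION=10.* BP_TOMCAT_CONTEXT_PATH=/petclinic \
+  --resource-group <RESOURCE_GROUP_NAME> \
+  --environment <ENVIRONMENT_NAME>
+```
+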
+### Configure Cloud Build Service
+
+Here's a listing of the environment variables used to configure a Cloud Build Service:
+
+| Build environment variable | Description | Default |
+|--|--|--|
+| `ORYX_DISABLE_TELEMETRY` | Controls whether or not to disable telemetry collection. | `false` |
+
+## How to configure Java build environment variables
+
+You can configure Java build environment variables when you deploy Java application source code via the `az containerapp up`, `az containerapp create`, or `az containerapp update` CLI commands:
+
+```azurecli
+az containerapp up \
+ --name <CONTAINER_APP_NAME> \
+ --source <SOURCE_DIRECTORY> \
+ --build-env-vars <NAME=VALUE NAME=VALUE> \
+ --resource-group <RESOURCE_GROUP_NAME> \
+ --environment <ENVIRONMENT_NAME>
+```
+
+The `--build-env-vars` argument is a space-separated list of environment variables for the build, in `key=value` format. Here's an example list you can pass in as variables:
+
+```bash
+BP_JVM_VERSION=21 BP_MAVEN_VERSION=4 "BP_MAVEN_BUILD_ARGUMENTS=-Dmaven.test.skip=true --no-transfer-progress package"
+```
+
+You can also configure the Java build environment variables when you [set up GitHub Actions with Azure CLI in Azure Container Apps](github-actions-cli.md).
+
+```azurecli
+az containerapp github-action add \
+ --repo-url "https://github.com/<OWNER>/<REPOSITORY_NAME>" \
+ --build-env-vars <NAME=VALUE NAME=VALUE> \
+ --branch <BRANCH_NAME> \
+ --name <CONTAINER_APP_NAME> \
+ --resource-group <RESOURCE_GROUP> \
+ --registry-url <URL_TO_CONTAINER_REGISTRY> \
+ --registry-username <REGISTRY_USER_NAME> \
+ --registry-password <REGISTRY_PASSWORD> \
+ --service-principal-client-id <appId> \
+ --service-principal-client-secret <password> \
+ --service-principal-tenant-id <tenant> \
+ --token <YOUR_GITHUB_PERSONAL_ACCESS_TOKEN>
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Build and deploy from a repository](quickstart-code-to-cloud.md)
container-apps Java Deploy War File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/java-deploy-war-file.md
+
+ Title: Deploy a WAR file on Tomcat in Azure Container Apps
+description: Learn how to deploy a WAR file on Tomcat in Azure Container Apps.
++++ Last updated : 02/27/2024+++
+# Tutorial: Deploy a WAR file on Tomcat in Azure Container Apps
+
+Rather than manually creating a Dockerfile and directly using a container registry, you can deploy your Java application directly from a web application archive (WAR) file. This article demonstrates how to deploy a Java application on Tomcat using a WAR file to Azure Container Apps.
+
+By the end of this tutorial, you'll have deployed an application on Container Apps that displays the home page of the Spring PetClinic sample application.
++
+> [!NOTE]
+> If necessary, you can specify the Tomcat version in the build environment variables.
+
+## Prerequisites
+
+| Requirement | Instructions |
+|--|--|
+| Azure account | If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).<br><br>You need the *Contributor* or *Owner* permission on the Azure subscription to proceed. <br><br>Refer to [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md?tabs=current) for details. |
+| GitHub Account | Get one for [free](https://github.com/join). |
+| git | [Install git](https://git-scm.com/downloads) |
+| Azure CLI | Install the [Azure CLI](/cli/azure/install-azure-cli).|
+| Java | Install the [Java Development Kit](/java/openjdk/install). Use version 17 or later. |
+| Maven | Install [Maven](https://maven.apache.org/download.cgi).|
+
+## Deploy a WAR file on Tomcat
+
+1. Get the sample application.
+
+ Clone the Spring PetClinic sample application to your machine.
+
+ ```bash
+ git clone https://github.com/spring-petclinic/spring-framework-petclinic.git
+ ```
+
+1. Build the WAR package.
+
+ First, change into the *spring-framework-petclinic* folder.
+
+ ```bash
+ cd spring-framework-petclinic
+ ```
+
+ Then, clean the Maven build area, compile the project's code, and create a WAR file, all while skipping any tests.
+
+ ```bash
+ mvn clean package -DskipTests
+ ```
+
+ After you execute the build command, a file named *petclinic.war* is generated in the */target* folder.
+
+1. Deploy the WAR package to Azure Container Apps.
+
+ Now you can deploy your WAR file with the `az containerapp up` CLI command.
+
+ ```azurecli
+ az containerapp up \
+ --name <YOUR_CONTAINER_APP_NAME> \
+ --resource-group <YOUR_RESOURCE_GROUP> \
+ --subscription <YOUR_SUBSCRIPTION> \
+ --location <LOCATION> \
+ --environment <YOUR_ENVIRONMENT_NAME> \
+ --artifact <YOUR_WAR_FILE_PATH> \
+ --build-env-vars BP_TOMCAT_VERSION=10.* \
+ --ingress external \
+ --target-port 8080 \
+ --query properties.configuration.ingress.fqdn
+ ```
+
+ > [!NOTE]
> The default Tomcat version is 9. If you need to change the Tomcat version for compatibility with your application, you can use the `--build-env-vars BP_TOMCAT_VERSION=<YOUR_TOMCAT_VERSION>` argument to adjust the version number.
+
+ In this example, the Tomcat version is set to `10` (including any minor versions) by setting the `BP_TOMCAT_VERSION=10.*` environment variable.
+
+ You can find more applicable build environment variables in [Java build environment variables](java-build-environment-variables.md).
+
+1. Verify the app status.
+
+ In this example, the `containerapp up` command includes the `--query properties.configuration.ingress.fqdn` argument, which returns the fully qualified domain name (FQDN), also known as the app's URL.
+
+ View the application by pasting this URL into a browser. Your app should resemble the following screenshot.
+
+ :::image type="content" source="media/java-deploy-war-file/azure-container-apps-petclinic-warfile.png" alt-text="Screenshot of petclinic application.":::
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Java build environment variables](java-build-environment-variables.md)
container-apps Java Memory Fit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/java-memory-fit.md
+
+ Title: How to use memory efficiently for Java apps in Azure Container Apps
+description: Optimization of default configurations to enhance Java application performance and efficiency.
+++++ Last updated : 02/27/2024+++
+# Use memory efficiently for Java apps in Azure Container Apps
+
+The Java Virtual Machine (JVM) uses memory conservatively because it assumes OS memory must be shared among multiple applications. However, your container app can optimize memory usage and make the maximum amount of memory possible available to your application. This memory optimization is known as Java automatic memory fitting. When memory fitting is enabled, Java application performance typically improves by 10% to 20% without any code changes.
+
+Azure Container Apps provides automatic memory fitting under the following circumstances:
+
+- A single Java application is running in a container.
+- Your application is deployed from source code or a JAR file.
+
+Automatic memory fitting is enabled by default, but you can disable it manually.
+
+## Disable memory fitting
+
+Automatic memory fitting is helpful in most scenarios, but it might not be ideal for all situations. You can disable it manually, or it's disabled automatically under certain conditions.
+
+### Manual disable
+
+To disable memory fitting when you create your container app, set the environment variable `BP_JVM_FIT` to `false`.
+
+The following examples show you how to disable memory fitting with the `create`, `up`, and `update` commands.
+
+# [create](#tab/create)
+
+```azurecli-interactive
+az containerapp create \
+ --name <CONTAINER_APP_NAME> \
+ --resource-group <RESOURCE_GROUP> \
+ --image <CONTAINER_IMAGE_LOCATION> \
+ --environment <ENVIRONMENT_NAME> \
+ --env-vars BP_JVM_FIT="false"
+```
+
+# [up](#tab/up)
+
+```azurecli-interactive
+az containerapp up \
+ --name <CONTAINER_APP_NAME> \
+ --resource-group <RESOURCE_GROUP> \
+ --image <CONTAINER_IMAGE_LOCATION> \
+ --environment <ENVIRONMENT_NAME> \
+ --env-vars BP_JVM_FIT="false"
+```
+
+# [update](#tab/update)
+
+```azurecli-interactive
+az containerapp update \
+ --name <CONTAINER_APP_NAME> \
+ --resource-group <RESOURCE_GROUP> \
+ --image <CONTAINER_IMAGE_LOCATION> \
+ --set-env-vars BP_JVM_FIT="false"
+```
+
+> [!NOTE]
+> The container app restarts when you run the `update` command.
+++
+To verify that memory fitting is disabled, check your logs for the following message:
+
+> Disabling jvm memory fitting, reason: manually disabled
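+
+One way to check is to stream the console logs with the Azure CLI. The placeholder names below follow the earlier examples; `--follow` streams log entries as they arrive.
+
+```azurecli-interactive
+az containerapp logs show \
+  --name <CONTAINER_APP_NAME> \
+  --resource-group <RESOURCE_GROUP> \
+  --follow
+```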
+
+### Automatic disable
+
+Memory fitting is automatically disabled when any of the following conditions are met:
+
+- **Limited container memory**: Container memory is less than 1 GB.
+
+- **Explicitly set memory options**: When one or more memory settings are specified in environment variables through `JAVA_TOOL_OPTIONS`. Memory setting options include the following values:
+
+ - `-XX:MaxRAMPercentage`
+ - `-XX:MinRAMPercentage`
+ - `-XX:InitialRAMPercentage`
+ - `-XX:MaxMetaspaceSize`
+ - `-XX:MetaspaceSize`
+ - `-XX:ReservedCodeCacheSize`
+ - `-XX:MaxDirectMemorySize`
+ - `-Xmx`
+ - `-Xms`
+ - `-Xss`
+
+   For example, memory fitting is automatically disabled if you specify the maximum heap size in an environment variable, as shown in the following example:
+
+ ```azurecli-interactive
+ az containerapp update \
+ --name <CONTAINER_APP_NAME> \
+ --resource-group <RESOURCE_GROUP> \
+ --image <CONTAINER_IMAGE_LOCATION> \
+ --set-env-vars JAVA_TOOL_OPTIONS="-Xmx512m"
+ ```
+
+ With memory fitting disabled, you see the following message output to the log:
+
+ > Disabling jvm memory fitting, reason: use settings specified in
+ > JAVA_TOOL_OPTIONS=-Xmx512m instead
+ > Picked up JAVA_TOOL_OPTIONS: -Xmx512m
+
+- **Small non-heap memory size**: In rare cases, the calculated heap or nonheap size is too small (less than 200 MB).
+
+## Verify memory fit is enabled
+
+Inspect your [log stream](log-streaming.md) during start-up for a message that references *Calculated JVM Memory Configuration*.
+
+Here's an example message output during start-up.
+
+> Calculated JVM Memory Configuration: -XX:MaxDirectMemorySize=10M
+> -Xmx1498277K -XX:MaxMetaspaceSize=86874K -XX:ReservedCodeCacheSize=240M
+> -Xss1M (Total Memory: 2G, Thread Count: 250,
+> Loaded Class Count: 12924, Headroom: 0%)
+>
+> Picked up JAVA_TOOL_OPTIONS: -XX:MaxDirectMemorySize=10M
+> -Xmx1498277K -XX:MaxMetaspaceSize=86874K
+> -XX:ReservedCodeCacheSize=240M -Xss1M
+
+## Runtime configuration
+
+You can set environment variables to affect the memory fitting behavior.
+
+| Variable | Unit | Example | Description |
+|--|--|--|--|
+| `BPL_JVM_HEAD_ROOM` | Percentage | `BPL_JVM_HEAD_ROOM=5` | Leave memory space for the system based on the given percentage. |
+| `BPL_JVM_THREAD_COUNT` | Number | `BPL_JVM_THREAD_COUNT=200` | The estimated maximum number of threads. |
+| `BPL_JVM_CLASS_ADJUSTMENT` | Number<br> Percentage | `BPL_JVM_CLASS_ADJUSTMENT=10000`<br>`BPL_JVM_CLASS_ADJUSTMENT="10%"` | Adjust JVM class count by explicit value or percentage. |
+
+> [!NOTE]
+> Changing these variables does not disable automatic memory fitting.
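+
+For example, the following sketch sets the estimated thread count from the table on an existing app. The placeholder names match the earlier examples.
+
+```azurecli-interactive
+az containerapp update \
+  --name <CONTAINER_APP_NAME> \
+  --resource-group <RESOURCE_GROUP> \
+  --set-env-vars BPL_JVM_THREAD_COUNT=200
+```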
+
+## Out of memory warning
+
+If you decide to configure the memory settings yourself, you run the risk of encountering an out-of-memory warning.
+
+Here are some possible reasons why your container could run out of memory:
+
+- Heap memory is greater than the total available memory.
+
+- Nonheap memory is greater than the total available memory.
+
+- Heap + nonheap memory is greater than the total available memory.
+
+If your container runs out of memory, you encounter the following warning:
+
+> OOM Warning: heap memory 1200M is greater than 1G available for allocation (-Xmx1200M)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Monitor logs with Log Analytics](log-monitoring.md)
container-apps Java Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/java-overview.md
+
+ Title: Java on Azure Container Apps overview
+description: Learn about the tools and resources needed to run Java applications on Azure Container Apps.
++++ Last updated : 02/27/2024+++
+# Java on Azure Container Apps overview
+
+Azure Container Apps can run any containerized Java application in the cloud while giving you flexible options for how you deploy your applications.
+
+When you use Container Apps for your containerized Java applications, you get:
+
+- **Cost effective scaling**: When you use the [Consumption plan](plans.md#consumption), your Java apps can scale to zero. Scaling in when there's little demand for your app automatically drives costs down for your projects.
+
+- **Deployment options**: Azure Container Apps integrates with [Buildpacks](https://buildpacks.io), which allows you to deploy directly from a Maven build, via artifact files, or with your own Dockerfile.
+
+- **Automatic memory fitting**: Container Apps optimizes how the Java Virtual Machine (JVM) [manages memory](java-memory-fit.md), making the most memory possible available to your Java applications.
+
+- **Build environment variables**: You can configure [custom key-value pairs](java-build-environment-variables.md) to control the Java image build from source code.
+
+- **WAR deployment**: You can deploy your container app directly from a [WAR file](java-deploy-war-file.md).
+
+This article details the information you need to know as you build Java applications on Azure Container Apps.
+
+## Deployment types
+
+Running containerized applications usually means you need to create a Dockerfile for your application, but running Java applications on Container Apps gives you a few options.
+
+| Type | Description | Uses Buildpacks | Uses a Dockerfile |
+|--|--|--|--|
+| Artifact build | You can deploy directly to Container Apps from your source code. | Yes | No |
+| Maven build | You can create a Maven build to deploy to Container Apps. | Yes | No |
+| Dockerfile | You can create your Dockerfile manually and take full control over your deployment. | No | Yes |
+
+> [!NOTE]
+> The Buildpacks deployments support JDK versions 8, 11, 17, and 21.
+
+## Application types
+
+Different application types are implemented either as an individual container app or as a [Container Apps job](jobs.md). Use the following table to help you decide which application type is best for your scenario.
+
+Examples listed in this table aren't meant to be exhaustive, but to help you best understand the intent of different application types.
+
+| Type | Examples | Implement as... |
+|--|--|--|
+| Web applications and API endpoints | Spring Boot, Quarkus, Apache Tomcat, and Jetty | An individual container app |
+| Console applications, scheduled tasks, task runners, batch jobs | SparkJobs, ETL tasks, Spring Batch Job, Jenkins pipeline job | A Container Apps job |
+
+## Debugging
+
+As you debug your Java application on Container Apps, be sure to inspect the Java [in-process agent](/azure/spring-apps/enterprise/how-to-application-insights?pivots=sc-enterprise) for log stream and console debugging messages.
+
+## Troubleshooting
+
+Keep the following items in mind as you develop your Java applications:
+
+- **Default resources**: By default, an app has half a CPU and 1 GB of memory available.
+
+- **Stateless processes**: As your container app scales in and out, new processes are created and shut down. Make sure to plan ahead so that you write data to shared storage such as databases and file system shares. Don't expect any files written directly to the container file system to be available to any other container.
+
+- **Scale to zero is the default**: If you need to ensure one or more instances of your application are continuously running, make sure you define a [scale rule](scale-app.md) to best meet your needs.
+
+- **Unexpected behavior**: If your container app fails to build, start, or run, verify that the artifact path is set correctly in your container.
+
+- **Buildpack support issues**: If your Buildpack doesn't support dependencies or the version of Java you require, create your own Dockerfile to deploy your app. You can view a [sample Dockerfile](https://github.com/Azure-Samples/containerapps-albumapi-java/blob/main/Dockerfile) for reference.
+
+- **SIGTERM and SIGINT signals**: By default, the JVM handles `SIGTERM` and `SIGINT` signals and doesn't pass them to the application unless you intercept these signals and handle them in your application accordingly. Container Apps uses both `SIGTERM` and `SIGINT` for process control. If you don't capture these signals, and your application terminates unexpectedly, you might lose these signals unless you persist them to storage.
+
+- **Access to container images**: If you use artifact or source code deployment in combination with the default registry, you don't have direct access to your container images.
+
+## Monitoring
+
+All the [standard observability tools](observability.md) work with your Java application. As you build your Java applications to run on Container Apps, keep in mind the following items:
+
+- **Logging**: Send application and error messages to `stdout` or `stderr` so they can surface in the log stream. Avoid logging directly to the container's filesystem, as is common when using popular logging services.
+
+- **Performance monitoring configuration**: Deploy performance monitoring services as a separate container in your Container Apps environment so they can directly access your application.
+
+## Scaling
+
+If you need to make sure requests from your front-end applications reach the same server, or your front-end app is split between multiple containers, make sure to enable [sticky sessions](sticky-sessions.md).
+
+## Security
+
+The Container Apps runtime terminates SSL for you inside your Container Apps environment.
+
+## Memory management
+
+To help optimize memory management in your Java application, you can ensure [JVM memory fitting](java-memory-fit.md) is enabled in your app.
+
+Resources are allocated in pairs of CPU cores and memory, measured in gibibytes (Gi). The following table shows the range of resources available to your container app.
+
+| Threshold | CPU cores | Memory in gibibytes (Gi) |
+||||
+| Minimum | 0.25 | 0.5 |
+| Maximum | 4 | 8 |
+
+Cores are available in 0.25 core increments, with memory available at a 2:1 ratio. For instance, if you require 1.25 cores, you have 2.5 Gi of memory available to your container app.
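+
+As a sketch of how you request one of these pairs, the following command asks for the 1.25 core and 2.5 Gi combination described above; the placeholder names follow the conventions used elsewhere in this article.
+
+```azurecli-interactive
+az containerapp update \
+  --name <CONTAINER_APP_NAME> \
+  --resource-group <RESOURCE_GROUP> \
+  --cpu 1.25 \
+  --memory 2.5Gi
+```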
+
+> [!NOTE]
+> For apps using JDK versions 9 and lower, make sure to define custom JVM memory settings to match the memory allocation in Azure Container Apps.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Configure build environment variables](java-build-environment-variables.md)
container-apps Quickstart Code To Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quickstart-code-to-cloud.md
- devx-track-azurecli - ignite-2023 Previously updated : 01/26/2024 Last updated : 01/27/2024 zone_pivot_groups: container-apps-code-to-cloud-segmemts
Extract the download and change into the *containerapps-albumapi-csharp-buildpac
Extract the download and change into the *containerapps-albumapi-java-buildpack/src* folder. > [!NOTE]
-> The Java Builpack currently supports the [Maven tool](https://maven.apache.org/what-is-maven.html) to build your application.
-
+> The Java Buildpack uses [Maven](https://maven.apache.org/what-is-maven.html) with default settings to build your application. Alternatively, you can [use the `--build-env-vars` parameter to configure the image build from source code](java-build-environment-variables.md).
# [JavaScript](#tab/javascript)
Extract the download and change into the *containerapps-albumapi-java-buildpack/
Extract the download and change into the *containerapps-albumapi-javascript-buildpack/src* folder. - # [Python](#tab/python) [Download the source code](https://codeload.github.com/azure-samples/containerapps-albumapi-python/zip/refs/heads/buildpack) to your machine. Extract the download and change into the *containerapps-albumapi-python-buildpack/src* folder. - # [Go](#tab/go) Azure Container Apps cloud build doesn't currently support Buildpacks for Go.
cosmos-db Sdk Observability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-observability.md
Distributed tracing is available in the following SDKs:
|SDK |Supported version |Notes | |-|||
-|.NET v3 SDK |[>= `3.36.0`](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/3.36.0) |This feature is available in both preview and non-preview versions. For non-preview versions it's off by default. You can enable tracing by setting `IsDistributedTracingEnabled = false` in `CosmosClientOptions.CosmosClientTelemetryOptions`. |
-|.NET v3 SDK preview |[>= `3.33.0-preview`](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/3.33.0-preview) |This feature is available in both preview and non-preview versions. For preview versions it's on by default. You can disable tracing by setting `IsDistributedTracingEnabled = true` in `CosmosClientOptions.CosmosClientTelemetryOptions`. |
+|.NET v3 SDK |[>= `3.36.0`](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/3.36.0) |This feature is available in both preview and non-preview versions. For non-preview versions, it's off by default. You can enable tracing by setting `DisableDistributedTracing = false` in `CosmosClientOptions.CosmosClientTelemetryOptions`. |
+|.NET v3 SDK preview |[>= `3.33.0-preview`](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/3.33.0-preview) |This feature is available in both preview and non-preview versions. For preview versions, it's on by default. You can disable tracing by setting `DisableDistributedTracing = true` in `CosmosClientOptions.CosmosClientTelemetryOptions`. |
|Java v4 SDK |[>= `4.43.0`](https://mvnrepository.com/artifact/com.azure/azure-cosmos/4.43.0) | | ## Trace attributes
-Azure Cosmos DB traces follow the [OpenTelemetry database specification](https://github.com/open-telemetry/opentelemetry-specification) and also provide several custom attributes. You may see different attributes depending on the operation of your request, and these attributes are core attributes for all requests.
+Azure Cosmos DB traces follow the [OpenTelemetry database specification](https://github.com/open-telemetry/opentelemetry-specification) and also provide several custom attributes. You can see different attributes depending on the operation of your request, and these attributes are core attributes for all requests.
|Attribute |Type |Description | |-|--||
Azure Cosmos DB traces follow the [OpenTelemetry database specification](https:/
### Gather diagnostics
-If you've configured logs in your trace provider, you can automatically get [diagnostics](./troubleshoot-dotnet-sdk.md#capture-diagnostics) for Azure Cosmos DB requests that failed or had high latency. These logs can help you diagnose failed and slow requests without requiring any custom code to capture them.
+If you configured logs in your trace provider, you can automatically get [diagnostics](./troubleshoot-dotnet-sdk.md#capture-diagnostics) for Azure Cosmos DB requests that failed or had high latency. These logs can help you diagnose failed and slow requests without requiring any custom code to capture them.
### [.NET](#tab/dotnet)
-In addition to getting diagnostic logs for failed requests, you can configure different latency thresholds for when to collect diagnostics from successful requests. The default values are 100 ms for point operations and 500 ms for non point operations and can be adjusted through client options.
+In addition to getting diagnostic logs for failed requests, you can configure different latency thresholds for when to collect diagnostics from successful requests. The default values are 100 ms for point operations and 500 ms for non point operations. These thresholds can be adjusted through client options.
```csharp CosmosClientOptions options = new CosmosClientOptions()
In addition to getting diagnostic logs for failed requests, you can configure di
## Configure OpenTelemetry
-To use OpenTelemetry with the Azure Cosmos DB SDKs, add the `Azure.Cosmos.Operation` source to your trace provider. OpenTelemetry is compatible with many exporters that can ingest your data. The following sample uses the `Azure Monitor OpenTelemetry Exporter`, but you can choose to configure any exporter you wish. Depending on your chosen exporter, you may see a delay ingesting data of up to a few minutes.
+To use OpenTelemetry with the Azure Cosmos DB SDKs, add the `Azure.Cosmos.Operation` source to your trace provider. OpenTelemetry is compatible with many exporters that can ingest your data. The following sample uses the `Azure Monitor OpenTelemetry Exporter`, but you can choose to configure any exporter you wish. Depending on your chosen exporter, you might see a delay ingesting data of up to a few minutes.
> [!TIP] > If you use the `Azure.Monitor.OpenTelemetry.Exporter` package, ensure you're using version >= `1.0.0-beta.11`.
This sample shows how to configure OpenTelemetry for a .NET console app. See the
## Configure the Application Insights SDK
-There are many different ways to configure Application Insights depending on the language your application is written in and your compute environment. For more information, see the [Application Insights documentation](../../azure-monitor/app/app-insights-overview.md#how-do-i-use-application-insights). Ingestion of data into Application Insights may take up to a few minutes.
+There are many different ways to configure Application Insights depending on the language your application is written in and your compute environment. For more information, see the [Application Insights documentation](../../azure-monitor/app/app-insights-overview.md#how-do-i-use-application-insights). Ingestion of data into Application Insights can take up to a few minutes.
> [!NOTE] > Use version >= `2.22.0-beta2` of the Application Insights package for your target .NET environment.
defender-for-cloud Defender For Apis Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-apis-deploy.md
Title: Protect your APIs with Defender for APIs
-description: Learn about deploying the Defender for APIs plan in Defender for Cloud
+description: Learn about deploying the Defender for APIs plan in Defender for Cloud.
Previously updated : 12/03/2023 Last updated : 03/03/2024 # Protect your APIs with Defender for APIs
Defender for APIs in Microsoft Defender for Cloud offers full lifecycle protecti
Defender for APIs helps you to gain visibility into business-critical APIs. You can investigate and improve your API security posture, prioritize vulnerability fixes, and quickly detect active real-time threats.
+This article describes how to enable and onboard the Defender for APIs plan in the Defender for Cloud portal. Alternately, you can [enable Defender for APIs within an API Management instance](../api-management/protect-with-defender-for-apis.md) in the Azure portal.
+
+Learn more about the [Microsoft Defender for APIs](defender-for-apis-introduction.md) plan in the Microsoft Defender for Cloud.
Learn more about [Defender for APIs](defender-for-apis-introduction.md). ## Prerequisites
Learn more about [Defender for APIs](defender-for-apis-introduction.md).
- Ensure that APIs you want to secure are published in [Azure API management](/azure/api-management/api-management-key-concepts). Follow [these instructions](/azure/api-management/get-started-create-service-instance) to set up Azure API Management.
-> [!NOTE]
-> This article describes how to enable and onboard the Defender for APIs plan in the Defender for Cloud portal. Alternately, you can [enable Defender for APIs within an API Management instance](../api-management/protect-with-defender-for-apis.md) in the Azure portal.
+- You must select a plan that grants entitlement appropriate for the API traffic volume in your subscription to receive optimal pricing. By default, subscriptions are opted into "Plan 1", which can lead to unexpected overages if your subscription has API traffic higher than the [one million API calls entitlement](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/SecurityMenuBlade/~/18).
## Enable the Defender for APIs plan
+When selecting a plan, consider these points:
+
+- Defender for APIs protects only the APIs that are onboarded to Defender for APIs. This means you can activate the plan at the subscription level, and then complete the second step of onboarding by fixing the onboarding recommendation. For more information about onboarding, see the [onboarding guide](defender-for-apis-deploy.md#enable-the-defender-for-apis-plan).
+- Defender for APIs has five pricing plans, each with a different entitlement limit and monthly fee. The billing is done at the subscription level.
+- Billing is applied to the entire subscription based on the total amount of API traffic monitored over the month for the subscription.
+- The API traffic counted towards the billing is reset to 0 at the start of each month (every billing cycle).
+- Overages are computed on API traffic that exceeds the entitlement limit of the selected plan during the month for your entire subscription.
+
+To select the best plan for your subscription from the Microsoft Defender for Cloud [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/), follow these steps and choose the plan that matches your subscription's API traffic requirements:
+
+> [!NOTE]
+> The Defender for Cloud pricing page will be updated with the pricing information and pricing calculators by end of March 2024. In the meantime, use this document to select the correct Defender for APIs entitlements and enable the plan.
+ 1. Sign into the [portal](https://portal.azure.com/), and in Defender for Cloud, select **Environment settings**. 1. Select the subscription that contains the managed APIs that you want to protect.
-1. In the **APIs** plan, select **On**. Then select **Save**:
+ :::image type="content" source="media/defender-for-apis-entitlement-plans/select-environment-settings.png" alt-text="Screenshot that shows where to select Environment settings." lightbox="media/defender-for-apis-entitlement-plans/select-environment-settings.png":::
+
+1. Select **Details** under the pricing column for the APIs plan.
+
+ :::image type="content" source="media/defender-for-apis-entitlement-plans/select-api-details.png" alt-text="Screenshot that shows where to select API details." lightbox="media/defender-for-apis-entitlement-plans/select-api-details.png":::
+
+1. Select the plan that is suitable for your subscription.
+1. Select **Save**.
+
+## Select the optimal plan based on historical Azure API Management API traffic usage
+
+You must select a plan that grants entitlement appropriate for the API traffic volume in your subscription to receive optimal pricing. By default, subscriptions are opted into **Plan 1**, which can lead to unexpected overages if your subscription has API traffic higher than the [one million API calls entitlement](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/SecurityMenuBlade/~/18).
+
+**To estimate the monthly API traffic in Azure API Management:**
+
+1. Navigate to the Azure API Management portal and select **Metrics** under the Monitoring menu bar item.
+
+ :::image type="content" source="media/defender-for-apis-entitlement-plans/select-metrics.png" alt-text="Screenshot that shows where to select metrics." lightbox="media/defender-for-apis-entitlement-plans/select-metrics.png":::
+
+1. Select the time range as **Last 30 days**.
+1. Select and set the following parameters:
+
+ 1. Scope: **Azure API Management Service Name**
+ 1. Metric Namespace: **API Management service standard metrics**
+ 1. Metric = **Requests**
+ 1. Aggregation = **Sum**
+
+1. After you set these parameters, the query runs automatically, and the total number of requests for the past 30 days appears at the bottom of the screen. In the screenshot example, the query returns a total of 414 requests.
+
+ :::image type="content" source="media/defender-for-apis-entitlement-plans/metrics-results.png" alt-text="Screenshot that shows metrics results." lightbox="media/defender-for-apis-entitlement-plans/metrics-results.png":::
- :::image type="content" source="media/defender-for-apis-deploy/enable-plan.png" alt-text="Screenshot that shows how to turn on the Defender for APIs plan in the portal." lightbox="media/defender-for-apis-deploy/enable-plan.png":::
+ > [!NOTE]
+ > These instructions are for calculating the usage per Azure API management service. To calculate the estimated traffic usage for *all* API management services within the Azure subscription, change the **Scope** parameter to each Azure API management service within the Azure subscription, re-run the query, and sum the query results.
-1. Select **Save**.
+If you don't have access to run the metrics query, reach out to your internal Azure API Management administrator or your Microsoft account manager.
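+
+If you prefer to estimate the traffic from the command line, the following Azure CLI sketch runs an equivalent query. The resource ID placeholder and the 30-day window are assumptions that mirror the portal steps above; summing the returned values gives the total request count.
+
+```azurecli
+az monitor metrics list \
+  --resource <APIM_RESOURCE_ID> \
+  --metric Requests \
+  --aggregation Total \
+  --interval PT24H \
+  --offset 30d \
+  --query "value[0].timeseries[0].data[].total" \
+  --output tsv
+```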
> [!NOTE] > After enabling Defender for APIs, onboarded APIs take up to 50 minutes to appear in the **Recommendations** tab. Security insights are available in the **Workload protections** > **API security** dashboard within 40 minutes of onboarding.
Learn more about [Defender for APIs](defender-for-apis-introduction.md).
1. In the Defender for Cloud portal, select **Recommendations**. 1. Search for *Defender for APIs*.
-1. Under **Enable enhanced security features**, select the security recommendation **Azure API Management APIs should be onboarded to Defender for APIs**:
+1. Under **Enable enhanced security features** select the security recommendation **Azure API Management APIs should be onboarded to Defender for APIs**:
:::image type="content" source="media/defender-for-apis-deploy/api-recommendations.png" alt-text="Screenshot that shows how to turn on the Defender for APIs plan from the recommendation." lightbox="media/defender-for-apis-deploy/api-recommendations.png":::
-1. In the recommendation page, you can review the recommendation severity, update interval, description, and remediation steps.
+1. In the recommendation page you can review the recommendation severity, update interval, description, and remediation steps.
1. Review the resources in scope for the recommendations: - **Unhealthy resources**: Resources that aren't onboarded to Defender for APIs. - **Healthy resources**: API resources that are onboarded to Defender for APIs. - **Not applicable resources**: API resources that aren't applicable for protection.
-1. In **Unhealthy resources**, select the APIs that you want to protect with Defender for APIs.
+1. In **Unhealthy resources** select the APIs that you want to protect with Defender for APIs.
1. Select **Fix**: :::image type="content" source="media/defender-for-apis-deploy/api-recommendation-details.png" alt-text="Screenshot that shows the recommendation details for turning on the plan." lightbox="media/defender-for-apis-deploy/api-recommendation-details.png":::
-1. In **Fixing resources**, review the selected APIs, and select **Fix resources**:
+1. In **Fixing resources** review the selected APIs and select **Fix resources**:
:::image type="content" source="media/defender-for-apis-deploy/fix-resources.png" alt-text="Screenshot that shows how to fix unhealthy resources." lightbox="media/defender-for-apis-deploy/fix-resources.png":::
You can also navigate to other collections to learn about what types of insights
## Next steps
-[Review](defender-for-apis-posture.md) API threats and security posture.
+- [Review](defender-for-apis-posture.md) API threats and security posture.
+- [Investigate API findings, recommendations, and alerts](defender-for-apis-posture.md).
defender-for-cloud Defender For Storage Malware Scan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-malware-scan.md
Learn more about [setting up logging for malware scanning](advanced-configuratio
Malware scanning is billed per GB scanned. To provide cost predictability, Malware Scanning supports setting a cap on the amount of GB scanned in a single month per storage account. > [!IMPORTANT]
-> Malware scanning in Defender for Storage is not included for free in the first 30 day trial and will be charged from the first day in accordance with the pricing scheme available on the Defender for Cloud [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
+> Malware scanning in Defender for Storage is not included for free in the first 30-day trial and will be charged from the first day in accordance with the pricing scheme available on the Defender for Cloud [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
-The "capping" mechanism is designed to set a monthly scanning limit, measured in gigabytes (GB), for each storage account, serving as an effective cost control. If a predefined scanning limit is established for a storage account in a single calendar month, the scanning operation would automatically halt once this threshold is reached (with up to a 20-GB deviation), and files wouldn't be scanned for malware. Updating the cap typically takes up to an hour to take effect.
+The "capping" mechanism is designed to set a monthly scanning limit, measured in gigabytes (GB), for each storage account, serving as an effective cost control. If a predefined scanning limit is established for a storage account in a single calendar month, the scanning operation would automatically halt once this threshold is reached (with up to a 20-GB deviation), and files wouldn't be scanned for malware. The cap is reset at the end of every month at midnight UTC. Updating the cap typically takes up to an hour to take effect.
By default, a limit of 5 TB (5,000 GB) is established if no specific capping mechanism is defined.
defender-for-cloud Enable Adaptive Application Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-adaptive-application-controls.md
Title: Enable and manage adaptive application controls
-description: This document helps you enable and manage adaptive application control in Microsoft Defender for Cloud to create an allowlist of applications running for Azure machines.
+description: Learn how to enable and manage adaptive application control in Microsoft Defender for Cloud to create an allowlist of applications running for Azure machines.
Select the recommendation, or open the adaptive application controls page to vie
1. Open the **Recommended** tab. The groups of machines with recommended allowlists appear.
- :::image type="content" source="media/enable-adaptive-application-controls/adaptive-application-recommended-tab.png" alt-text="Screenshot that shows you where on the screen the recommendation tab is.":::
+ :::image type="content" source="media/enable-adaptive-application-controls/adaptive-application-recommended-tab.png" alt-text="Screenshot that shows you where on the screen the recommendation tab is.":::
1. Select a group.
To edit the rules for a group of machines:
1. Select **Add rule**.
- :::image type="content" source="media/enable-adaptive-application-controls/adaptive-application-add-custom-rule.png" alt-text="Screenshot that showsyou where the add rule button is located.":::
+ :::image type="content" source="media/enable-adaptive-application-controls/adaptive-application-add-custom-rule.png" alt-text="Screenshot that shows you where the add rule button is located.":::
1. If you're defining a known safe path, change the **Rule type** to 'Path' and enter a single path. You can include wildcards in the path. The following screens show some examples of how to use wildcards.
To remediate the issues:
1. To investigate further, select a group.
- :::image type="content" source="media/enable-adaptive-application-controls/recent-alerts.png" alt-text="Screenshot showing recent alerts.":::
+ :::image type="content" source="media/enable-adaptive-application-controls/recent-alerts.png" alt-text="Screenshot showing recent alerts in Configured tab.":::
1. For further details, and the list of affected machines, select an alert.
Some of the functions available from the REST API include:
> > Remove the following properties before using the JSON in the **Put** request: recommendationStatus, configurationStatus, issues, location, and sourceSystem.
-## Next steps
+## Related content
On this page, you learned how to use adaptive application control in Microsoft Defender for Cloud to define allowlists of applications running on your Azure and non-Azure machines. To learn more about some other cloud workload protection features, see: - [Understanding just-in-time (JIT) VM access](just-in-time-access-overview.md) - [Securing your Azure Kubernetes clusters](defender-for-kubernetes-introduction.md)-- View common question about [Adaptive application controls](faq-defender-for-servers.yml)
+- View common questions about [Adaptive application controls](faq-defender-for-servers.yml)
defender-for-cloud Enable Agentless Scanning Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-agentless-scanning-vms.md
When you enable [Defender Cloud Security Posture Management (CSPM)](concept-clou
If you have Defender for Servers P2 already enabled and agentless scanning is turned off, you need to turn on agentless scanning manually. You can enable agentless scanning on+ - [Azure](#agentless-vulnerability-assessment-on-azure) - [AWS](#agentless-vulnerability-assessment-on-aws) - [GCP](#enable-agentless-scanning-in-gcp)
You can enable agentless scanning on
1. In the settings pane, turn on **Agentless scanning for machines**.
- :::image type="content" source="media/enable-vulnerability-assessment-agentless/turn-on-agentles-scanning-azure.png" alt-text="Screenshot of settings and monitoring screen to turn on agentless scanning." lightbox="media/enable-vulnerability-assessment-agentless/turn-on-agentles-scanning-azure.png":::
+ :::image type="content" source="media/enable-vulnerability-assessment-agentless/turn-on-agentless-scanning-azure.png" alt-text="Screenshot of settings and monitoring screen to turn on agentless scanning." lightbox="media/enable-vulnerability-assessment-agentless/turn-on-agentless-scanning-azure.png":::
1. Select **Save**.
After you enable agentless scanning, software inventory and vulnerability inform
### Enable agentless scanning in GCP
-1. In Defender for Cloud, select **Environment settings**.
-1. Select the relevant project or organization.
-1. For either the Defender Cloud Security Posture Management (CSPM) or Defender for Servers P2 plan, selectΓÇ» **Settings**.
+1. In Defender for Cloud, select **Environment settings**.
+1. Select the relevant project or organization.
+1. For either the Defender Cloud Security Posture Management (CSPM) or Defender for Servers P2 plan, selectΓÇ» **Settings**.
:::image type="content" source="media/enable-agentless-scanning-vms/gcp-select-plan.png" alt-text="Screenshot that shows where to select the plan for GCP projects." lightbox="media/enable-agentless-scanning-vms/gcp-select-plan.png":::
After you enable agentless scanning, software inventory and vulnerability inform
:::image type="content" source="media/enable-agentless-scanning-vms/gcp-select-agentless.png" alt-text="Screenshot that shows where to select agentless scanning." lightbox="media/enable-agentless-scanning-vms/gcp-select-agentless.png":::
-1. SelectΓÇ»**Save and Next: Configure Access**.
+1. SelectΓÇ»**Save and Next: Configure Access**.
1. Copy the onboarding script. 1. Run the onboarding script in the GCP organization/project scope (GCP portal or gcloud CLI).
-1. Select ΓÇ»**Next: Review and generate**.
-1. Select ΓÇ»**Update**.
+1. Select ΓÇ»**Next: Review and generate**.
+1. Select ΓÇ»**Update**.
-## Test the agentless malware scanner's deployment
+## Test the agentless malware scanner's deployment
Security alerts appear in the portal only when threats are detected in your environment. If you don't have any alerts, it might be because there are no threats in your environment. You can test that the device is properly onboarded and reporting to Defender for Cloud by creating a test file.
The alert `MDC_Test_File malware was detected (Agentless)` will appear within 24
1. Execute the following script. - ```powershell # Virus test string $TEST_STRING = '$$89-barbados-dublin-damascus-notice-pulled-natural-31$$'
- 
+ # File to be created $FILE_PATH = "C:\temp\virus_test_file.txt"
- 
+ # Create "temp" directory if it does not exist $DIR_PATH = "C:\temp" if (!(Test-Path -Path $DIR_PATH)) {
-    New-Item -ItemType Directory -Path $DIR_PATH
+ New-Item -ItemType Directory -Path $DIR_PATH
}
- 
+ # Write the test string to the file without a trailing newline [IO.File]::WriteAllText($FILE_PATH, $TEST_STRING)
- 
+ # Check if the file was created and contains the correct string if (Test-Path -Path $FILE_PATH) {
-    $content = [IO.File]::ReadAllText($FILE_PATH)
-    if ($content -eq $TEST_STRING) {
-        Write-Host "Test file created and validated successfully."
-    } else {
-        Write-Host "Test file does not contain the correct string."
-    }
+ $content = [IO.File]::ReadAllText($FILE_PATH)
+ if ($content -eq $TEST_STRING) {
+ Write-Host "Test file created and validated successfully."
+ } else {
+ Write-Host "Test file does not contain the correct string."
+ }
} else {
-    Write-Host "Failed to create test file."
+ Write-Host "Failed to create test file."
} ``` - The alert `MDC_Test_File malware was detected (Agentless)` will appear within 24 hours in the Defender for Cloud Alerts page and in the Defender XDR portal. :::image type="content" source="media/enable-agentless-scanning-vms/test-alert.jpg" alt-text="Screenshot of the test alert that appears in Defender for Cloud for Windows with because of the PowerShell script." lightbox="media/enable-agentless-scanning-vms/test-alert.jpg":::
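If you're testing on a Linux machine instead, the following bash sketch writes the same test string without a trailing newline. The file path is an assumption; the test string comes from the PowerShell script above.

```bash
# Test string from the PowerShell sample (single quotes keep the $$ literal)
TEST_STRING='$$89-barbados-dublin-damascus-notice-pulled-natural-31$$'

# Create the directory and write the string without a trailing newline
mkdir -p /tmp/mdc-test
printf '%s' "$TEST_STRING" > /tmp/mdc-test/virus_test_file.txt
echo "Test file created at /tmp/mdc-test/virus_test_file.txt"
```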
Agentless scanning applies to all of the eligible machines in the subscription.
1. Select **Save**.
-## Next steps
+## Related content
Learn more about:
defender-for-cloud Enable Defender For Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-defender-for-endpoint.md
To remove the Defender for Endpoint solution from your machines:
1. Follow the steps in [Offboard devices from the Microsoft Defender for Endpoint service](/microsoft-365/security/defender-endpoint/offboard-machines) from the Defender for Endpoint documentation.
-## Next steps
+## Related content
- [Platforms and features supported by Microsoft Defender for Cloud](security-center-os-coverage.md) - [Learn how recommendations help you protect your Azure resources](review-security-recommendations.md) - View common question about the [Defender for Cloud integration with Microsoft Defender for Endpoint](faq-defender-for-servers.yml)--
defender-for-cloud Tutorial Enable Cspm Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-cspm-plan.md
Title: Protect your resources with Defender CSPM plan on your subscription
-description: Learn how to enable Defender CSPM on your Azure subscription for Microsoft Defender for Cloud.
+description: Learn how to enable Defender CSPM on your Azure subscription for Microsoft Defender for Cloud and enhance your security posture.
Last updated 09/05/2023
Defender Cloud Security Posture Management (CSPM) in Microsoft Defender for Clou
Defender for Cloud continually assesses your resources, subscriptions, and organization for security issues. Defender for Cloud shows you your security posture with the secure score. The secure score is an aggregated score of the security findings that tells you your current security situation. The higher the score, the lower the identified risk level.
-When you enable Defender for Cloud, you automatically enable the **Foundational CSPM capabilities**. these capabilities are part of the free services offered by Defender for Cloud.
+When you enable Defender for Cloud, you automatically enable the **Foundational CSPM capabilities**. These capabilities are part of the free services offered by Defender for Cloud.
You have the ability to enable the **Defender CSPM** plan, which offers extra protections for your environments such as governance, regulatory compliance, cloud security explorer, attack path analysis and agentless scanning for machines.
Once the Defender CSPM plan is enabled on your subscription, you have the abilit
- **Agentless discovery for Kubernetes**: API-based discovery of information about Kubernetes cluster architecture, workload objects, and setup. Required for Kubernetes inventory, identity and network exposure detection, risk hunting as part of the cloud security explorer. This extension is required for attack path analysis (Defender CSPM only). -- **Container registries vulnerability assessments**: Provides vulnerability management for images stored in your container registries.
+- **Agentless container vulnerability assessments**: Provides vulnerability management for images stored in your container registries.
- **Sensitive data discovery**: Sensitive data discovery automatically discovers managed cloud data resources containing sensitive data at scale. This feature accesses your data, it is agentless, uses smart sampling scanning, and integrates with Microsoft Purview sensitive information types and labels.
+- **Permissions Management (Preview)**: Provides insights into Cloud Infrastructure Entitlement Management (CIEM). CIEM ensures appropriate and secure identities and access rights in cloud environments. It helps you understand access permissions to cloud resources and the associated risks. Setup and data collection can take up to 24 hours.
+ **To enable the components of the Defender CSPM plan**: 1. On the Defender plans page, select **Settings**.
event-grid Mqtt Routing To Event Hubs Cli Namespace Topics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-routing-to-event-hubs-cli-namespace-topics.md
+
+ Title: Namespace topics to route MQTT messages to Event Hubs (CLI)
+description: 'This tutorial shows how to use namespace topics to route MQTT messages to Azure Event Hubs. You use Azure CLI to do the tasks in this tutorial.'
+ Last updated : 02/28/2024+++
+ - build-2023
+ - ignite-2023
+++
+# Tutorial: Use namespace topics to route MQTT messages to Azure Event Hubs (Azure CLI)
+In this tutorial, you learn how to use a namespace topic to route data from MQTT clients to Azure Event Hubs. Here are the high-level steps:
+
+- Create an Event Grid namespace with a namespace topic.
+- Create an Event Hubs namespace with an event hub.
+- Give the Event Grid namespace access to send events to the event hub.
+- Create an event subscription with the event hub as the endpoint.
+- Configure routing in the Event Grid namespace.
+- Create a client, topic space, and permission bindings, and then send test messages.
+
+## Prerequisites
+
+- If you don't have an Azure subscription, create an [Azure free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin.
+- If you're new to Event Grid, read the [Event Grid overview](/azure/event-grid/overview) before you start this tutorial.
+- Register the Event Grid resource provider according to the steps in [Register the Event Grid resource provider](/azure/event-grid/custom-event-quickstart-portal#register-the-event-grid-resource-provider).
+- Make sure that port **8883** is open in your firewall. The sample in this tutorial uses the MQTT protocol, which communicates over port 8883. This port might be blocked in some corporate and educational network environments.
+
+## Launch Cloud Shell
+
+1. Sign into the [Azure portal](https://portal.azure.com).
+1. Select the link to launch the Cloud Shell.
+1. Switch to Bash.
+
+ :::image type="content" source="./media/mqtt-routing-to-event-hubs-cli-namespace-topics/cloud-shell-bash.png" alt-text="Screenshot that shows the Azure portal with Cloud Shell open and Bash selected.":::
+
+## Create an Event Grid namespace and topic
+
+ To create an Event Grid namespace and a topic in the namespace, copy the following script to an editor, replace placeholders with actual values, and run the commands.
+
+| Placeholder | Comments |
+| -- | -- |
+| `RESOURCEGROUPNAME` | Specify a name for the resource group to be created. |
+| `EVENTGRIDNAMESPACENAME` | Specify the name for the Event Grid namespace. |
+| `REGION` | Specify the location in which you want to create the resources. |
+| `NAMESPACETOPICNAME` | Specify a name for the namespace topic. |
+++
+```azurecli
+rgName="RESOURCEGROUPNAME"
+nsName="EVENTGRIDNAMESPACENAME"
+location="REGION"
+nsTopicName="NAMESPACETOPICNAME"
+
+az group create -n $rgName -l $location
+az eventgrid namespace create -g $rgName -n $nsName -l $location --topic-spaces-configuration "{state:Enabled}" --identity "{type:SystemAssigned}"
+az eventgrid namespace topic create -g $rgName --name $nsTopicName --namespace-name $nsName
+```
+
+## Create an Event Hubs namespace and an event hub
+
+To create an Event Hubs namespace and an event hub in the namespace, replace placeholders with actual values, and run the following commands. This event hub is used as an event handler in the event subscription you create in this tutorial.
+
+| Placeholder | Comments |
+| -- | -- |
+| `EVENTHUBSNAMESPACENAME` | Specify a name for the Event Hubs namespace to be created. |
+| `EVENTHUBNAME` | Specify the name for the Event Hubs instance (event hub) to be created in the Event Hubs namespace. |
+
+```azurecli
+ehubNsName="EVENTHUBSNAMESPACENAME"
+ehubName="EVENTHUBNAME"
+
+az eventhubs namespace create --resource-group $rgName --name $ehubNsName
+az eventhubs eventhub create --resource-group $rgName --namespace-name $ehubNsName --name $ehubName
+```
+
+## Give the Event Grid namespace access to send events to the event hub
+
+Run the following command to add the service principal of the Event Grid namespace to the Azure Event Hubs Data Sender role on the Event Hubs namespace. This role assignment allows the Event Grid namespace and the resources in it to send events to the event hub in the Event Hubs namespace.
+
+```azurecli
+egNamespaceServicePrincipalObjectID=$(az ad sp list --display-name $nsName --query [].id -o tsv)
+namespaceresourceid=$(az eventhubs namespace show -n $ehubNsName -g $rgName --query "{I:id}" -o tsv)
+
+az role assignment create --assignee $egNamespaceServicePrincipalObjectID --role "Azure Event Hubs Data Sender" --scope $namespaceresourceid
+```
+
+## Create an event subscription with Event Hubs as the endpoint
+
+To create an event subscription for the namespace topic you created earlier, replace placeholders with actual values, and run the following commands. This subscription is configured to use the event hub as the event handler.
+
+| Placeholder | Comments |
+| -- | -- |
+| `EVENTSUBSCRIPTIONNAME` | Specify a name for the event subscription for the namespace topic. |
++
+```azurecli
+eventSubscriptionName="EVENTSUBSCRIPTIONNAME"
+eventhubresourceid=$(az eventhubs eventhub show -n $ehubName --namespace-name $ehubNsName -g $rgName --query "{I:id}" -o tsv)
+
+az resource create --api-version 2023-06-01-preview --resource-group $rgName --namespace Microsoft.EventGrid --resource-type eventsubscriptions --name $eventSubscriptionName --parent namespaces/$nsName/topics/$nsTopicName --location $location --properties "{\"deliveryConfiguration\":{\"deliveryMode\":\"Push\",\"push\":{\"maxDeliveryCount\":10,\"deliveryWithResourceIdentity\":{\"identity\":{\"type\":\"SystemAssigned\"},\"destination\":{\"endpointType\":\"EventHub\",\"properties\":{\"resourceId\":\"$eventhubresourceid\"}}}}}}"
+```
+
+## Configure routing in the Event Grid namespace
+
+Run the following commands to enable routing on the namespace to route messages or events to the namespace topic you created earlier. The event subscription on that namespace topic forwards those events to the event hub that's configured as an event handler.
+
+```azurecli
+routeTopicResourceId=$(az eventgrid namespace topic show -g $rgName --namespace-name $nsName -n $nsTopicName --query "{I:id}" -o tsv)
+az eventgrid namespace create -g $rgName -n $nsName --topic-spaces-configuration "{state:Enabled,'routeTopicResourceId':$routeTopicResourceId}"
+```
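+
+To confirm that routing is now configured, you can inspect the namespace's topic spaces configuration. This is an optional sanity check; the property path in the output is an assumption based on the configuration shape used above.
+
+```azurecli
+az eventgrid namespace show -g $rgName -n $nsName --query "topicSpacesConfiguration" -o json
+```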
+
+## Create a client, topic space, and permission bindings
+
+Now, create the resources needed to send a few test messages. In this step, you create a client, a topic space with a topic, and publisher and subscriber permission bindings.
+
+For detailed instructions, see [Quickstart: Publish and subscribe to MQTT messages on an Event Grid namespace with the Azure CLI](mqtt-publish-and-subscribe-cli.md).
+
+| Placeholder | Comments |
+| -- | -- |
+| `CLIENTNAME` | Specify a name for the client that sends a few test messages. |
+| `CERTIFICATETHUMBPRINT` | Thumbprint of client's certificate. See the above quickstart for instructions to create a certificate and extract a thumbprint. Use the same thumbprint in the MQTTX tool to send test messages. |
+| `TOPICSPACENAME` | Specify a name for the topic space to be created. |
+| `PUBLISHERBINDINGNAME` | Specify a name for the publisher binding. |
+| `SUBSCRIBERBINDINGNAME` | Specify a name for the subscriber binding. |
++
+```azurecli
+clientName="CLIENTNAME"
+clientAuthName="client1-authnID"
+clientThumbprint="CERTIFICATETHUMBPRINT"
+
+topicSpaceName="TOPICSPACENAME"
+publisherBindingName="PUBLISHERBINDINGNAME"
+subscriberBindingName="SUBSCRIBERBINDINGNAME"
+
+az eventgrid namespace client create -g $rgName --namespace-name $nsName -n $clientName --authentication-name $clientAuthName --client-certificate-authentication "{validationScheme:ThumbprintMatch,allowed-thumbprints:[$clientThumbprint]}"
+
+az eventgrid namespace topic-space create -g $rgName --namespace-name $nsName -n $topicSpaceName --topic-templates ['contosotopics/topic1']
+
+az eventgrid namespace permission-binding create -g $rgName --namespace-name $nsName -n $publisherBindingName --client-group-name '$all' --permission publisher --topic-space-name $topicSpaceName
+
+az eventgrid namespace permission-binding create -g $rgName --namespace-name $nsName -n $subscriberBindingName --client-group-name '$all' --permission subscriber --topic-space-name $topicSpaceName
+```
+
+## Send messages using MQTTX
+Use MQTTX to send a few test messages. For step-by-step instructions, see the quickstart: [Publish and subscribe on an MQTT topic](./mqtt-publish-and-subscribe-portal.md).
+
+Verify that the event hub received those messages on the **Overview** page for your Event Hubs namespace.
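+
+You can also check the incoming message count from the command line. The following sketch reads the standard `IncomingMessages` metric for the Event Hubs namespace, reusing the resource ID variable defined in the role-assignment step.
+
+```azurecli
+az monitor metrics list \
+  --resource $namespaceresourceid \
+  --metric IncomingMessages \
+  --aggregation Total \
+  --output table
+```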
++
+## View routed MQTT messages in Event Hubs by using a Stream Analytics query
+
+Navigate to the Event Hubs instance (event hub) within your event subscription in the Azure portal. Process data from your event hub by using Stream Analytics. For more information, see [Process data from Azure Event Hubs using Stream Analytics - Azure Event Hubs | Microsoft Learn](/azure/event-hubs/process-data-azure-stream-analytics). You can see the MQTT messages in the query.
+++
+## Next steps
+
+For code samples, go to [this GitHub repository](https://github.com/Azure-Samples/MqttApplicationSamples/tree/main).
event-grid Mqtt Routing To Event Hubs Portal Namespace Topics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-routing-to-event-hubs-portal-namespace-topics.md
+
+ Title: Use namespace topics to route MQTT messages to Event Hubs
+description: 'This tutorial shows how to use namespace topics to route MQTT messages to Azure Event Hubs. You use Azure portal to do the tasks in this tutorial.'
++
+ - build-2023
+ - ignite-2023
Last updated : 02/29/2024+++++
+# Tutorial: Use namespace topics to route MQTT messages to Azure Event Hubs (Azure portal)
+
+In this tutorial, you learn how to use a namespace topic to route data from MQTT clients to Azure Event Hubs. Here are the high-level steps:
+
+- Create an Event Grid namespace with a namespace topic.
+- Create an Event Hubs namespace with an event hub.
+- Give the Event Grid namespace access to send events to the event hub.
+- Create an event subscription with the event hub as the endpoint.
+- Configure routing in the Event Grid namespace.
+- Create clients, a topic space, and permission bindings, and then send test messages.
+
+## Prerequisites
+
+- If you don't have an Azure subscription, create an [Azure free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin.
+- If you're new to Event Grid, read the [Event Grid overview](/azure/event-grid/overview) before you start this tutorial.
+- Register the Event Grid resource provider according to the steps in [Register the Event Grid resource provider](/azure/event-grid/custom-event-quickstart-portal#register-the-event-grid-resource-provider).
+- Make sure that port **8883** is open in your firewall. The sample in this tutorial uses the MQTT protocol, which communicates over port 8883. This port might be blocked in some corporate and educational network environments.
+++++
+In a separate web browser tab or window, use the Azure portal to create an Event Hubs namespace with an event hub.
+++
+## Give the Event Grid namespace access to send events to the event hub
+
+1. On the **Event Hubs Namespace** page, select **Access control (IAM)** on the left menu.
+1. On the **Access control** page, select **+ Add** on the command bar, and then select **Add role assignment**.
+
+ :::image type="content" source="./media/mqtt-routing-to-event-hubs-portal-namespace-topics/event-hubs-access-control-add-role-assignment.png" alt-text="Screenshot that shows the Access control page for the Event Hubs namespace.":::
+1. On the **Add role assignment** page, select **Azure Event Hubs Data Sender** from the list of roles, and then select **Next** at the bottom of the page.
+
+ :::image type="content" source="./media/mqtt-routing-to-event-hubs-portal-namespace-topics/select-azure-event-hubs-data-sender.png" alt-text="Screenshot that shows the Add role assignment page with Azure Event Hubs Data Sender selected.":::
+1. On the **Members** page, follow these steps:
+ 1. For the **Assign access to** field, select **Managed identity**.
+ 1. Choose **+ Select members**.
+
+ :::image type="content" source="./media/mqtt-routing-to-event-hubs-portal-namespace-topics/select-managed-identity.png" alt-text="Screenshot that shows the Add role assignment page with Managed identity selected.":::
+1. On the **Select managed identities** page, follow these steps:
+ 1. Select your **Azure subscription**.
+ 1. For **Managed identity**, select **Event Grid Namespace**.
+ 1. Select the managed identity that has the same name as the Event Grid namespace.
+ 1. Choose **Select** at the bottom of the page.
+
+ :::image type="content" source="./media/mqtt-routing-to-event-hubs-portal-namespace-topics/select-event-grid-namespace-managed-identity.png" alt-text="Screenshot that shows the Select managed identities page with the Event Grid namespace's managed identity selected.":::
+1. On the **Add role assignment** page, select **Review + assign** at the bottom of the page.
+1. On the **Review + assign** page, select **Review + assign**.
+
+## Create an event subscription with Event Hubs as the endpoint
+
+1. Switch to the web browser tab that has the Event Grid namespace open.
+1. On the **Event Grid Namespace** page, select **Topics** on the left menu.
+1. On the **Topics** page, select the namespace topic you created earlier.
+
+ :::image type="content" source="./media/mqtt-routing-to-event-hubs-portal-namespace-topics/select-topic.png" alt-text="Screenshot that shows the Topics page with the namespace topic selected.":::
+1. On the **Event Grid Namespace Topic** page, select **+ Subscription** on the command bar at the top.
+
+ :::image type="content" source="./media/mqtt-routing-to-event-hubs-portal-namespace-topics/subscriptions-page.png" alt-text="Screenshot that shows the Subscriptions page.":::
+1. On the **Create Subscription** page, follow these steps:
+ 1. Enter a **name** for the event subscription.
+ 1. For **Delivery mode**, select **Push**.
+ 1. Confirm that **Endpoint type** is set to **Event hub**.
+ 1. Select **Configure an endpoint**.
+
+ :::image type="content" source="./media/mqtt-routing-to-event-hubs-portal-namespace-topics/create-subscription-page.png" alt-text="Screenshot that shows the Create Subscription page.":::
+ 1. On the **Select Event Hub**, follow these steps:
+ 1. Select the **Azure subscription** that has the event hub.
+ 1. Select the **resource group** that has the event hub.
+ 1. Select the **Event Hubs namespace**.
+ 1. Select the **event hub** in the Event Hubs namespace.
+ 1. Then, select **Confirm selection**.
+
+ :::image type="content" source="./media/mqtt-routing-to-event-hubs-portal-namespace-topics/select-event-hub-page.png" alt-text="Screenshot that shows the Select event hub page.":::
+ 1. Back on the **Create Subscription** page, select **System Assigned** for **Managed identity type**.
+ 1. Select **Create** at the bottom of the page.
+
+ :::image type="content" source="./media/mqtt-routing-to-event-hubs-portal-namespace-topics/create-subscription.png" alt-text="Screenshot that shows the Create Subscription page with Create button selected.":::
+
+## Configure routing in the Event Grid namespace
+
+1. Navigate back to the **Event Grid Namespace** page by selecting the namespace in the **Essentials** section of the **Event Grid Namespace Topic** page or by selecting the namespace name in the breadcrumb menu at the top.
+1. On the **Event Grid Namespace** page, select **Routing** on the left menu in the **MQTT broker** section.
+1. On the **Routing** page, select **Enable routing**.
+1. For **Topic type**, select **Namespace topic**.
+1. For **Topic**, select the Event Grid namespace topic that you created where all MQTT messages will be routed.
+1. Select **Apply**.
+
+ :::image type="content" source="./media/mqtt-routing-to-event-hubs-portal-namespace-topics/routing-page.png" alt-text="Screenshot that shows the Routing page with the namespace topic selected.":::
+
+ Check notifications to confirm that the namespace is enabled with the routing information.
+
+## Create clients, topic space, and permission bindings
+
+Follow the steps in the quickstart [Publish and subscribe on an MQTT topic](./mqtt-publish-and-subscribe-portal.md) to:
+
+1. Create a client. Optionally, create a second client.
+1. Create a topic space.
+1. Create publisher and subscriber permission bindings.
+1. Use MQTTX to send a few messages.
+1. Verify that the event hub received those messages on the **Overview** page for your Event Hubs namespace.
+
+ :::image type="content" source="./media/mqtt-routing-to-event-hubs-portal-namespace-topics/verify-incoming-messages.png" alt-text="Screenshot that shows the Overview page of the event hub with incoming message count." lightbox="./media/mqtt-routing-to-event-hubs-portal-namespace-topics/verify-incoming-messages.png":::
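+
+If you use a command-line client such as Mosquitto instead of MQTTX, a publish sketch looks like the following. The hostname, client name, certificate files, and topic are placeholders; substitute the values from your quickstart setup:
+
+```bash
+# A sketch, not the quickstart's exact command: publish one MQTT v5 message
+# that matches your topic space, so it's routed to the namespace topic and on to Event Hubs.
+mosquitto_pub \
+    -h <namespace-name>.<region>-1.ts.eventgrid.azure.net -p 8883 \
+    -V mqttv5 -q 1 \
+    -i client1-session -u client1 \
+    --cert client1.pem --key client1.key \
+    -t "contosotopics/topic1" \
+    -m '{"temperature": 22.5}'
+```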
+
+## View routed MQTT messages in Event Hubs by using a Stream Analytics query
+
+Navigate to the Event Hubs instance (event hub) within your event subscription in the Azure portal. Process data from your event hub by using Stream Analytics. For more information, see [Process data from Azure Event Hubs using Stream Analytics](/azure/event-hubs/process-data-azure-stream-analytics). You can see the MQTT messages in the query results.
+
+## Next steps
+
+For code samples, go to [this GitHub repository](https://github.com/Azure-Samples/MqttApplicationSamples/tree/main).
healthcare-apis Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/export-data.md
The Azure API for FHIR supports the following query parameters. All of these par
| \_container | No | Specifies the container within the configured storage account where the data should be exported. If a container is specified, the data will be exported into a folder in that container. If the container isn't specified, the data will be exported to a new container. |
| \_till | No | Allows you to only export resources that have been modified till the time provided. This parameter is applicable to only System-Level export. In this case, if historical versions haven't been disabled or purged, export guarantees a true snapshot view, or, in other words, enables time travel. |
| includeAssociatedData | No | Allows you to export history and soft deleted resources. This filter doesn't work with the '_typeFilter' query parameter. Include the value '_history' to export history (non-latest versioned) resources. Include the value '_deleted' to export soft deleted resources. |
+|\_isparallel| No |The `_isparallel` query parameter can be added to the export operation to enhance its throughput. Set the value to true to enable parallelization. Note that using this parameter may increase request unit consumption over the life of the export. |
> [!NOTE]
> Only storage accounts in the same subscription as that for Azure API for FHIR are allowed to be registered as the destination for $export operations.
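+
+As a hedged illustration (the server URL and container name are placeholders), a system-level export request that combines `_container` and `_isparallel` might look like this:
+
+```rest
+GET https://<your-azure-api-for-fhir>/$export?_container=exportContainer&_isparallel=true
+Accept: application/fhir+json
+Prefer: respond-async
+```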
healthcare-apis Purge History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/purge-history.md
History in FHIR gives you the ability to see all previous versions of a resource
All past versions of a resource are considered obsolete and the current version of a resource should be used for normal business workflow operations. However, it can be useful to see the state of a resource as a point in time when a past decision was made.
+The query parameters `_summary=count` and `_count=0` can be added to the `_history` endpoint to get a count of all versioned resources. This count includes soft deleted resources.
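+
+For example, a hedged sketch of such a request (the server URL is a placeholder):
+
+```rest
+GET https://<your-azure-api-for-fhir>/_history?_summary=count&_count=0
+```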
+
Azure API for FHIR allows you to manage history with:

1. Disabling history. To disable history, a one-time support ticket needs to be created. After the disable history configuration is set, history isn't created for resources on the FHIR server. The resource version is still incremented.
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/release-notes.md
Azure API for FHIR provides a fully managed deployment of the Microsoft FHIR Server for Azure. The server is an implementation of the [FHIR](https://hl7.org/fhir) standard. This document provides details about the features and enhancements made to Azure API for FHIR.
+## **February 2024**
+**Enables counting all versions (historical and soft deleted) of resources**
+The query parameters `_summary=count` and `_count=0` can be added to the `_history` endpoint to get a count of all versioned resources. This count includes soft deleted resources. For more information, see [history management](././../azure-api-for-fhir/purge-history.md).
+
+**Improve throughput for export operation**
+The "_isparallel" query parameter can be added to the export operation to enhance its throughput. It is important to note that using this parameter may result in an increase in Request Units consumption over the life of export. For more information, see [Export operation query parameters](././../azure-api-for-fhir/export-data.md).
+
+**Change in name nomenclature for exported file name and default storage account**
+With this change, exported file names follow the format '{FHIR Resource Name}-{Number}-{Number}.ndjson'. The order of the files isn't guaranteed to correspond to any ordering of the resources in the database. The default storage account name is updated to 'Export-{Number}'. There's no change to the number of resources added in individual exported files.
+
+**Performance Enhancement**
+Parallel optimization for FHIR queries can be enabled by using the HTTP header "x-ms-query-latency-over-efficiency". Set the value to true to achieve maximum concurrency during query execution. For more information, see [Batch Bundles](././../azure-api-for-fhir/fhir-rest-api-capabilities.md).
+
## **January 2024**

**Concurrent execution of queries with conditional interactions**
-Conditional interactions can be complex and performance-intensive. To enhance the latency of queries involving conditional interactions, you have the option to utilize the request header x-conditionalquery-processing-logic. For more information, see [Performance considerations for conditional API interactions](../../healthcare-apis/azure-api-for-fhir/fhir-rest-api-capabilities.md).
+Conditional interactions can be complex and performance-intensive. To enhance the latency of queries involving conditional interactions, you have the option to utilize the request header x-conditionalquery-processing-logic. For more information, see [Performance considerations for conditional API interactions](././../azure-api-for-fhir/fhir-rest-api-capabilities.md).
## **December 2023** **Additional capabilities added to the Export operation**
healthcare-apis Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/import-data.md
The `import` operation supports two modes: initial mode and incremental mode. Ea
> Also, if multiple resources share the same resource ID, then only one of those resources is imported at random. An error is logged for the resources sharing the same resource ID.

This table shows the difference between import modes:

|Areas|Initial mode |Incremental mode |
|:- |:-|:--|
|Capability|Initial load of data into FHIR service|Continuous ingestion of data into FHIR service (Incremental or Near Real Time).|
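+
+For illustration, a hedged sketch of an incremental-mode `$import` request body follows. The storage URL is a placeholder, and your service's accepted parameters may differ:
+
+```json
+{
+  "resourceType": "Parameters",
+  "parameter": [
+    { "name": "inputFormat", "valueString": "application/fhir+ndjson" },
+    { "name": "mode", "valueString": "IncrementalLoad" },
+    {
+      "name": "input",
+      "part": [
+        { "name": "type", "valueString": "Patient" },
+        { "name": "url", "valueUri": "https://example.blob.core.windows.net/data/Patient.ndjson" }
+      ]
+    }
+  ]
+}
+```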
iot-hub Iot Hub Distributed Tracing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-distributed-tracing.md
Title: Add correlation IDs to IoT messages with distributed tracing (preview)
+ Title: Monitor IoT messages with distributed tracing (preview)
+ description: Learn how to use distributed tracing to trace IoT messages throughout the Azure services that your solution uses.
- Previously updated : 01/26/2022
+ Last updated : 02/29/2024

# Trace Azure IoT device-to-cloud messages by using distributed tracing (preview)
-Microsoft Azure IoT Hub currently supports distributed tracing as a [preview feature](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-IoT Hub is one of the first Azure services to support distributed tracing. As more Azure services support distributed tracing, you're able to trace Internet of Things (IoT) messages throughout the Azure services involved in your solution. For a background on the feature, see [What is distributed tracing?](../azure-monitor/app/distributed-tracing-telemetry-correlation.md).
+Use distributed tracing (preview) in IoT Hub to monitor IoT messages as they pass through Azure services. IoT Hub is one of the first Azure services to support distributed tracing. As more Azure services support distributed tracing, you're able to trace Internet of Things (IoT) messages throughout the Azure services involved in your solution. For more information about the feature, see [What is distributed tracing?](../azure-monitor/app/distributed-trace-data.md).
When you enable distributed tracing for IoT Hub, you can:

-- Precisely monitor the flow of each message through IoT Hub by using [trace context](https://github.com/w3c/trace-context). Trace context includes correlation IDs that allow you to correlate events from one component with events from another component. You can apply it for a subset or all IoT device messages by using a [device twin](iot-hub-devguide-device-twins.md).
-- Automatically log the trace context to [Azure Monitor Logs](monitor-iot-hub.md).
+- Monitor the flow of each message through IoT Hub by using [trace context](https://github.com/w3c/trace-context). Trace context includes correlation IDs that allow you to correlate events from one component with events from another component. You can apply it for a subset or all IoT device messages by using a [device twin](iot-hub-devguide-device-twins.md).
+- Log the trace context to [Azure Monitor Logs](monitor-iot-hub.md).
- Measure and understand message flow and latency from devices to IoT Hub and routing endpoints.
-- Start considering how you want to implement distributed tracing for the non-Azure services in your IoT solution.
-In this article, you use the [Azure IoT device SDK for C](https://github.com/Azure/azure-iot-sdk-c/blob/main/readme.md) with distributed tracing. Distributed tracing support is still in progress for the other SDKs.
+> [!IMPORTANT]
+> Distributed tracing in Azure IoT Hub is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
## Prerequisites

-- The preview of distributed tracing is currently supported only for IoT hubs created in the following regions:
+- An Azure IoT hub created in one of the following regions.
  - North Europe
  - Southeast Asia
  - West US 2

-- This article assumes that you're familiar with sending telemetry messages to your IoT hub.
+- A device registered to your IoT hub. If you don't have one, follow the steps in [Register a new device in the IoT hub](./iot-hub-create-through-portal.md#register-a-new-device-in-the-iot-hub) and save the device connection string to use in this article.
+
+- This article assumes that you're familiar with sending telemetry messages to your IoT hub.
+
+- The latest version of [Git](https://git-scm.com/download/).
+
+## Public preview limits and considerations
+
+Consider the following limitations to determine if this preview feature is right for your scenarios:
+
+- The proposal for the W3C Trace Context standard is currently a working draft.
+- The only development language that the client SDK currently supports is C, in the [public preview branch of the Azure IoT device SDK for C](https://github.com/Azure/azure-iot-sdk-c/blob/public-preview/readme.md).
+- Cloud-to-device twin capability isn't available for the [IoT Hub basic tier](iot-hub-scaling.md#basic-and-standard-tiers). However, IoT Hub still logs to Azure Monitor if it sees a properly composed trace context header.
+- To ensure efficient operation, IoT Hub imposes a throttle on the rate of logging that can occur as part of distributed tracing.
+- The distributed tracing feature is supported only for IoT hubs created in the following regions:
-- Register a device with your IoT hub and save the connection string. Registration steps are available in the quickstart.
+ - North Europe
+ - Southeast Asia
+ - West US 2
-- Install the latest version of [Git](https://git-scm.com/download/).
+## Understand Azure IoT distributed tracing
-## Configure an IoT hub
+Many IoT solutions, including the [Azure IoT reference architecture](/azure/architecture/reference-architectures/iot), generally follow a variant of the [microservice architecture](/azure/architecture/microservices/). As an IoT solution grows more complex, you end up using a dozen or more microservices. These microservices might or might not be from Azure.
+
+Pinpointing where IoT messages are dropping or slowing down can be challenging. For example, imagine that you have an IoT solution that uses five different Azure services and 1,500 active devices. Each device sends 10 device-to-cloud messages per second, for a total of 15,000 messages per second. But you notice that your web app sees only 10,000 messages per second. How do you find the culprit?
+
+For you to reconstruct the flow of an IoT message across services, each service should propagate a *correlation ID* that uniquely identifies the message. After Azure Monitor collects correlation IDs in a centralized system, you can use those IDs to see message flow. This method is called the [distributed tracing pattern](/azure/architecture/microservices/logging-monitoring#distributed-tracing).
+
+To support wider adoption of distributed tracing, Microsoft is contributing to the [W3C standard proposal for distributed tracing](https://w3c.github.io/trace-context/). When distributed tracing support for IoT Hub is enabled, it follows this flow for each generated message:
+
+1. A message is generated on the IoT device.
+1. The IoT device decides (with help from the cloud) that this message should be assigned with a trace context.
+1. The SDK adds a `tracestate` value to the message property, which contains the time stamp for message creation.
+1. The IoT device sends the message to IoT Hub.
+1. The message arrives at the IoT Hub gateway.
+1. IoT Hub looks for the `tracestate` value in the message properties and checks whether it's in the correct format. If so, IoT Hub generates a globally unique `trace-id` value for the message and a `span-id` value for the "hop." IoT Hub records these values in the [IoT Hub distributed tracing logs](monitor-iot-hub-reference.md#distributed-tracing-preview) under the `DiagnosticIoTHubD2C` operation.
+1. When the message processing is finished, IoT Hub generates another `span-id` value and logs it, along with the existing `trace-id` value, under the `DiagnosticIoTHubIngress` operation.
+1. If routing is enabled for the message, IoT Hub writes it to the custom endpoint. IoT Hub logs another `span-id` value with the same `trace-id` value under the `DiagnosticIoTHubEgress` category.
+
+## Configure distributed tracing in an IoT hub
In this section, you configure an IoT hub to log distributed tracing attributes (correlation IDs and time stamps).
In this section, you configure an IoT hub to log distributed tracing attributes
:::image type="content" source="media/iot-hub-distributed-tracing/select-distributed-tracing.png" alt-text="Screenshot that shows where the Distributed Tracing operation is for IoT Hub diagnostic settings.":::
-1. Select **Save** for this new setting.
+1. Select **Save**.
1. (Optional) To see the messages flow to different places, set up [routing rules to at least two different endpoints](iot-hub-devguide-messages-d2c.md).
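+
+If you'd rather script this step, a hedged Azure CLI sketch follows (the hub and workspace names are placeholders). It turns on the **DistributedTracing** log category and routes the logs to a Log Analytics workspace:
+
+```azurecli
+# Enable the DistributedTracing resource log category for the IoT hub.
+az monitor diagnostic-settings create \
+    --name distributed-tracing \
+    --resource $(az iot hub show --name myHub --query id -o tsv) \
+    --workspace myLogAnalyticsWorkspace \
+    --logs '[{"category": "DistributedTracing", "enabled": true}]'
+```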
After the logging is turned on, IoT Hub records a log when a message that contai
To learn more about these logs and their schemas, see [Monitor IoT Hub](monitor-iot-hub.md) and [Distributed tracing in IoT Hub resource logs](monitor-iot-hub-reference.md#distributed-tracing-preview).
-## Set up a device
+## Update sampling options
+
+To change the percentage of messages to be traced from the cloud, you must update the device twin. You can make updates by using the JSON editor in the Azure portal or the IoT Hub service SDK. The following subsections provide examples.
+
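+For instance, a hedged Azure CLI sketch of a direct twin update follows (hub and device names are placeholders; this assumes a recent Azure IoT CLI extension that supports `--desired`):
+
+```azurecli
+# Set the distributed tracing desired properties on a single device twin.
+az iot hub device-twin update \
+    --hub-name myHub \
+    --device-id myDevice \
+    --desired '{"azureiot*com^dtracing^1": {"sampling_mode": 1, "sampling_rate": 50}}'
+```
+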
+### Update a single device
+
+You can use the Azure portal or the Azure IoT Hub extension for Visual Studio Code (VS Code) to update the sampling rate of a single device.
+
+#### [Azure portal](#tab/portal)
+
+1. Go to your IoT hub in the [Azure portal](https://portal.azure.com/), and then select **Devices** from the **Device management** section of the menu.
+
+1. Choose your device.
+
+1. Select the gear icon under **Distributed Tracing (preview)**. In the panel that opens:
+
+ 1. Select the **Enable** option.
+ 1. For **Sampling rate**, choose a percentage between 0 and 100.
+ 1. Select **Save**.
+
+ :::image type="content" source="media/iot-hub-distributed-tracing/enable-distributed-tracing.png" alt-text="Screenshot that shows how to enable distributed tracing in the Azure portal." lightbox="media/iot-hub-distributed-tracing/enable-distributed-tracing.png":::
+
+1. Wait a few seconds, and then select **Refresh**. If the device successfully acknowledges your changes, a sync icon with a check mark appears.
+
+#### [VS Code](#tab/vscode)
+
+1. With Visual Studio Code installed, install the latest version of the [Azure IoT Hub extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit).
+
+1. Open Visual Studio Code, and go to the **Explorer** tab and the **Azure IoT Hub** section.
+
+1. Select the ellipsis (...) next to **Azure IoT Hub** to see a submenu. Choose the **Select IoT Hub** option to retrieve your IoT hub from Azure.
+
+ In the pop-up window that appears at the top of Visual Studio Code, you can select your subscription and IoT hub.
+
+ See a demonstration on the [vscode-azure-iot-toolkit](https://github.com/Microsoft/vscode-azure-iot-toolkit/wiki/Select-IoT-Hub) GitHub page.
+
+1. Expand your device under **Devices**. Right-click **Distributed Tracing Setting (Preview)**, and then select **Update Distributed Tracing Setting (Preview)**.
+
+1. In the pop-up pane that appears at the top of the window, select **Enable**.
+
+ :::image type="content" source="media/iot-hub-distributed-tracing/enable-distributed-tracing-vsc.png" alt-text="Screenshot that shows how to enable distributed tracing in the Azure IoT Hub extension.":::
+
+ **Enable Distributed Tracing: Enabled** now appears under **Distributed Tracing Setting (Preview)** > **Desired**.
+
+1. In the pop-up pane that appears for the sampling rate, enter an integer between 0 and 100, and then press Enter.
+
+ ![Screenshot that shows entering a sampling rate](./media/iot-hub-distributed-tracing/update-distributed-tracing-setting-3.png)
+
+ **Sample rate: 100(%)** now appears under **Distributed Tracing Setting (Preview)** > **Desired**.
+
+---
+
+### Bulk update multiple devices
+
+To update the distributed tracing sampling configuration for multiple devices, use [automatic device configuration](./iot-hub-automatic-device-management.md). Follow this twin schema:
+
+```json
+{
+ "properties": {
+ "desired": {
+ "azureiot*com^dtracing^1": {
+ "sampling_mode": 1,
+ "sampling_rate": 100
+ }
+ }
+ }
+}
+```
+
+| Element name | Required | Type | Description |
+|--|--|--|--|
+| `sampling_mode` | Yes | Integer | Two mode values are currently supported to turn sampling on and off. `1` is on, and `2` is off. |
+| `sampling_rate` | Yes | Integer | This value is a percentage. Only values from `0` to `100` (inclusive) are permitted. |
+
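+Under the hood, automatic device configuration applies that desired-property block to every targeted twin. A hedged Azure CLI sketch (the hub name and target condition are placeholders):
+
+```azurecli
+# Create an automatic device management configuration that applies
+# the distributed tracing twin schema to all matching devices.
+az iot hub configuration create \
+    --hub-name myHub \
+    --config-id distributed-tracing-config \
+    --content '{"deviceContent": {"properties.desired.azureiot*com^dtracing^1": {"sampling_mode": 1, "sampling_rate": 100}}}' \
+    --target-condition "tags.tracingEnabled='true'" \
+    --priority 10
+```
+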
+## Query and visualize traces
+
+To see all the traces logged by an IoT hub, query the log store that you selected in diagnostic settings. This section shows how to query by using Log Analytics.
+
+If you set up [Log Analytics with resource logs](../azure-monitor/essentials/resource-logs.md#send-to-azure-storage), query by looking for logs in the `DistributedTracing` category. For example, this query shows all the logged traces:
+
+```Kusto
+// All distributed traces
+AzureDiagnostics
+| where Category == "DistributedTracing"
+| project TimeGenerated, Category, OperationName, Level, CorrelationId, DurationMs, properties_s
+| order by TimeGenerated asc
+```
+
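+Building on the same schema, a hedged variation aggregates ingress latency over time (the field names come from the example rows that follow):
+
+```Kusto
+// Average ingress processing duration in 5-minute bins
+AzureDiagnostics
+| where Category == "DistributedTracing" and OperationName == "DiagnosticIoTHubIngress"
+| summarize AvgDurationMs = avg(DurationMs) by bin(TimeGenerated, 5m)
+| order by TimeGenerated asc
+```
+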
+Here are a few example logs in Log Analytics:
+
+| Time generated | Operation name | Category | Level | Correlation ID | Duration in milliseconds | Properties |
+| -- | -- | -- | -- | -- | -- | -- |
+| 2018-02-22T03:28:28.633Z | DiagnosticIoTHubD2C | DistributedTracing | Informational | 00-8cd869a412459a25f5b4f31311223344-0144d2590aacd909-01 | | `{"deviceId":"AZ3166","messageSize":"96","callerLocalTimeUtc":"2018-02-22T03:27:28.633Z","calleeLocalTimeUtc":"2018-02-22T03:27:28.687Z"}` |
+| 2018-02-22T03:28:38.633Z | DiagnosticIoTHubIngress | DistributedTracing | Informational | 00-8cd869a412459a25f5b4f31311223344-349810a9bbd28730-01 | 20 | `{"isRoutingEnabled":"false","parentSpanId":"0144d2590aacd909"}` |
+| 2018-02-22T03:28:48.633Z | DiagnosticIoTHubEgress | DistributedTracing | Informational | 00-8cd869a412459a25f5b4f31311223344-349810a9bbd28730-01 | 23 | `{"endpointType":"EventHub","endpointName":"myEventHub", "parentSpanId":"0144d2590aacd909"}` |
+
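+The **Correlation ID** values use the W3C trace-context `traceparent` format (version, trace ID, span ID, trace flags), so an entry breaks down like this:
+
+```
+00-8cd869a412459a25f5b4f31311223344-0144d2590aacd909-01
+^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^ ^^
+version      trace-id (32 hex)        span-id (16 hex) flags
+```
+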
+To understand the types of logs, see [Azure IoT Hub distributed tracing logs](monitor-iot-hub-reference.md#distributed-tracing-preview).
+
+## Run a sample application
In this section, you prepare a development environment for use with the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c). Then, you modify one of the samples to enable distributed tracing on your device's telemetry messages.
These instructions are for building the sample on Windows. For other environment
1. Install [CMake](https://cmake.org/). Ensure that it's in your `PATH` by entering `cmake -version` from a command prompt.
-1. Open a command prompt or Git Bash shell. Run the following commands to clone the latest release of the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) GitHub repository:
+1. Open a command prompt or Git Bash shell. Run the following commands to clone the latest release of the public-preview branch of the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) GitHub repository:
    ```cmd
    git clone -b public-preview https://github.com/Azure/azure-iot-sdk-c.git
    ```
These instructions are for building the sample on Windows. For other environment
Expect this operation to take several minutes to finish.
-1. Run the following commands from the `azure-iot-sdk-c` directory to create a `cmake` subdirectory and go to the `cmake` folder:
+1. Run the following commands from the `azure-iot-sdk-c` directory to create a `cmake` subdirectory and go to the `cmake` folder:
    ```cmd
    mkdir cmake
    ```
These instructions are for building the sample on Windows. For other environment
    ```cmd
    cmake ..
    ```
- If CMake can't find your C++ compiler, you might encounter build errors while running the preceding command. If that happens, try running the command in the [Visual Studio command prompt](/dotnet/framework/tools/developer-command-prompt-for-vs).
+ If CMake can't find your C++ compiler, you might encounter build errors while running the preceding command. If that happens, try running the command in the [Visual Studio command prompt](/dotnet/framework/tools/developer-command-prompt-for-vs).
After the build succeeds, the last few output lines will look similar to the following output:
These instructions are for building the sample on Windows. For other environment
### Edit the telemetry sample to enable distributed tracing
-> [!div class="button"]
-> <a href="https://github.com/Azure-Samples/azure-iot-distributed-tracing-sample/blob/master/iothub_ll_telemetry_sample-c/iothub_ll_telemetry_sample.c" target="_blank">Get the sample on GitHub</a>
+In this section, you edit the [iothub_ll_telemetry_sample.c](https://github.com/Azure/azure-iot-sdk-c/tree/public-preview/iothub_client/samples/iothub_ll_telemetry_sample) sample in the SDK repository to enable distributed tracing. Or, you can copy an already edited version of the sample from the [azure-iot-distributed-tracing-sample](https://github.com/Azure-Samples/azure-iot-distributed-tracing-sample/blob/master/iothub_ll_telemetry_sample-c/iothub_ll_telemetry_sample.c) repository.
1. Use an editor to open the `azure-iot-sdk-c/iothub_client/samples/iothub_ll_telemetry_sample/iothub_ll_telemetry_sample.c` source file.
These instructions are for building the sample on Windows. For other environment
1. Go to the `iothub_ll_telemetry_sample` project directory from the CMake directory (`azure-iot-sdk-c/cmake`) that you created earlier, and compile the sample:
- ```cmd
- cd iothub_client/samples/iothub_ll_telemetry_sample
- cmake --build . --target iothub_ll_telemetry_sample --config Debug
- ```
+ ```cmd
+ cd iothub_client/samples/iothub_ll_telemetry_sample
+ cmake --build . --target iothub_ll_telemetry_sample --config Debug
+ ```
1. Run the application. The device sends telemetry that supports distributed tracing.
- ```cmd
- Debug/iothub_ll_telemetry_sample.exe
- ```
+ ```cmd
+ Debug/iothub_ll_telemetry_sample.exe
+ ```
-1. Keep the app running. You can observe the message being sent to IoT Hub by looking at the console window.
+1. Keep the app running. You can observe the messages being sent to IoT Hub in the console window.
-<!-- For a client app that can receive sampling decisions from the cloud, check out [this sample](https://aka.ms/iottracingCsample). -->
+For a client app that can receive sampling decisions from the cloud, try the [iothub_devicetwin_sample.c](https://github.com/Azure-Samples/azure-iot-distributed-tracing-sample/tree/master/iothub_devicetwin_sample-c) sample in the distributed tracing sample repo.
-### Workaround for third-party clients
+### Workaround for non-Microsoft clients
Implementing the distributed tracing feature without using the C SDK is more complex. We don't recommend it.
-First, you must implement all the IoT Hub protocol primitives in your messages by following the developer guide [Create and read IoT Hub messages](iot-hub-devguide-messages-construct.md). Then, edit the protocol properties in the MQTT and AMQP messages to add `tracestate` as a system property.
+First, you must implement all the IoT Hub protocol primitives in your messages by following the developer guide [Create and read IoT Hub messages](iot-hub-devguide-messages-construct.md). Then, edit the protocol properties in the MQTT and AMQP messages to add `tracestate` as a system property.
Specifically:
-* For MQTT, add `%24.tracestate=timestamp%3d1539243209` to the message topic. Replace `1539243209` with the creation time of the message in Unix time-stamp format. As an example, refer to the implementation [in the C SDK](https://github.com/Azure/azure-iot-sdk-c/blob/6633c5b18710febf1af7713cf1a336fd38f623ed/iothub_client/src/iothubtransport_mqtt_common.c#L761).
-* For AMQP, add `key("tracestate")` and `value("timestamp=1539243209")` as message annotation. For a reference implementation, see the [uamqp_messaging.c](https://github.com/Azure/azure-iot-sdk-c/blob/6633c5b18710febf1af7713cf1a336fd38f623ed/iothub_client/src/uamqp_messaging.c#L527) file.
+- For MQTT, add `%24.tracestate=timestamp%3d1539243209` to the message topic. Replace `1539243209` with the creation time of the message in Unix time-stamp format. As an example, refer to the implementation [in the C SDK](https://github.com/Azure/azure-iot-sdk-c/blob/6633c5b18710febf1af7713cf1a336fd38f623ed/iothub_client/src/iothubtransport_mqtt_common.c#L761).
+- For AMQP, add `key("tracestate")` and `value("timestamp=1539243209")` as message annotation. For a reference implementation, see the [uamqp_messaging.c](https://github.com/Azure/azure-iot-sdk-c/blob/6633c5b18710febf1af7713cf1a336fd38f623ed/iothub_client/src/uamqp_messaging.c#L527) file.
To control the percentage of messages that contain this property, implement logic to listen to cloud-initiated events such as twin updates.
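+
+As a hedged illustration for the MQTT case (the hub, device, and credential values are placeholders), a raw publish that carries `tracestate` in the topic's property bag might look like this:
+
+```bash
+# Publish a device-to-cloud message over MQTT with tracestate in the topic property bag.
+mosquitto_pub \
+    -h myhub.azure-devices.net -p 8883 --cafile ca.pem \
+    -i myDevice \
+    -u "myhub.azure-devices.net/myDevice/?api-version=2021-04-12" \
+    -P "<SAS token>" \
+    -t "devices/myDevice/messages/events/%24.tracestate=timestamp%3d1539243209" \
+    -m '{"temperature": 21.5}'
+```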
-## Update sampling options
-
-To change the percentage of messages to be traced from the cloud, you must update the device twin. You can make updates by using the JSON editor in the Azure portal or the IoT Hub service SDK. The following subsections provide examples.
-
-### Update by using the portal
-
-1. Go to your IoT hub in the [Azure portal](https://portal.azure.com/), and then select **Devices** from the menu.
-
-1. Choose your device.
-
-1. Under **Distributed Tracing (preview)**, select the gear icon. In the panel that opens:
-
- 1. Select the **Enable** option.
- 1. For **Sampling rate**, choose a percentage between 0 and 100.
- 1. Select **Save**.
-
- :::image type="content" source="media/iot-hub-distributed-tracing/enable-distributed-tracing.png" alt-text="Screenshot that shows how to enable distributed tracing in the Azure portal." lightbox="media/iot-hub-distributed-tracing/enable-distributed-tracing.png":::
-
-1. Wait a few seconds, and then select **Refresh**. If the device successfully acknowledges your changes, a sync icon with a check mark appears.
-
-1. Go back to the console window for the telemetry message app. Confirm that messages are being sent with `tracestate` in the application properties.
-
- :::image type="content" source="media/iot-hub-distributed-tracing/MicrosoftTeams-image.png" alt-text="Screenshot that shows trace state messages." lightbox="media/iot-hub-distributed-tracing/MicrosoftTeams-image.png":::
-
-1. (Optional) Change the sampling rate to a different value, and observe the change in frequency that messages include `tracestate` in the application properties.
-
-### Update by using the Azure IoT Hub extension for Visual Studio Code
-
-1. With Visual Studio Code installed, install the latest version of the [Azure IoT Hub extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit).
-
-1. Open Visual Studio Code, and go to the **Explorer** tab and the **Azure IoT Hub** section.
-
-1. Select the ellipsis (...) next to **Azure IoT Hub** to see a submenu. Choose the **Select IoT Hub** option to retrieve your IoT hub from Azure.
-
- In the pop-up window that appears at the top of Visual Studio Code, you can select your subscription and IoT hub.
-
- See a demonstration on the [vscode-azure-iot-toolkit](https://github.com/Microsoft/vscode-azure-iot-toolkit/wiki/Select-IoT-Hub) GitHub page.
-
-1. Expand your device under **Devices**. Right-click **Distributed Tracing Setting (Preview)**, and then select **Update Distributed Tracing Setting (Preview)**.
-
-1. In the pop-up pane that appears at the top of the window, select **Enable**.
-
- :::image type="content" source="media/iot-hub-distributed-tracing/enable-distributed-tracing-vsc.png" alt-text="Screenshot that shows how to enable distributed tracing in the Azure IoT Hub extension.":::
-
- **Enable Distributed Tracing: Enabled** now appears under **Distributed Tracing Setting (Preview)** > **Desired**.
-
-1. In the pop-up pane that appears for the sampling rate, enter **100** and then select the Enter key.
-
- ![Screenshot that shows entering a sampling rate](./media/iot-hub-distributed-tracing/update-distributed-tracing-setting-3.png)
-
- **Sample rate: 100(%)** now also appears under **Distributed Tracing Setting (Preview)** > **Desired**.
-
-### Bulk update for multiple devices
-
-To update the distributed tracing sampling configuration for multiple devices, use [automatic device configuration](./iot-hub-automatic-device-management.md). Follow this twin schema:
-
-```json
-{
- "properties": {
- "desired": {
- "azureiot*com^dtracing^1": {
- "sampling_mode": 1,
- "sampling_rate": 100
- }
- }
- }
-}
-```
-
-| Element name | Required | Type | Description |
-|--|-||--|
-| `sampling_mode` | Yes | Integer | Two mode values are currently supported to turn sampling on and off. `1` is on, and `2` is off. |
-| `sampling_rate` | Yes | Integer | This value is a percentage. Only values from `0` to `100` (inclusive) are permitted. |
-
-## Query and visualize
-
-To see all the traces that an IoT hub has logged, query the log store that you selected in diagnostic settings. This section shows how to query by using Log Analytics.
-
-If you've set up [Log Analytics with resource logs](../azure-monitor/essentials/resource-logs.md#send-to-azure-storage), query by looking for logs in the `DistributedTracing` category. For example, this query shows all the logged traces:
-
-```Kusto
-// All distributed traces
-AzureDiagnostics
-| where Category == "DistributedTracing"
-| project TimeGenerated, Category, OperationName, Level, CorrelationId, DurationMs, properties_s
-| order by TimeGenerated asc
-```
-
-Here are a few example logs in Log Analytics:
-
-| Time generated | Operation name | Category | Level | Correlation ID | Duration in milliseconds | Properties |
-|--||--|||||
-| 2018-02-22T03:28:28.633Z | DiagnosticIoTHubD2C | DistributedTracing | Informational | 00-8cd869a412459a25f5b4f31311223344-0144d2590aacd909-01 | | `{"deviceId":"AZ3166","messageSize":"96","callerLocalTimeUtc":"2018-02-22T03:27:28.633Z","calleeLocalTimeUtc":"2018-02-22T03:27:28.687Z"}` |
-| 2018-02-22T03:28:38.633Z | DiagnosticIoTHubIngress | DistributedTracing | Informational | 00-8cd869a412459a25f5b4f31311223344-349810a9bbd28730-01 | 20 | `{"isRoutingEnabled":"false","parentSpanId":"0144d2590aacd909"}` |
-| 2018-02-22T03:28:48.633Z | DiagnosticIoTHubEgress | DistributedTracing | Informational | 00-8cd869a412459a25f5b4f31311223344-349810a9bbd28730-01 | 23 | `{"endpointType":"EventHub","endpointName":"myEventHub", "parentSpanId":"0144d2590aacd909"}` |
-
-To understand the types of logs, see [Azure IoT Hub distributed tracing logs](monitor-iot-hub-reference.md#distributed-tracing-preview).
-
-## Understand Azure IoT distributed tracing
-
-Many IoT solutions, including the [Azure IoT reference architecture](/azure/architecture/reference-architectures/iot) (English only), generally follow a variant of the [microservice architecture](/azure/architecture/microservices/). As an IoT solution grows more complex, you end up using a dozen or more microservices. These microservices might or might not be from Azure.
-
-Pinpointing where IoT messages are dropping or slowing down can be challenging. For example, imagine that you have an IoT solution that uses five different Azure services and 1,500 active devices. Each device sends 10 device-to-cloud messages per second, for a total of 15,000 messages per second. But you notice that your web app sees only 10,000 messages per second. How do you find the culprit?
-
-For you to reconstruct the flow of an IoT message across services, each service should propagate a *correlation ID* that uniquely identifies the message. After Azure Monitor collects correlation IDs in a centralized system, you can use those IDs to see message flow. This method is called the [distributed tracing pattern](/azure/architecture/microservices/logging-monitoring#distributed-tracing).
-
-To support wider adoption for distributed tracing, Microsoft is contributing to [W3C standard proposal for distributed tracing](https://w3c.github.io/trace-context/). When distributed tracing support for IoT Hub is enabled, it follows this flow:
-
-1. A message is generated on the IoT device.
-1. The IoT device decides (with help from the cloud) that this message should be assigned with a trace context.
-1. The SDK adds a `tracestate` value to the message property, which contains the time stamp for message creation.
-1. The IoT device sends the message to IoT Hub.
-1. The message arrives at the IoT Hub gateway.
-1. IoT Hub looks for the `tracestate` value in the message properties and checks whether it's in the correct format. If so, IoT Hub generates a globally unique `trace-id` value for the message and a `span-id` value for the "hop." IoT Hub records these values in the [IoT Hub distributed tracing logs](monitor-iot-hub-reference.md#distributed-tracing-preview) under the `DiagnosticIoTHubD2C` operation.
-1. When the message processing is finished, IoT Hub generates another `span-id` value and logs it, along with the existing `trace-id` value, under the `DiagnosticIoTHubIngress` operation.
-1. If routing is enabled for the message, IoT Hub writes it to the custom endpoint. IoT Hub logs another `span-id` value with the same `trace-id` value under the `DiagnosticIoTHubEgress` category.
-1. IoT Hub repeats the preceding steps for each message that's generated.
-
-## Public preview limits and considerations
-- The proposal for the W3C Trace Context standard is currently a working draft.
-- The only development language that the client SDK currently supports is C.
-- Cloud-to-device twin capability isn't available for the [IoT Hub basic tier](iot-hub-scaling.md#basic-and-standard-tiers). However, IoT Hub still logs to Azure Monitor if it sees a properly composed trace context header.
-- To ensure efficient operation, IoT Hub imposes a throttle on the rate of logging that can occur as part of distributed tracing.
-
## Next steps

- To learn more about the general distributed tracing pattern in microservices, see [Microservice architecture pattern: distributed tracing](https://microservices.io/patterns/observability/distributed-tracing.html).
-- To set up configuration to apply distributed tracing settings to a large number of devices, see [Configure and monitor IoT devices at scale](./iot-hub-automatic-device-management.md).
-- To learn more about Azure Monitor, see [What is Azure Monitor?](../azure-monitor/overview.md).
-- To learn more about using Azure Monitor with IoT Hub, see [Monitor IoT Hub](monitor-iot-hub.md).
iot-operations Tutorial Connect Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/tutorial-connect-event-grid.md
Previously updated : 11/15/2023
Last updated : 02/28/2024

#CustomerIntent: As an operator, I want to configure IoT MQ to bridge to Azure Event Grid MQTT broker PaaS so that I can process my IoT data at the edge and in the cloud.
In this tutorial, you learn how to configure IoT MQ for bi-directional MQTT brid
* [Deploy Azure IoT Operations](../get-started/quickstart-deploy.md)
+## Set environment variables
+
+Sign in with Azure CLI:
+
+```azurecli
+az login
+```
+
+Set environment variables for the rest of the setup. Replace values in `<>` with valid values or names of your choice. A new Azure Event Grid namespace and topic space are created in your Azure subscription based on the names you provide:
+
+```azurecli
+# For this tutorial, the steps assume the IoT Operations cluster and the Event Grid
+# are in the same subscription, resource group, and location.
+
+# Name of the resource group of Azure Event Grid and IoT Operations cluster
+export RESOURCE_GROUP=<RESOURCE_GROUP_NAME>
+
+# Azure region of Azure Event Grid and IoT Operations cluster
+export LOCATION=<LOCATION>
+
+# Name of the Azure Event Grid namespace
+export EVENT_GRID_NAMESPACE=<EVENT_GRID_NAMESPACE>
+
+# Name of the Arc-enabled IoT Operations cluster
+export CLUSTER_NAME=<CLUSTER_NAME>
+
+# Subscription ID of Azure Event Grid and IoT Operations cluster
+export SUBSCRIPTION_ID=<SUBSCRIPTION_ID>
+```
+ ## Create Event Grid namespace with MQTT broker enabled
-[Create Event Grid namespace](../../event-grid/create-view-manage-namespaces.md) with Azure CLI. Replace `<EG_NAME>`, `<RESOURCE_GROUP>`, and `<LOCATION>` with your own values. The location should be the same as the one you used to deploy Azure IoT Operations.
+[Create Event Grid namespace](../../event-grid/create-view-manage-namespaces.md) with Azure CLI. The location should be the same as the one you used to deploy Azure IoT Operations.
```azurecli
-az eventgrid namespace create -n <EG_NAME> -g <RESOURCE_GROUP> --location <LOCATION> --topic-spaces-configuration "{state:Enabled,maximumClientSessionsPerAuthenticationName:3}"
+az eventgrid namespace create \
+ --namespace-name $EVENT_GRID_NAMESPACE \
+ --resource-group $RESOURCE_GROUP \
+ --location $LOCATION \
+ --topic-spaces-configuration "{state:Enabled,maximumClientSessionsPerAuthenticationName:3}"
```

By setting the `topic-spaces-configuration`, this command creates a namespace with:
The max client sessions option allows IoT MQ to spawn multiple instances and sti
## Create a topic space
-In the Event Grid namespace, create a topic space named `tutorial` with a topic template `telemetry/#`. Replace `<EG_NAME>` and `<RESOURCE_GROUP>` with your own values.
+In the Event Grid namespace, create a topic space named `tutorial` with a topic template `telemetry/#`.
```azurecli
-az eventgrid namespace topic-space create -g <RESOURCE_GROUP> --namespace-name <EG_NAME> --name tutorial --topic-templates "telemetry/#"
+az eventgrid namespace topic-space create \
+ --resource-group $RESOURCE_GROUP \
+ --namespace-name $EVENT_GRID_NAMESPACE \
+ --name tutorial \
+ --topic-templates "telemetry/#"
```

By using the `#` wildcard in the topic template, you can publish to any topic under the `telemetry` topic space. For example, `telemetry/temperature` or `telemetry/humidity`.

## Give IoT MQ access to the Event Grid topic space
-Using `az k8s-extension show`, find the principal ID for the Azure IoT MQ Arc extension.
+Using `az k8s-extension show`, find the principal ID for the Azure IoT MQ Arc extension. The command stores the principal ID in a variable for later use.
```azurecli
-az k8s-extension show --resource-group <RESOURCE_GROUP> --cluster-name <CLUSTER_NAME> --name mq --cluster-type connectedClusters --query identity.principalId -o tsv
+export PRINCIPAL_ID=$(az k8s-extension show \
+ --resource-group $RESOURCE_GROUP \
+ --cluster-name $CLUSTER_NAME \
+ --name mq \
+ --cluster-type connectedClusters \
+ --query identity.principalId -o tsv)
+echo $PRINCIPAL_ID
```

Take note of the output value for `identity.principalId`, which is a GUID value with the following format:
Take note of the output value for `identity.principalId`, which is a GUID value
```
d84481ae-9181-xxxx-xxxx-xxxxxxxxxxxx
```
-Then, use Azure CLI to assign publisher and subscriber roles to IoT MQ for the topic space you created. Replace `<MQ_ID>` with the principal ID you found in the previous step, and replace `<SUBSCRIPTION_ID>`, `<RESOURCE_GROUP>`, `<EG_NAME>` with your values matching the Event Grid namespace you created.
+Then, use Azure CLI to assign publisher and subscriber roles to IoT MQ for the topic space you created.
-Assigning the publisher role:
+Assign the publisher role:
```azurecli
-az role assignment create --assignee <MQ_ID> --role "EventGrid TopicSpaces Publisher" --scope /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.EventGrid/namespaces/<EG_NAME>/topicSpaces/tutorial
+az role assignment create \
+ --assignee $PRINCIPAL_ID \
+ --role "EventGrid TopicSpaces Publisher" \
+ --scope /subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.EventGrid/namespaces/$EVENT_GRID_NAMESPACE/topicSpaces/tutorial
```
-Assigning the subscriber role:
+Assign the subscriber role:
```azurecli
-az role assignment create --assignee <MQ_ID> --role "EventGrid TopicSpaces Subscriber" --scope /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.EventGrid/namespaces/<EG_NAME>/topicSpaces/tutorial
+az role assignment create \
+ --assignee $PRINCIPAL_ID \
+ --role "EventGrid TopicSpaces Subscriber" \
+ --scope /subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.EventGrid/namespaces/$EVENT_GRID_NAMESPACE/topicSpaces/tutorial
```

> [!TIP]
az role assignment create --assignee <MQ_ID> --role "EventGrid TopicSpaces Subsc
## Event Grid MQTT broker hostname
-Use Azure CLI to get the Event Grid MQTT broker hostname. Replace `<EG_NAME>` and `<RESOURCE_GROUP>` with your own values.
+Use Azure CLI to get the Event Grid MQTT broker hostname.
```azurecli
-az eventgrid namespace show -g <RESOURCE_GROUP> -n <EG_NAME> --query topicSpacesConfiguration.hostname -o tsv
+az eventgrid namespace show \
+ --resource-group $RESOURCE_GROUP \
+ --namespace-name $EVENT_GRID_NAMESPACE \
+ --query topicSpacesConfiguration.hostname \
+ -o tsv
```

Take note of the output value for `topicSpacesConfiguration.hostname`, which is a hostname value that looks like:
machine-learning How To Machine Learning Interpretability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-machine-learning-interpretability.md
Previously updated : 11/04/2022
Last updated : 02/29/2024

# Model interpretability
Interpret-Community serves as the host for the following supported explainers, a
|Mimic Explainer (Global Surrogate)| Mimic Explainer is based on the idea of training [global surrogate models](https://christophm.github.io/interpretable-ml-book/global.html) to mimic opaque-box models. A global surrogate model is an intrinsically interpretable model that's trained to approximate the predictions of *any opaque-box model* as accurately as possible. Data scientists can interpret the surrogate model to draw conclusions about the opaque-box model. You can use one of the following interpretable models as your surrogate model: LightGBM (LGBMExplainableModel), Linear Regression (LinearExplainableModel), Stochastic Gradient Descent explainable model (SGDExplainableModel), or Decision Tree (DecisionTreeExplainableModel).|Model-agnostic|
|Permutation Feature Importance Explainer| Permutation Feature Importance (PFI) is a technique used to explain classification and regression models that's inspired by [Breiman's Random Forests paper](https://www.stat.berkeley.edu/~breiman/randomforest2001.pdf) (see section 10). At a high level, the way it works is by randomly shuffling data one feature at a time for the entire dataset and calculating how much the performance metric of interest changes. The larger the change, the more important that feature is. PFI can explain the overall behavior of *any underlying model* but doesn't explain individual predictions. |Model-agnostic|
-Besides the interpretability techniques described above, we support another SHAP-based explainer, called Tabular Explainer. Depending on the model, Tabular Explainer uses one of the supported SHAP explainers:
+Besides the interpretability techniques described in the previous section, we support another SHAP-based explainer, called Tabular Explainer. Depending on the model, Tabular Explainer uses one of the supported SHAP explainers:
* Tree Explainer for all tree-based models
* Deep Explainer for deep neural network (DNN) models
The `azureml.interpret` package of the SDK supports models that are trained with
* `iml.datatypes.DenseData`
* `scipy.sparse.csr_matrix`
-The explanation functions accept both models and pipelines as input. If a model is provided, it must implement the prediction function `predict` or `predict_proba` that conforms to the Scikit convention. If your model doesn't support this, you can wrap it in a function that generates the same outcome as `predict` or `predict_proba` in Scikit and use that wrapper function with the selected explainer.
+The explanation functions accept both models and pipelines as input. If a model is provided, it must implement the prediction function `predict` or `predict_proba` that conforms to the Scikit convention. If your model doesn't support this, you can wrap it in a function that generates the same outcome as `predict` or `predict_proba` in Scikit and use that wrapper function with the selected explainer.
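+
+As a minimal sketch (the wrapped model's `run` method is a hypothetical name, not an azureml API), such a wrapper could look like this:
+
+```python
+import numpy as np
+
+class ScikitConventionWrapper:
+    """Adapts a model without a Scikit-style entry point so an explainer can call it."""
+
+    def __init__(self, model):
+        self.model = model
+
+    def predict(self, X):
+        # Delegate to the wrapped model's own inference method and return
+        # predictions as a NumPy array, matching the Scikit predict convention.
+        return np.asarray(self.model.run(X))
+```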
-If you provide a pipeline, the explanation function assumes that the running pipeline script returns a prediction. When you use this wrapping technique, `azureml.interpret` can support models that are trained via PyTorch, TensorFlow, and Keras deep learning frameworks as well as classic machine learning models.
+If you provide a pipeline, the explanation function assumes that the running pipeline script returns a prediction. When you use this wrapping technique, `azureml.interpret` can support models that are trained via the PyTorch, TensorFlow, and Keras deep learning frameworks, and classic machine learning models.
## Local and remote compute target
-The `azureml.interpret` package is designed to work with both local and remote compute targets. If you run the package locally, the SDK functions won't contact any Azure services.
+The `azureml.interpret` package is designed to work with both local and remote compute targets. If you run the package locally, the SDK functions won't contact any Azure services.
You can run the explanation remotely on Azure Machine Learning Compute and log the explanation info into the Azure Machine Learning Run History Service. After this information is logged, reports and visualizations from the explanation are readily available on Azure Machine Learning studio for analysis.
machine-learning Concept Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-data.md
To use the data in your cloud-based storage solution, we recommend this data del
This screenshot shows the recommended workflow:

## Connect to storage with datastores
For more information, visit [Create a dataset monitor](how-to-monitor-datasets.m
## Next steps

- [Create a dataset in Azure Machine Learning studio or with the Python SDK](how-to-create-register-datasets.md)
-- Try out dataset training examples with our [sample notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/work-with-data/)
+- Try out dataset training examples with our [sample notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/work-with-data/)
mariadb Concepts Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-connectivity-architecture.md
We strongly encourage customers to move away from relying on any individual Gate
| Australia Central | 20.36.105.32 | 20.36.105.32/29, 20.53.48.96/27 |
| Australia Central2 | 20.36.113.32 | 20.36.113.32/29, 20.53.56.32/27 |
| Australia East | 13.70.112.32 | 13.70.112.32/29, 40.79.160.32/29, 40.79.168.32/29, 40.79.160.32/29, 20.53.46.128/27 |
-| Australia South East |13.77.49.33 |3.77.49.32/29, 104.46.179.160/27|
+| Australia South East |13.77.49.33 |13.77.49.32/29, 104.46.179.160/27|
| Brazil South | 191.233.201.8, 191.233.200.16 | 191.234.153.32/27, 191.234.152.32/27, 191.234.157.136/29, 191.233.200.32/29, 191.234.144.32/29, 191.234.142.160/27|
|Brazil Southeast|191.233.48.2|191.233.48.32/29, 191.233.15.160/27|
-| Canada Central | 13.71.168.32| 13.71.168.32/29, 20.38.144.32/29, 52.246.152.32/29, 20.48.196.32/27|
+| Canada Central | 13.71.168.32| 13.71.168.32/29, 20.38.144.32/29, 52.246.152.32/29, 20.48.196.32/27|
| Canada East |40.69.105.32 | 40.69.105.32/29, 52.139.106.192/27 |
| Central US | 52.182.136.37, 52.182.136.38 | 104.208.21.192/29, 13.89.168.192/29, 52.182.136.192/29, 20.40.228.128/27|
-| China East | 52.130.112.139 | 52.130.112.136/29, 52.130.13.96/2752.130.112.136/29, 52.130.13.96/27|
+| China East | 52.130.112.139 | 52.130.112.136/29, 52.130.13.96/27|
| China East 2 | 40.73.82.1, 52.130.120.89 | 52.130.120.88/29, 52.130.7.0/27|
| China North | 52.130.128.89| 52.130.128.88/29, 40.72.77.128/27 |
| China North 2 |40.73.50.0 | 52.130.40.64/29, 52.130.21.160/27|
| East Asia |13.75.33.20, 13.75.33.21 | 20.205.77.176/29, 20.205.83.224/29, 20.205.77.200/29, 13.75.32.192/29, 13.75.33.192/29, 20.195.72.32/27|
-| East US | 40.71.8.203, 40.71.83.113|20.42.65.64/29, 20.42.73.0/29, 52.168.116.64/29, 20.62.132.160/27|
+| East US | 40.71.8.203, 40.71.83.113|20.42.65.64/29, 20.42.73.0/29, 52.168.116.64/29, 20.62.132.160/27|
| East US 2 |52.167.105.38, 40.70.144.38| 104.208.150.192/29, 40.70.144.192/29, 52.167.104.192/29, 20.62.58.128/27|
| France Central |40.79.129.1 | 40.79.128.32/29, 40.79.136.32/29, 40.79.144.32/29, 20.43.47.192/27 |
| France South |40.79.176.40 | 40.79.176.40/29, 40.79.177.32/29, 52.136.185.0/27|
| Germany North| 51.116.56.0| 51.116.57.32/29, 51.116.54.96/27|
| Germany West Central | 51.116.152.0 | 51.116.152.32/29, 51.116.240.32/29, 51.116.248.32/29, 51.116.149.32/27|
-| India Central |20.192.96.33 | 40.80.48.32/29, 104.211.86.32/29, 20.192.96.32/29, 20.192.43.160/27|
-| India South | 40.78.192.32| 40.78.192.32/29, 40.78.193.32/29, 52.172.113.96/27|
-| India West | 104.211.144.32| 104.211.144.32/29, 104.211.145.32/29, 52.136.53.160/27|
+| Central India | 20.192.96.33 | 40.80.48.32/29, 104.211.86.32/29, 20.192.96.32/29, 20.192.43.160/27|
+| South India | 40.78.192.32| 40.78.192.32/29, 40.78.193.32/29, 52.172.113.96/27|
+| West India | 104.211.144.32| 104.211.144.32/29, 104.211.145.32/29, 52.136.53.160/27|
| Japan East | 40.79.184.8, 40.79.192.23| 13.78.104.32/29, 40.79.184.32/29, 40.79.192.32/29, 20.191.165.160/27 |
| Japan West |40.74.96.6| 20.18.179.192/29, 40.74.96.32/29, 20.189.225.160/27 |
| Jio India Central| 20.192.233.32|20.192.233.32/29, 20.192.48.32/27|
mysql Concepts Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-connectivity-architecture.md
We strongly encourage customers to move away from relying on any individual Gate
| Australia Central | 20.36.105.32 | 20.36.105.32/29, 20.53.48.96/27 |
| Australia Central2 | 20.36.113.32 | 20.36.113.32/29, 20.53.56.32/27 |
| Australia East | 13.70.112.32 | 13.70.112.32/29, 40.79.160.32/29, 40.79.168.32/29, 40.79.160.32/29, 20.53.46.128/27 |
-| Australia South East |13.77.49.33 |3.77.49.32/29, 104.46.179.160/27|
+| Australia South East |13.77.49.33 |13.77.49.32/29, 104.46.179.160/27|
| Brazil South | 191.233.201.8, 191.233.200.16 | 191.234.153.32/27, 191.234.152.32/27, 191.234.157.136/29, 191.233.200.32/29, 191.234.144.32/29, 191.234.142.160/27|
|Brazil Southeast|191.233.48.2|191.233.48.32/29, 191.233.15.160/27|
-| Canada Central | 13.71.168.32| 13.71.168.32/29, 20.38.144.32/29, 52.246.152.32/29, 20.48.196.32/27|
+| Canada Central | 13.71.168.32| 13.71.168.32/29, 20.38.144.32/29, 52.246.152.32/29, 20.48.196.32/27|
| Canada East |40.69.105.32 | 40.69.105.32/29, 52.139.106.192/27 |
| Central US | 52.182.136.37, 52.182.136.38 | 104.208.21.192/29, 13.89.168.192/29, 52.182.136.192/29, 20.40.228.128/27|
-| China East | 52.130.112.139 | 52.130.112.136/29, 52.130.13.96/2752.130.112.136/29, 52.130.13.96/27|
+| China East | 52.130.112.139 | 52.130.112.136/29, 52.130.13.96/27|
| China East 2 | 40.73.82.1, 52.130.120.89 | 52.130.120.88/29, 52.130.7.0/27|
| China North | 52.130.128.89| 52.130.128.88/29, 40.72.77.128/27 |
| China North 2 |40.73.50.0 | 52.130.40.64/29, 52.130.21.160/27|
| East Asia |13.75.33.20, 13.75.33.21 | 20.205.77.176/29, 20.205.83.224/29, 20.205.77.200/29, 13.75.32.192/29, 13.75.33.192/29, 20.195.72.32/27|
-| East US | 40.71.8.203, 40.71.83.113|20.42.65.64/29, 20.42.73.0/29, 52.168.116.64/29, 20.62.132.160/27|
+| East US | 40.71.8.203, 40.71.83.113|20.42.65.64/29, 20.42.73.0/29, 52.168.116.64/29, 20.62.132.160/27|
| East US 2 |52.167.105.38, 40.70.144.38| 104.208.150.192/29, 40.70.144.192/29, 52.167.104.192/29, 20.62.58.128/27|
| France Central |40.79.129.1 | 40.79.128.32/29, 40.79.136.32/29, 40.79.144.32/29, 20.43.47.192/27 |
| France South |40.79.176.40 | 40.79.176.40/29, 40.79.177.32/29, 52.136.185.0/27|
| Germany North| 51.116.56.0| 51.116.57.32/29, 51.116.54.96/27|
| Germany West Central | 51.116.152.0 | 51.116.152.32/29, 51.116.240.32/29, 51.116.248.32/29, 51.116.149.32/27|
-| India Central |20.192.96.33 | 40.80.48.32/29, 104.211.86.32/29, 20.192.96.32/29, 20.192.43.160/27|
-| India South | 40.78.192.32| 40.78.192.32/29, 40.78.193.32/29, 52.172.113.96/27|
-| India West | 104.211.144.32| 104.211.144.32/29, 104.211.145.32/29, 52.136.53.160/27|
+| Central India | 20.192.96.33 | 40.80.48.32/29, 104.211.86.32/29, 20.192.96.32/29, 20.192.43.160/27|
+| South India | 40.78.192.32| 40.78.192.32/29, 40.78.193.32/29, 52.172.113.96/27|
+| West India | 104.211.144.32| 104.211.144.32/29, 104.211.145.32/29, 52.136.53.160/27|
| Japan East | 40.79.184.8, 40.79.192.23| 13.78.104.32/29, 40.79.184.32/29, 40.79.192.32/29, 20.191.165.160/27 |
| Japan West |40.74.96.6| 20.18.179.192/29, 40.74.96.32/29, 20.189.225.160/27 |
| Jio India Central| 20.192.233.32|20.192.233.32/29, 20.192.48.32/27|
operational-excellence Overview Relocation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/overview-relocation.md
+
+ Title: Relocation guidance overview for Microsoft Azure products and services (Preview)
+description: Relocation guidance overview for Microsoft Azure products and services. View Azure service specific relocation guides.
+++ Last updated : 01/16/2024++
+ - subject-relocation
++
+# Azure services relocation guidance overview (Preview)
+
+As Microsoft continues to expand Azure global infrastructure and launch new Azure regions worldwide, there's an increasing number of options available for you to relocate your workloads into new regions. Region relocation options vary by service and by workload architecture. To successfully relocate a workload to another region, you need to plan your relocation strategy with an understanding of what each service in your workload requires and supports.
+
+Azure region relocation documentation (Preview) contains service-specific relocation guidance for Azure products and services. The relocation documentation set is founded on [Azure Cloud Adoption Framework - Relocate cloud workloads](/azure/cloud-adoption-framework/relocate/) and on the following Well-Architected Framework (WAF) Operational Excellence principles:
+
+- [Deploy with confidence](/azure/well-architected/operational-excellence/principles#deploy-with-confidence)
+- [Adopt safe deployment practices](/azure/well-architected/operational-excellence/principles#adopt-safe-deployment-practices)
++
+Each service-specific guide can include information on topics such as:
+
+- [Service-relocation automation tools](/azure/cloud-adoption-framework/relocate/select#select-service-relocation-automation).
+- [Data relocation automation](/azure/cloud-adoption-framework/relocate/select#select-data-relocation-automation).
+- [Cutover approaches](/azure/cloud-adoption-framework/relocate/select#select-cutover-approach).
+- Possible and actual service dependencies that also require relocation planning.
+- Lists of considerations, features, and limitations in relation to relocation planning for that service.
+- Links to how-tos and relevant product-specific relocation information.
++
+## Service categories across region types
++
+## Azure services relocation guides
+
+The following tables provide links to each Azure service relocation document. The tables also indicate which relocation methods each service supports.
+
+### ![An icon that signifies this service is foundational.](./media/relocation/icon-foundational.svg) Foundational services
+
+| Product | Relocation with data | Relocation without data | Resource Mover |
+| --- | --- | --- | --- |
+[Azure Event Hubs](relocation-event-hub.md)| ❌ | ✅| ❌ |
+[Azure Event Hubs Cluster](relocation-event-hub-cluster.md)| ❌ | ✅| ❌ |
+[Azure Key Vault](./relocation-key-vault.md)| ✅ | ✅| ❌ |
+[Azure Virtual Network](./relocation-virtual-network.md)| ❌ | ✅| ✅ |
+[Azure Virtual Network - Network Security Groups](./relocation-virtual-network-nsg.md)| ❌ | ✅| ✅ |
+
+### ![An icon that signifies this service is mainstream.](./media/relocation/icon-mainstream.svg) Mainstream services
+
+| Product | Relocation with data | Relocation without data | Resource Mover |
+| --- | --- | --- | --- |
+[Azure Monitor - Log Analytics](./relocation-log-analytics.md)| ❌ | ✅| ❌ |
+[Azure Database for PostgreSQL](./relocation-postgresql-flexible-server.md)| ✅ | ✅| ❌ |
+[Azure Private Link Service](./relocation-private-link.md) | ❌ | ✅| ❌ |
+[Storage Account](relocation-storage-account.md)| ✅ | ✅| ❌ |
+++
+### ![An icon that signifies this service is strategic.](./media/relocation/icon-strategic.svg) Strategic services
+
+| Product | Relocation with data | Relocation without data | Resource Mover |
+| --- | --- | --- | --- |
+[Azure Automation](./relocation-automation.md)| ✅ | ✅| ❌ |
+++
+## Additional information
+
+- [Azure Resources Mover documentation](/azure/resource-mover/)
+- [Azure Resource Manager (ARM) documentation](/azure/azure-resource-manager/templates/)
++
operational-excellence Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/overview.md
+
+ Title: Azure Operational Excellence documentation (Preview)
+description: Azure Operational Excellence documentation, with guidance for specific workload operations and projects
++ Last updated : 03/01/2024+++
+ - subject-relocation
++
+# Azure Operational Excellence documentation (Preview)
++
+The Azure Operational Excellence documentation is an organized collection of service-specific guidance in the context of targeted workflow operations. Each service-specific operational excellence guidance set is grounded in the [Azure Well-Architected Framework (WAF) principles for Operational Excellence](/azure/well-architected/operational-excellence/principles), and designed based on the [Cloud Adoption Framework (CAF)](/azure/cloud-adoption-framework/) architecture.
+
+Currently in preview, the Operational Excellence documentation set contains the following service-specific content:
+
+- [Service region relocation](./overview-relocation.md). The region relocation documentation is designed to provide service-specific relocation guidance, so that you can move your services from one region to another safely and with confidence, in accordance with the following Well-Architected Framework (WAF) principles:
+ - [Deploy with confidence](/azure/well-architected/operational-excellence/principles#deploy-with-confidence)
+ - [Adopt safe deployment practices](/azure/well-architected/operational-excellence/principles#adopt-safe-deployment-practices)
+
+ Each service-specific guide is derived from [Relocate cloud workloads](/azure/cloud-adoption-framework/relocate/) in the Cloud Adoption Framework for Azure.
++
operational-excellence Relocation Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/relocation-automation.md
+
+ Title: Relocation guidance for Azure Automation
+description: Learn how to relocate an Azure Automation account to a new region
+++ Last updated : 01/19/2024+++
+ - subject-relocation
++
+# Relocate Azure Automation to another region
+
+This article covers relocation guidance for [Azure Automation](../automation/overview.md) across regions.
+
+If your Azure Automation instance doesn't have any configuration and the instance itself needs to be moved alone, you can choose to redeploy the Azure Automation instance by using [Bicep, ARM Template, or Terraform](/azure/templates/microsoft.automation/automationaccounts?tabs=bicep&pivots=deployment-language-bicep).
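+
+As an illustrative alternative, a minimal Azure PowerShell sketch of such a redeployment might look like the following. The resource names and target region are hypothetical placeholders; substitute your own values.
+
+```azurepowershell-interactive
+# Create an empty Automation account in the target region. Runbooks,
+# variables, and other configuration still need to be exported from
+# the source account and imported separately (see the steps below).
+New-AzAutomationAccount `
+    -ResourceGroupName "myTargetResourceGroup" `
+    -Name "myTargetAutomationAccount" `
+    -Location "westus3" `
+    -Plan "Basic"
+```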
++
+## Prerequisites
+
+- Identify all resources that Azure Automation depends on.
+- If a system-assigned managed identity isn't being used at the source, map a user-assigned managed identity at the target.
+- If the target Azure Automation account needs to be enabled for private access, associate it with a virtual network for the private endpoint.
+- If the source Azure Automation account is enabled with a private connection, create a private link and configure the private link with DNS at the target.
+- For Azure Automation to communicate with Hybrid Runbook Worker, Azure Update Manager, Change Tracking, Inventory Configuration, and Automation State Configuration, you must enable port 443 for both inbound and outbound internet access.
++
+## Prepare
+
+To get started, export a Resource Manager template. This template contains settings that describe your Automation account.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+2. Select **All resources** and then select your Automation resource.
+3. Select **Export template**.
+4. Choose **Download** in the **Export template** page.
+5. Locate the .zip file that you downloaded from the portal, and unzip that file to a folder of your choice.
+
+ This zip file contains the .json files that include the template and scripts to deploy the template.
+
+## Redeploy
+
+In the diagram below, the red flow lines illustrate redeployment of the target instance along with configuration movement.
+++
+**To deploy the template to create an Automation instance in the target region:**
+
+1. Reconfigure the template parameters for the target.
+
+1. Deploy the template using [ARM](/azure/automation/quickstart-create-automation-account-template), [Portal](/azure/automation/automation-create-standalone-account?tabs=azureportal) or [PowerShell](/powershell/module/az.automation/import-azautomationrunbook?view=azps-11.2.0&preserve-view=true).
+
+1. Use PowerShell to export all associated runbooks from the source Azure Automation instance and import them to the target instance, as shown in the sketch after these steps. Reconfigure the properties as per the target. For more information, see [Export-AzAutomationRunbook](/powershell/module/az.automation/export-azautomationrunbook?view=azps-11.2.0&viewFallbackFrom=azps-9.4.0&preserve-view=true).
+
+1. Associate the relocated Azure Automation instance to the target Log Analytics workspace.
+
+1. Configure the target virtual machines with the desired state configuration from the relocated Azure Automation instance, matching the source configuration.
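+
+The following Azure PowerShell sketch illustrates the runbook export/import in step 3. The account and runbook names are hypothetical placeholders, and it assumes a single published PowerShell runbook; adapt it to your own runbook inventory.
+
+```azurepowershell-interactive
+# Export a published runbook from the source Automation account to a local folder.
+Export-AzAutomationRunbook `
+    -ResourceGroupName "mySourceResourceGroup" `
+    -AutomationAccountName "mySourceAutomationAccount" `
+    -Name "myRunbook" `
+    -Slot "Published" `
+    -OutputFolder "C:\runbooks"
+
+# Import the exported runbook into the target Automation account and publish it.
+Import-AzAutomationRunbook `
+    -ResourceGroupName "myTargetResourceGroup" `
+    -AutomationAccountName "myTargetAutomationAccount" `
+    -Path "C:\runbooks\myRunbook.ps1" `
+    -Type "PowerShell" `
+    -Published
+```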
+
+## Next steps
+
+To learn more about moving resources between regions and disaster recovery in Azure, refer to:
+
+- [Move resources to a new resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md)
+- [Move Azure VMs to another region](../site-recovery/azure-to-azure-tutorial-migrate.md)
operational-excellence Relocation Event Hub Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/relocation-event-hub-cluster.md
+
+ Title: Relocate an Azure Event Hubs dedicated cluster to another region
+description: This article shows you how to relocate an Azure Event Hubs dedicated cluster from the current region to another region.
+++ Last updated : 01/24/2024+++
+ - subject-relocation
+++
+# Relocate an Azure Event Hubs dedicated cluster to another region
+
+This article shows you how to export an Azure Resource Manager template for an existing Event Hubs dedicated cluster and then use the template to create a cluster with the same configuration settings in another region.
+
+If you have other resources such as namespaces and event hubs in the Azure resource group that contains the Event Hubs cluster, you may want to export the template at the resource group level so that all related resources can be moved to the new region in one step. The steps in this article show you how to export an **Event Hubs cluster** to the template. The steps for exporting a **resource group** to the template are similar.
+
+## Prerequisites
+Ensure that the dedicated cluster can be created in the target region. The easiest way to find out is to use the Azure portal to try to [create an Event Hubs dedicated cluster](../event-hubs/event-hubs-dedicated-cluster-create-portal.md). You see the list of regions that are supported at that point in time for creating the cluster.
+
+## Prepare
+To get started, export a Resource Manager template. This template contains settings that describe your Event Hubs dedicated cluster.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+2. Select **All resources** and then select your Event Hubs dedicated cluster.
+3. On the **Event Hubs Cluster** page, select **Export template** in the **Automation** section on the left menu.
+4. Choose **Download** in the **Export template** page.
+
+ :::image type="content" source="../event-hubs/media/move-cluster-across-regions/download-template.png" alt-text="Screenshot showing where to download Resource Manager template" lightbox="../event-hubs/media/move-cluster-across-regions/download-template.png":::
+5. Locate the .zip file that you downloaded from the portal, and unzip that file to a folder of your choice.
+
+ This zip file contains the .json files that include the template and scripts to deploy the template.
++
+## Move
+
+Deploy the template to create an Event Hubs dedicated cluster in the target region.
++
+1. In the Azure portal, select **Create a resource**.
+2. In **Search the Marketplace**, type **template deployment**, and select **Template deployment (deploy using custom templates)**.
+1. On the **Template deployment** page, select **Create**.
+1. Select **Build your own template in the editor**.
+1. Select **Load file**, and then follow the instructions to load the **template.json** file that you downloaded in the last section.
+1. Update the value of the `location` property to point to the new region. To obtain location codes, see [Azure locations](https://azure.microsoft.com/global-infrastructure/locations/). The code for a region is the region name with no spaces, for example, `West US` is equal to `westus`.
+1. Select **Save** to save the template.
+1. On the **Custom deployment** page, follow these steps:
+ 1. Select an Azure **subscription**.
+ 2. Select an existing **resource group** or create one.
+ 3. Select the target **location** or region. If you selected an existing resource group, this setting is read-only.
+ 4. In the **SETTINGS** section, do the following steps:
+ 1. Enter the new **cluster name**.
+
+ :::image type="content" source="../event-hubs/media/move-cluster-across-regions/deploy-template.png" alt-text="Screenshot showing Deploy Resource Manager template":::
+ 5. Select **Review + create** at the bottom of the page.
+ 1. On the **Review + create** page, review settings, and then select **Create**.
+
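+If you prefer to script this deployment rather than use the portal, the following Azure PowerShell sketch deploys the same exported template. The resource group name, region, and file paths are hypothetical placeholders.
+
+```azurepowershell-interactive
+# Create the target resource group and deploy the exported cluster template to it.
+New-AzResourceGroup -Name "myTargetResourceGroup" -Location "westus3"
+New-AzResourceGroupDeployment `
+    -ResourceGroupName "myTargetResourceGroup" `
+    -TemplateFile ".\template.json" `
+    -TemplateParameterFile ".\parameters.json"
+```
+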
+## Discard or clean up
+After the deployment, if you want to start over, you can delete the **target Event Hubs dedicated cluster**, and repeat the steps described in the [Prepare](#prepare) and [Move](#move) sections of this article.
+
+To commit the changes and complete the move of an Event Hubs cluster, delete the **Event Hubs cluster** in the original region.
+
+To delete an Event Hubs cluster (source or target) by using the Azure portal:
+
+1. In the search window at the top of Azure portal, type **Event Hubs Clusters**, and select **Event Hubs Clusters** from search results. You see the Event Hubs cluster in a list.
+2. Select the cluster to delete, and select **Delete** from the toolbar.
+3. On the **Delete Cluster** page, confirm the deletion by typing the **cluster name**, and then select **Delete**.
+
+## Next steps
+
+In this tutorial, you learned how to move an Event Hubs dedicated cluster from one region to another.
+
+See the [Move Event Hubs namespaces across regions](relocation-event-hub.md) article for instructions on moving a namespace from one region to another region.
+
+To learn more about moving resources between regions and disaster recovery in Azure, refer to:
+
+- [Move resources to a new resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md)
+- [Move Azure VMs to another region](../site-recovery/azure-to-azure-tutorial-migrate.md)
operational-excellence Relocation Event Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/relocation-event-hub.md
+
+ Title: Relocation guidance in Azure Event Hubs
+description: Learn how to relocate Azure Event Hubs to a new region
+++ Last updated : 01/24/2024+++
+ - subject-relocation
++
+# Relocate Azure Event Hubs to another region
++
+This article shows you how to copy an Event Hubs namespace and configuration settings to another region.
+
+If you have other resources in the Azure resource group that contains the Event Hubs namespace, you may want to export the template at the resource group level so that all related resources can be moved to the new region in one step. To learn how to export a **resource group** to the template, see [Move resources across regions (from resource group)](/azure/resource-mover/move-region-within-resource-group).
+
+
+## Prerequisites
+
+- Ensure that the services and features that your account uses are supported in the target region.
+
+- If you have the **capture feature** enabled for event hubs in the namespace, move [Azure Storage or Azure Data Lake Store Gen 2](../storage/common/storage-account-move.md) accounts before moving the Event Hubs namespace. You can also move the resource group that contains both the Storage and Event Hubs namespaces to the other region by following steps similar to the ones described in this article.
+
+- If the Event Hubs namespace is in an **Event Hubs cluster**, [move the dedicated cluster](../event-hubs/move-cluster-across-regions.md) to the **target region** before you go through steps in this article. You can also use the [quickstart template on GitHub](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.eventhub/eventhubs-create-cluster-namespace-eventhub/) to create an Event Hubs cluster. In the template, remove the namespace portion of the JSON to create only the cluster.
+
+- Identify all resource dependencies. Depending on how you've deployed Event Hubs, the following services *may* need deployment in the target region:
+
+ - [Public IP](/azure/virtual-network/move-across-regions-publicip-portal)
+ - [Azure Private Link Service](./relocation-private-link.md)
+ - [Virtual Network](./relocation-virtual-network.md)
+ - Event Hub Namespace
+ - [Event Hub Cluster](./relocation-event-hub-cluster.md)
+ - [Storage Account](./relocation-storage-account.md)
+ >[!TIP]
+ >When Capture is enabled, you can either relocate a Storage Account from the source or use an existing one in the target region.
+
+- Identify all dependent resources. Event Hubs is a messaging system that lets applications publish and subscribe to messages. Consider whether or not your application at the target requires messaging support for the same set of dependent services that it had at the source.
+
+## Prepare
+
+To get started, export a Resource Manager template. This template contains settings that describe your Event Hubs namespace.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+2. Select **All resources** and then select your Event Hubs namespace.
+3. On the **Event Hubs Namespace** page, select **Export template** under **Automation** in the left menu.
+4. Choose **Download** in the **Export template** page.
+
+ ![Screenshot showing where to download Resource Manager template](../event-hubs/media/move-across-regions/download-template.png)
+5. Locate the .zip file that you downloaded from the portal, and unzip that file to a folder of your choice.
+
+ This zip file contains the .json files that include the template and scripts to deploy the template.
++
+## Redeploy
+
+Deploy the template to create an Event Hubs namespace in the target region.
++
+1. In the Azure portal, select **Create a resource**.
+2. In **Search the Marketplace**, type **template deployment**, and select **Template deployment (deploy using custom templates)**.
+1. Select **Build your own template in the editor**.
+1. Select **Load file**, and then follow the instructions to load the **template.json** file that you downloaded in the last section.
+1. Update the value of the `location` property to point to the new region. To obtain location codes, see [Azure locations](https://azure.microsoft.com/global-infrastructure/locations/). The code for a region is the region name with no spaces, for example, `West US` is equal to `westus`.
+1. Select **Save** to save the template.
+1. On the **Custom deployment** page, follow these steps:
+ 1. Select an Azure **subscription**.
+    2. Select an existing **resource group** or create one. If the source namespace was in an Event Hubs cluster, select the resource group that contains the cluster in the target region.
+ 3. Select the target **location** or region. If you selected an existing resource group, this setting is read-only.
+ 4. In the **SETTINGS** section, do the following steps:
+ 1. Enter the new **namespace name**.
+
+ ![Deploy Resource Manager template](../event-hubs//media/move-across-regions/deploy-template.png)
+        2. If your source namespace was in an **Event Hubs cluster**, enter the names of the **resource group** and **Event Hubs cluster** as part of the **external ID**.
+
+ ```
+ /subscriptions/<AZURE SUBSCRIPTION ID>/resourceGroups/<CLUSTER'S RESOURCE GROUP>/providers/Microsoft.EventHub/clusters/<CLUSTER NAME>
+ ```
+        3. If an event hub in your namespace uses a storage account for capturing events, specify the resource group name and the storage account in the `StorageAccounts_<original storage account name>_external` field.
+
+ ```
+ /subscriptions/0000000000-0000-0000-0000-0000000000000/resourceGroups/<STORAGE'S RESOURCE GROUP>/providers/Microsoft.Storage/storageAccounts/<STORAGE ACCOUNT NAME>
+ ```
+ 5. Select **Review + create** at the bottom of the page.
+ 1. On the **Review + create** page, review settings, and then select **Create**.
+
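+You can also script this deployment with Azure PowerShell. In the following sketch, the parameter name `externalId` is a hypothetical placeholder; use the external ID parameter name that your exported template actually defines, along with your own resource names and paths.
+
+```azurepowershell-interactive
+# Deploy the exported namespace template, passing the target cluster's
+# external ID as a template parameter (only needed if the namespace
+# lives in an Event Hubs cluster).
+$params = @{
+    externalId = "/subscriptions/<subscription-id>/resourceGroups/<cluster-resource-group>/providers/Microsoft.EventHub/clusters/<cluster-name>"
+}
+New-AzResourceGroupDeployment `
+    -ResourceGroupName "myTargetResourceGroup" `
+    -TemplateFile ".\template.json" `
+    -TemplateParameterObject $params
+```
+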
+## Discard or clean up
+After the deployment, if you want to start over, you can delete the **target Event Hubs namespace**, and repeat the steps described in the [Prepare](#prepare) and [Redeploy](#redeploy) sections of this article.
+
+To commit the changes and complete the move of an Event Hubs namespace, delete the **Event Hubs namespace** in the original region. Make sure that you processed all the events in the namespace before deleting the namespace.
+
+To delete an Event Hubs namespace (source or target) by using the Azure portal:
+
+1. In the search window at the top of Azure portal, type **Event Hubs**, and select **Event Hubs** from search results. You see the Event Hubs namespaces in a list.
+2. Select the target namespace to delete, and select **Delete** from the toolbar.
+
+ ![Screenshot showing Delete namespace - button](../event-hubs//media/move-across-regions/delete-namespace-button.png)
+3. On the **Delete Namespace** page, confirm the deletion by typing the **namespace name**, and then select **Delete**.
+
+## Next steps
+
+In this how-to, you learned how to move an Event Hubs namespace from one region to another.
+
+See the [Relocate an Azure Event Hubs dedicated cluster to another region](relocation-event-hub-cluster.md) article for instructions on moving an Event Hubs cluster from one region to another region.
+
+To learn more about moving resources between regions and disaster recovery in Azure, refer to:
+
+- [Move resources to a new resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md)
+- [Move Azure VMs to another region](../site-recovery/azure-to-azure-tutorial-migrate.md)
operational-excellence Relocation Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/relocation-key-vault.md
+
+ Title: Relocate Azure Key Vault to another region
+description: This article offers guidance on moving a key vault to a different region.
+++++ Last updated : 02/29/2024+++
+# Customer intent: As a key vault administrator, I want to move my vault to another region.
++
+# Relocate Azure Key Vault to another region
+
+Azure Key Vault does not allow you to move a key vault from one region to another. You can, however, create a key vault in the new region, manually back up and restore each individual key, secret, or certificate from your existing key vault to the new key vault, and then remove the original key vault.
+
+## Prerequisites
+
+It's critical to understand the implications of this workaround before you attempt to apply it in a production environment.
+
+## Prepare
+
+First, you must create a new key vault in the region to which you wish to move. You can do so through the [Azure portal](/azure/key-vault/general/quick-create-portal), the [Azure CLI](/azure/key-vault/general/quick-create-cli), or [Azure PowerShell](/azure/key-vault/general/quick-create-powershell).
+
+Keep in mind the following concepts:
+
+* Key vault names are globally unique. You can't reuse a vault name.
+* You need to reconfigure your access policies and network configuration settings in the new key vault.
+* You need to reconfigure soft-delete and purge protection in the new key vault.
+* The backup and restore operation won't preserve your autorotation settings. You might need to reconfigure the settings.
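+
+As a minimal illustration, you could create the replacement vault with Azure PowerShell as follows. The vault name, resource group, and region are hypothetical placeholders; remember that vault names must be globally unique.
+
+```azurepowershell-interactive
+# Create the new key vault in the target region with purge protection enabled.
+New-AzKeyVault `
+    -Name "my-target-vault" `
+    -ResourceGroupName "myTargetResourceGroup" `
+    -Location "westus3" `
+    -EnablePurgeProtection
+```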
+
+## Move
+
+Export your keys, secrets, or certificates from your old key vault, and then import them into your new vault.
+
+You can back up each individual secret, key, and certificate in your vault by using the backup command. Your secrets are downloaded as an encrypted blob. For step by step guidance, see [Azure Key Vault backup and restore](/azure/key-vault/general/backup).
+
+Alternatively, you can download certain secret types manually. For example, you can download certificates as a PFX file. This option eliminates the geographical restrictions for some secret types, such as certificates. You can upload the PFX files to any key vault in any region. The secrets are downloaded in a non-password-protected format. You're responsible for securing your secrets during the move.
+
+After you have downloaded your keys, secrets, or certificates, you can restore them to your new key vault.
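+
+For example, a single secret could be moved with the backup and restore cmdlets as follows. The vault and secret names are hypothetical placeholders; in practice, you'd repeat this for every key, secret, and certificate in the vault.
+
+```azurepowershell-interactive
+# Back up one secret from the source vault as an encrypted blob,
+# then restore it into the new vault in the target region.
+Backup-AzKeyVaultSecret -VaultName "my-source-vault" -Name "mySecret" -OutputFile ".\mySecret.blob"
+Restore-AzKeyVaultSecret -VaultName "my-target-vault" -InputFile ".\mySecret.blob"
+```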
+
+Using the backup and restore commands has two limitations:
+
+* You can't back up a key vault in one geography and restore it into another geography. For more information, see [Azure geographies](https://azure.microsoft.com/global-infrastructure/geographies/).
+
+* The backup command backs up all versions of each secret. If you have a secret with a large number of previous versions (more than 10), the request size might exceed the allowed maximum and the operation might fail.
+
+## Verify
+
+Before deleting your old key vault, verify that the new vault contains all of the required keys, secrets, and certificates.
++
+## Next steps
+
+- [Azure Key Vault backup and restore](/azure/key-vault/general/backup)
+- [Moving an Azure Key Vault across resource groups](/azure/key-vault/general/move-resourcegroup)
+- [Moving an Azure Key Vault to another subscription](/azure/key-vault/general/move-subscription)
operational-excellence Relocation Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/relocation-log-analytics.md
+
+ Title: Relocation guidance for Log Analytics workspace
+description: Learn how to relocate Log Analytics workspace to a new region.
++ Last updated : 03/01/2024++++
+ - subject-relocation
+#CustomerIntent: As a cloud architect/engineer, I want to learn how to relocate Log Analytics workspace to another region.
++
+# Relocate Azure Monitor - Log Analytics workspace to another region
+
+A relocation plan for a Log Analytics workspace must also include the relocation of any resources that log data to the workspace.
+
+Log Analytics workspaces don't natively support migrating workspace data and associated devices from one region to another. Instead, you must create a new Log Analytics workspace in the target region and reconfigure the devices and settings in the new workspace.
+
+The diagram below illustrates the relocation pattern for a Log Analytics workspace. The red flow lines represent the redeployment of the target instance along with data movement and updating domains and endpoints.
++
+## Prerequisites
+
+- To export the workspace configuration to a template that can be deployed to another region, you need the [Log Analytics Contributor](../role-based-access-control/built-in-roles.md#log-analytics-contributor) or [Monitoring Contributor](../role-based-access-control/built-in-roles.md#monitoring-contributor) role, or higher.
+
+- Identify all the resources that are currently associated with your workspace, including:
+ - *Connected agents*: Enter **Logs** in your workspace and query a [heartbeat](../azure-monitor/insights/solution-agenthealth.md#azure-monitor-log-records) table to list connected agents.
+ ```kusto
+ Heartbeat
+ | summarize by Computer, Category, OSType, _ResourceId
+ ```
+ - *Diagnostic settings*: Resources can send logs to Azure Diagnostics or dedicated tables in your workspace. Enter **Logs** in your workspace, and run this query for resources that send data to the `AzureDiagnostics` table:
+
+ ```kusto
+ AzureDiagnostics
+ | where TimeGenerated > ago(12h)
+ | summarize by ResourceProvider , ResourceType, Resource
+ | sort by ResourceProvider, ResourceType
+ ```
+
+ Run this query for resources that send data to dedicated tables:
+
+ ```kusto
+ search *
+ | where TimeGenerated > ago(12h)
+ | where isnotnull(_ResourceId)
+ | extend ResourceProvider = split(_ResourceId, '/')[6]
+ | where ResourceProvider !in ('microsoft.compute', 'microsoft.security')
+ | extend ResourceType = split(_ResourceId, '/')[7]
+ | extend Resource = split(_ResourceId, '/')[8]
+ | summarize by tostring(ResourceProvider) , tostring(ResourceType), tostring(Resource)
+ | sort by ResourceProvider, ResourceType
+ ```
+
+ - *Installed solutions*: Select **Legacy solutions** on the workspace navigation pane for a list of installed solutions.
+ - *Data collector API*: Data arriving through a [Data Collector API](../azure-monitor/logs/data-collector-api.md) is stored in custom log tables. For a list of custom log tables, select **Logs** on the workspace navigation pane, and then select **Custom log** on the schema pane.
+ - *Linked services*: Workspaces might have linked services to dependent resources such as an Azure Automation account, a storage account, or a dedicated cluster. Remove linked services from your workspace. Reconfigure them manually in the target workspace.
+ - *Alerts*: To list alerts, select **Alerts** on your workspace navigation pane, and then select **Manage alert rules** on the toolbar. Alerts in workspaces created after June 1, 2019, or in workspaces that were [upgraded from the Log Analytics Alert API to the scheduledQueryRules API](../azure-monitor/alerts/alerts-log-api-switch.md) can be included in the template.
+
+ You can [check if the scheduledQueryRules API is used for alerts in your workspace](../azure-monitor/alerts/alerts-log-api-switch.md#check-switching-status-of-workspace). Alternatively, you can configure alerts manually in the target workspace.
+  - *Query packs*: A workspace can be associated with multiple query packs. To identify query packs in your workspace, select **Logs** on the workspace navigation pane, select **queries** on the left pane, and then select the ellipsis to the right of the search box. A dialog with the selected query packs opens on the right. If your query packs are in the same resource group as the workspace that you're moving, you can include them in this migration.
+- Verify that your Azure subscription allows you to create Log Analytics workspaces in the target region.
+++
+## Prepare
+
+The following procedures show how to prepare the workspace and resources for the move by using a Resource Manager template.
+
+> [!NOTE]
+> Not all resources can be exported through a template. You'll need to configure these separately after the workspace is created in the target region.
++
+1. Sign in to the [Azure portal](https://portal.azure.com), and then select **Resource Groups**.
+1. Find the resource group that contains your workspace and select it.
+1. To view an alert resource, select the **Show hidden types** checkbox.
+1. Select the **Type** filter. Select **Log Analytics workspace**, **Solution**, **SavedSearches**, **microsoft.insights/scheduledqueryrules**, **defaultQueryPack**, and other workspace-related resources that you have (such as an Automation account). Then select **Apply**.
+1. Select the workspace, solutions, saved searches, alerts, query packs, and other workspace-related resources that you have (such as an Automation account). Then select **Export template** on the toolbar.
+
+ > [!NOTE]
+ > Microsoft Sentinel can't be exported with a template. You need to [onboard Sentinel](../sentinel/quickstart-onboard.md) to a target workspace.
+
+1. Select **Deploy** on the toolbar to edit and prepare the template for deployment.
+1. Select **Edit parameters** on the toolbar to open the *parameters.json* file in the online editor.
+1. To edit the parameters, change the `value` property under `parameters`. Here's an example:
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "workspaces_name": {
+ "value": "my-workspace-name"
+ },
+ "workspaceResourceId": {
+ "value": "/subscriptions/resource-id/resourceGroups/resource-group-name/providers/Microsoft.OperationalInsights/workspaces/workspace-name"
+ },
+ "alertName": {
+ "value": "my-alert-name"
+ },
+ "querypacks_name": {
+ "value": "my-default-query-pack-name"
+ }
+ }
+ }
+ ```
+
+1. Select **Save** in the editor.
+
+## Edit the template
++
+1. Select **Edit template** on the toolbar to open the *template.json* file in the online editor.
+1. To edit the target region where the Log Analytics workspace will be deployed, change the `location` property under `resources` in the online editor.
+
+ To get region location codes, see [Data residency in Azure](https://azure.microsoft.com/global-infrastructure/locations/). The code for a region is the region name with no spaces. For example, **Central US** should be `centralus`.
+1. Remove linked-services resources (`microsoft.operationalinsights/workspaces/linkedservices`) if they're present in the template. You should reconfigure these resources manually in the target workspace.
+
+ The following example template includes the workspace, saved search, solutions, alerts, and query pack:
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "workspaces_name": {
+ "type": "String"
+ },
+ "workspaceResourceId": {
+ "type": "String"
+ },
+ "alertName": {
+ "type": "String"
+ },
+ "querypacks_name": {
+ "type": "String"
+ }
+ },
+ "variables": {},
+ "resources": [
+ {
+ "type": "microsoft.operationalinsights/workspaces",
+ "apiVersion": "2020-08-01",
+ "name": "[parameters('workspaces_name')]",
+ "location": "france central",
+ "properties": {
+ "sku": {
+ "name": "pergb2018"
+ },
+ "retentionInDays": 30,
+ "features": {
+ "enableLogAccessUsingOnlyResourcePermissions": true
+ },
+ "workspaceCapping": {
+ "dailyQuotaGb": -1
+ },
+ "publicNetworkAccessForIngestion": "Enabled",
+ "publicNetworkAccessForQuery": "Enabled"
+ }
+ },
+ {
+ "type": "Microsoft.OperationalInsights/workspaces/savedSearches",
+ "apiVersion": "2020-08-01",
+ "name": "[concat(parameters('workspaces_name'), '/2b5112ec-5ad0-5eda-80e9-ad98b51d4aba')]",
+ "dependsOn": [
+ "[resourceId('Microsoft.OperationalInsights/workspaces', parameters('workspaces_name'))]"
+ ],
+ "properties": {
+ "category": "VM Monitoring",
+ "displayName": "List all versions of curl in use",
+ "query": "VMProcess\n| where ExecutableName == \"curl\"\n| distinct ProductVersion",
+ "tags": [],
+ "version": 2
+ }
+ },
+ {
+ "type": "Microsoft.OperationsManagement/solutions",
+ "apiVersion": "2015-11-01-preview",
+        "name": "[concat('Updates(', parameters('workspaces_name'), ')')]",
+ "location": "france central",
+ "dependsOn": [
+ "[resourceId('microsoft.operationalinsights/workspaces', parameters('workspaces_name'))]"
+ ],
+ "plan": {
+            "name": "[concat('Updates(', parameters('workspaces_name'), ')')]",
+ "promotionCode": "",
+ "product": "OMSGallery/Updates",
+ "publisher": "Microsoft"
+ },
+ "properties": {
+ "workspaceResourceId": "[resourceId('microsoft.operationalinsights/workspaces', parameters('workspaces_name'))]",
+ "containedResources": [
+ "[concat(resourceId('microsoft.operationalinsights/workspaces', parameters('workspaces_name')), '/views/Updates(', parameters('workspaces_name'), ')')]"
+ ]
+ }
+        },
+ {
+ "type": "Microsoft.OperationsManagement/solutions",
+ "apiVersion": "2015-11-01-preview",
+        "name": "[concat('VMInsights(', parameters('workspaces_name'), ')')]",
+ "location": "france central",
+ "plan": {
+            "name": "[concat('VMInsights(', parameters('workspaces_name'), ')')]",
+ "promotionCode": "",
+ "product": "OMSGallery/VMInsights",
+ "publisher": "Microsoft"
+ },
+ "properties": {
+ "workspaceResourceId": "[resourceId('microsoft.operationalinsights/workspaces', parameters('workspaces_name'))]",
+ "containedResources": [
+ "[concat(resourceId('microsoft.operationalinsights/workspaces', parameters('workspaces_name')), '/views/VMInsights(', parameters('workspaces_name'), ')')]"
+ ]
+ }
+ },
+ {
+ "type": "microsoft.insights/scheduledqueryrules",
+ "apiVersion": "2021-08-01",
+ "name": "[parameters('alertName')]",
+ "location": "france central",
+ "properties": {
+ "displayName": "[parameters('alertName')]",
+ "severity": 3,
+ "enabled": true,
+ "evaluationFrequency": "PT5M",
+ "scopes": [
+ "[parameters('workspaceResourceId')]"
+ ],
+ "windowSize": "PT15M",
+ "criteria": {
+ "allOf": [
+ {
+ "query": "Heartbeat | where computer == 'my computer name'",
+ "timeAggregation": "Count",
+ "operator": "LessThan",
+ "threshold": 14,
+ "failingPeriods": {
+ "numberOfEvaluationPeriods": 1,
+ "minFailingPeriodsToAlert": 1
+ }
+ }
+ ]
+ },
+ "autoMitigate": true,
+ "actions": {}
+ }
+ },
+ {
+ "type": "Microsoft.OperationalInsights/querypacks",
+ "apiVersion": "2019-09-01-preview",
+ "name": "[parameters('querypacks_name')]",
+ "location": "francecentral",
+ "properties": {}
+ },
+ {
+ "type": "Microsoft.OperationalInsights/querypacks/queries",
+ "apiVersion": "2019-09-01-preview",
+ "name": "[concat(parameters('querypacks_name'), '/00000000-0000-0000-0000-000000000000')]",
+ "dependsOn": [
+ "[resourceId('Microsoft.OperationalInsights/querypacks', parameters('querypacks_name'))]"
+ ],
+ "properties": {
+ "displayName": "my-query-name",
+ "body": "my-query-text",
+ "related": {
+ "categories": [],
+ "resourceTypes": [
+ "microsoft.operationalinsights/workspaces"
+ ]
+ },
+ "tags": {
+ "labels": []
+ }
+ }
+ }
+ ]
+ }
+ ```
+
+1. Select **Save** in the online editor.
+
+## Redeploy
+
+1. Select **Subscription** to choose the subscription where the target workspace will be deployed.
+1. Select **Resource group** to choose the resource group where the target workspace will be deployed. You can select **Create new** to create a new resource group for the target workspace.
+1. Verify that **Region** is set to the target location where you want the workspace to be deployed.
+1. Select the **Review + create** button to verify your template.
+1. Select **Create** to deploy the workspace and the selected resource to the target region.
+1. Your workspace, including selected resources, is now deployed in the target region. You can complete the remaining configuration in the workspace to match the functionality of the original workspace.
+ - *Connect agents*: Use any of the available options, including Data Collection Rules, to configure the required agents on virtual machines and virtual machine scale sets and to specify the new target workspace as the destination.
+ - *Diagnostic settings*: Update diagnostic settings in identified resources, with the target workspace as the destination.
+ - *Install solutions*: Some solutions, such as [Microsoft Sentinel](../sentinel/quickstart-onboard.md), require certain onboarding procedures and weren't included in the template. You should onboard them separately to the new workspace.
+ - *Configure the Data Collector API*: Configure Data Collector API instances to send data to the target workspace.
+ - *Configure alert rules*: When alerts aren't exported in the template, you need to configure them manually in the target workspace.
+1. Verify that new data isn't ingested to the original workspace. Run the following query in your original workspace, and observe that there's no ingestion after the migration:
+
+ ```kusto
+ search *
+ | where TimeGenerated > ago(12h)
+ | summarize max(TimeGenerated) by Type
+ ```
+
+After data sources are connected to the target workspace, ingested data is stored in the target workspace. Older data stays in the original workspace and is subject to the retention policy. You can perform a [cross-workspace query](../azure-monitor/logs/cross-workspace-query.md). If both workspaces were assigned the same name, use a qualified name (*subscriptionName/resourceGroup/componentName*) in the workspace reference.
+
+Here's an example for a query across two workspaces that have the same name:
+
+```kusto
+union
+    workspace('<subscription-name1>/<resource-group-name1>/<original-workspace-name>').Update,
+    workspace('<subscription-name2>/<resource-group-name2>/<target-workspace-name>').Update
+| where TimeGenerated >= ago(1h)
+| where UpdateState == "Needed"
+| summarize dcount(Computer) by Classification
+```
+
+## Discard
+
+If you want to discard the source workspace, delete the exported resources or the resource group that contains these resources:
+
+1. Select the target resource group in the Azure portal.
+1. On the **Overview** page:
+
+ - If you created a new resource group for this deployment, select **Delete resource group** on the toolbar to delete the resource group.
+ - If the template was deployed to an existing resource group, select the resources that were deployed with the template, and then select **Delete** on the toolbar to delete selected resources.
+
+## Clean up
+
+While new data is being ingested to your new workspace, older data in the original workspace remains available for query and is subject to the retention policy defined in the workspace. We recommend that you keep the original workspace for as long as you need older data to [query across](../azure-monitor/logs/cross-workspace-query.md) workspaces.
+
+If you no longer need access to older data in the original workspace:
+
+1. Select the original resource group in the Azure portal.
+1. Select any resources that you want to remove, and then select **Delete** on the toolbar.
+
+## Related content
+
+To learn more about moving resources between regions and disaster recovery in Azure, refer to:
+
+- [Migrate Log Analytics to availability zone support](../reliability/migrate-monitor-log-analytics.md)
+
+- [Move resources to a new resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md)
+
+- [Move Azure VMs to another region](../site-recovery/azure-to-azure-tutorial-migrate.md)
operational-excellence Relocation Postgresql Flexible Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/relocation-postgresql-flexible-server.md
+
+ Title: Relocate Azure Database for PostgreSQL to another region
+description: Learn how to relocate an Azure Database for PostgreSQL to another region using Azure services and tools.
+++ Last updated : 02/14/2024+++
+ - subject-relocation
++
+# Relocate Azure Database for PostgreSQL to another region
+
+This article covers relocation guidance for Azure Database for PostgreSQL (Single Server and Flexible Server) across geographies where region pairs aren't available for replication and geo-restore.
+
+To learn how to relocate Azure Cosmos DB for PostgreSQL (formerly called Azure Database for PostgreSQL - Hyperscale (Citus)), see [Read replicas in Azure Cosmos DB for PostgreSQL](/azure/cosmos-db/postgresql/concepts-read-replicas).
+
+For an overview of the region pairs supported by native replication, see [cross-region replication](../postgresql/concepts-read-replicas.md#cross-region-replication).
+
+## Prerequisites
+
+Prerequisites only apply to [redeployment with data](#redeploy-with-data). To move your database without data, you can skip to [Prepare](#prepare).
+
+- To relocate PostgreSQL with data from one region to another, you must have an additional compute resource to run the backup and restore tools. The examples in this guide use an Azure VM running Ubuntu 20.04 LTS. The compute resources must:
+ - Have network access to both the source and the target server, either on a private network or by inclusion in the firewall rules.
+ - Be located in either the source or target region.
+ - Use [Accelerated Networking](/azure/virtual-network/accelerated-networking-overview) (if available).
+ - The database content isn't saved to any intermediate storage; the output of the logical backup tool is sent directly to the target server.
+- Depending on your Azure Database for PostgreSQL instance design, the following dependent resources might need to be deployed and configured in the target region prior to relocation:
+ - [Public IP](/azure/virtual-network/move-across-regions-publicip-portal)
+ - [Azure Private Link](./relocation-private-link.md)
+ - [Virtual Network](./relocation-virtual-network.md)
+ - [Network Peering](/azure/virtual-network/scripts/virtual-network-powershell-sample-peer-two-virtual-networks)
+
+## Prepare
+
+To get started, export a Resource Manager template. This template contains settings that describe your Azure Database for PostgreSQL server.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Select **All resources** and then select your Azure Database for PostgreSQL server.
+1. Select **Export template**.
+1. Choose **Download** in the **Export template** page.
+1. Locate the .zip file you downloaded from the portal and unzip that file to a folder of your choice.
+
+ This zip file contains the .json files that include the template and scripts to deploy the template.
+
+## Redeploy without data
+
+1. Adjust the exported template parameters to match the destination region.
+
+ > [!IMPORTANT]
+   > The target server name must be different from the source server name. You must reconfigure clients to point to the new server.
+
+1. Redeploy the template to the new region. For an example of how to use an ARM template to create an Azure Database for PostgreSQL, see [Quickstart: Use an ARM template to create an Azure Database for PostgreSQL - Flexible Server](/azure/postgresql/flexible-server/quickstart-create-server-arm-template?tabs=portal%2Cazure-portal).
+
+## Redeploy with data
+
+Redeployment with data migration for Azure Database for PostgreSQL is based on logical backup and restore and requires native tools. As a result, you can expect noticeable downtime during restoration.
+
+> [!TIP]
+> You can use the Azure portal to relocate an Azure Database for PostgreSQL - Single Server. To learn how, see [Move an Azure Database for PostgreSQL - Single Server to another region by using the Azure portal](/azure/postgreSQL/single-server/how-to-move-regions-portal).
+
+1. Adjust the exported template parameters to match the destination region.
+ > [!IMPORTANT]
+ > The target server name must be different from the source server name. You must reconfigure clients to point to the new server.
+1. Redeploy the template to the new region. For an example of how to use an ARM template to create an Azure Database for PostgreSQL, see [Quickstart: Use an ARM template to create an Azure Database for PostgreSQL - Flexible Server](/azure/postgresql/flexible-server/quickstart-create-server-arm-template?tabs=portal%2Cazure-portal).
+
+1. On the compute resource provisioned for the migration, install the PostgreSQL client tools for the PostgreSQL version to be migrated. The following example uses PostgreSQL version 13 on an Azure VM that runs Ubuntu 20.04 LTS:
+
+ ```bash
+ sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
+ wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
+ sudo apt-get update
+ sudo apt-get install -y postgresql-client-13
+ ```
+
+ For more information on the installation of PostgreSQL components in Ubuntu, refer to [Linux downloads (Ubuntu)](https://www.postgresql.org/download/linux/ubuntu/).
+
+ For other platforms, go to [PostgreSQL Downloads](https://www.postgresql.org/download/).
+
+1. (Optional) If you created additional roles in the source server, create them in the target server. To get a list of existing roles, use the following query:
+
+ ```sql
+ select *
+ from pg_catalog.pg_roles
+    where rolname not like 'pg_%' and rolname not in ('azuresu', 'azure_pg_admin', 'replication')
+    order by rolname;
+ ```
+
+1. To migrate each database, do the following steps:
+ 1. Stop all database activity on the source server.
+ 1. Replace credentials information, source server, target server, and database name in the following script:
+
+    ```bash
+ export USER=admin_username
+ export PGPASSWORD=admin_password
+ export SOURCE=pgsql-arpp-source.postgres.database.azure.com
+ export TARGET=pgsql-arpp-target.postgres.database.azure.com
+ export DATABASE=database_name
+ pg_dump -h $SOURCE -U $USER --create --exclude-schema=pg_catalog $DATABASE | psql -h $TARGET -U $USER postgres
+ ```
+ 1. To migrate the database, run the script.
+
+ 1. Configure the clients to point to the target server.
+ 1. Perform functional tests on the applications.
+ 1. Ensure that the `ignoreMissingVnetServiceEndpoint` flag is set to `False`, so the IaC fails to deploy the database when the service endpoint isn't configured in the target region.
+
+## Related content
+
+- [Move resources to a new resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md)
+- [Move Azure VMs to another region](../site-recovery/azure-to-azure-tutorial-migrate.md)
operational-excellence Relocation Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/relocation-private-link.md
+
+ Title: Relocate Azure Private Link Service to another region
+description: Learn how to relocate an Azure Private Link Service to a new region
+++ Last updated : 01/31/2024+++
+ - subject-relocation
++
+# Relocate Azure Private Link Service to another region
+
+This article shows you how to relocate [Azure Private Link Service](/azure/private-link/private-link-overview) when moving your workload to another region.
+
+To learn how to reconfigure [private endpoints](/azure/private-link/private-link-overview) for a particular service, see the [appropriate service relocation guide](overview-relocation.md).
++
+## Prepare
+
+Identify all resources that are used by Private Link Service, such as Standard load balancer, virtual machines, virtual network, etc.
+
+## Redeploy
+
+1. Redeploy all resources that are used by Private Link Service.
+
+1. Ensure that a standard load balancer with all dependent resources is relocated to the target region.
+
+1. Create a Private Link Service that references the relocated load balancer, as shown in the sketch after these steps. To create the Private Link Service, you can use the [Azure portal](/azure/private-link/create-private-link-service-portal), [PowerShell](/azure/private-link/create-private-link-service-powershell), or the [Azure CLI](/azure/private-link/create-private-link-service-cli).
+
+ In the load balancer selection process:
+ - Choose the frontend IP configuration where you want to receive the traffic.
+ - Choose a subnet for NAT IP addresses for the Private Link Service.
+ - Choose Private Link Service settings that are the same as the source Private Link Service.
+
+1. Redeploy the private endpoint into the relocated virtual network.
+
+1. Configure your DNS settings by following guidance in [Private DNS zone values](/azure/private-link/private-endpoint-dns?branch=main).
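+
+As an illustration of step 3, creating the Private Link Service with Azure PowerShell might look like the following sketch. All resource names are hypothetical placeholders, and the subnet and standard load balancer are assumed to be relocated already.
+
+```azurepowershell-interactive
+# Look up the relocated virtual network, NAT subnet, and standard load balancer.
+$vnet   = Get-AzVirtualNetwork -Name "myTargetVnet" -ResourceGroupName "myTargetResourceGroup"
+$subnet = $vnet.Subnets | Where-Object { $_.Name -eq "myNatSubnet" }
+$lb     = Get-AzLoadBalancer -Name "myTargetLoadBalancer" -ResourceGroupName "myTargetResourceGroup"
+
+# NAT IP configuration for the Private Link Service.
+$ipConfig = New-AzPrivateLinkServiceIpConfig -Name "ipconfig1" -Subnet $subnet
+
+# Create the Private Link Service against the relocated frontend IP configuration.
+New-AzPrivateLinkService `
+    -Name "myTargetPrivateLinkService" `
+    -ResourceGroupName "myTargetResourceGroup" `
+    -Location "westus3" `
+    -IpConfiguration $ipConfig `
+    -LoadBalancerFrontendIpConfiguration $lb.FrontendIpConfigurations[0]
+```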
+++
+## Next steps
+
+To learn more about moving resources between regions and disaster recovery in Azure, refer to:
+
+- [Move resources to a new resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md)
+- [Move Azure VMs to another region](../site-recovery/azure-to-azure-tutorial-migrate.md)
operational-excellence Relocation Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/relocation-storage-account.md
+
+ Title: Relocate Azure Storage Account to another region
+description: Learn how to relocate Azure Storage Account to another region
+++ Last updated : 01/25/2024+++
+ - subject-relocation
+++
+# Relocate Azure Storage Account to another region
+
+This article shows you how to relocate an Azure Storage account to a new region by creating a copy of your storage account in another region. You also learn how to relocate your data to that account by using AzCopy, or another tool of your choice.
++
+## Prerequisites
+
+- Ensure that the services and features that your account uses are supported in the target region.
+- For preview features, ensure that your subscription is allowlisted for the target region.
+- Depending on your Storage Account deployment, the following dependent resources may need to be deployed and configured in the target region *prior* to relocation:
+
+ - [Virtual Network, Network Security Groups, and User Defined Route](./relocation-virtual-network.md)
+ - [Azure Key Vault](./relocation-key-vault.md)
+ - [Azure Automation](./relocation-automation.md)
+ - [Public IP](/azure/virtual-network/move-across-regions-publicip-portal)
+ - [Azure Private Link Service](./relocation-private-link.md)
+
+## Prepare
+
+To prepare, you must export and then modify a Resource Manager template.
+
+### Export a template
+
+A Resource Manager template contains settings that describe your storage account.
+
+# [Portal](#tab/azure-portal)
+
+To export a template by using Azure portal:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. Select **All resources** and then select your storage account.
+
+3. Select **Automation** > **Export template**.
+
+4. Choose **Download** in the **Export template** blade.
+
+5. Locate the .zip file that you downloaded from the portal, and unzip that file to a folder of your choice.
+
+ This zip file contains the .json files that comprise the template and scripts to deploy the template.
+
+# [PowerShell](#tab/azure-powershell)
+
+To export a template by using PowerShell:
+
+1. Sign in to your Azure subscription with the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) command and follow the on-screen directions:
+
+ ```azurepowershell-interactive
+ Connect-AzAccount
+ ```
+
+2. If your identity is associated with more than one subscription, set your active subscription to the subscription of the storage account that you want to move.
+
+ ```azurepowershell-interactive
+ $context = Get-AzSubscription -SubscriptionId <subscription-id>
+ Set-AzContext $context
+ ```
+
+3. Export the template of your source storage account. These commands save a JSON template to your current directory.
+
+ ```azurepowershell-interactive
+ $resource = Get-AzResource `
+ -ResourceGroupName <resource-group-name> `
+ -ResourceName <storage-account-name> `
+ -ResourceType Microsoft.Storage/storageAccounts
+ Export-AzResourceGroup `
+ -ResourceGroupName <resource-group-name> `
+ -Resource $resource.ResourceId
+ ```
++++
+### Modify the template
+
+Modify the template by changing the storage account name and region.
+
+# [Portal](#tab/azure-portal)
+
+To modify the template by using the Azure portal:
+
+1. In the Azure portal, select **Create a resource**.
+
+2. In **Search the Marketplace**, type **template deployment**, and then press **ENTER**.
+
+3. Select **Template deployment**.
+
+ ![Azure Resource Manager templates library](../storage/common/media/storage-account-move/azure-resource-manager-template-library.png)
+
+4. Select **Create**.
+
+5. Select **Build your own template in the editor**.
+
+6. Select **Load file**, and then follow the instructions to load the **template.json** file that you downloaded in the last section.
+
+7. In the **template.json** file, name the target storage account by setting the default value of the storage account name. This example sets the default value of the storage account name to `mytargetaccount`.
+
+ ```json
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "storageAccounts_mysourceaccount_name": {
+ "defaultValue": "mytargetaccount",
+ "type": "String"
+ }
+    },
+    ```
+
+8. Edit the **location** property in the **template.json** file to the target region. This example sets the target region to `centralus`.
+
+ ```json
+ "resources": [{
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2019-04-01",
+ "name": "[parameters('storageAccounts_mysourceaccount_name')]",
+ "location": "centralus"
+ }]
+ ```
+
+    To obtain region location codes, see [Azure Locations](https://azure.microsoft.com/global-infrastructure/locations/). The code for a region is the region name with no spaces; for example, **Central US** = **centralus**.
+
+# [PowerShell](#tab/azure-powershell)
+
+To modify the template by using PowerShell:
+
+1. In the **template.json** file, name the target storage account by setting the default value of the storage account name. This example sets the default value of the storage account name to `mytargetaccount`.
+
+ ```json
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "storageAccounts_mysourceaccount_name": {
+ "defaultValue": "mytargetaccount",
+ "type": "String"
+ }
+ },
+ ```
+
+2. Edit the **location** property in the **template.json** file to the target region. This example sets the target region to `eastus`.
+
+ ```json
+ "resources": [{
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2019-04-01",
+ "name": "[parameters('storageAccounts_mysourceaccount_name')]",
+ "location": "eastus"
+ }]
+ ```
+
+ You can obtain region codes by running the [Get-AzLocation](/powershell/module/az.resources/get-azlocation) command.
+
+ ```azurepowershell-interactive
+ Get-AzLocation | format-table
+ ```
+++
+## Redeploy
+
+Deploy the template to create a new storage account in the target region.
+
+# [Portal](#tab/azure-portal)
+
+1. Save the **template.json** file.
+
+2. Enter or select the property values:
+
+ - **Subscription**: Select an Azure subscription.
+
+ - **Resource group**: Select **Create new** and give the resource group a name.
+
+ - **Location**: Select an Azure location.
+
+3. Select **I agree to the terms and conditions stated above**, and then select **Purchase**.
+
+# [PowerShell](#tab/azure-powershell)
+
+1. Obtain the ID of the subscription where you want to deploy the target storage account with [Get-AzSubscription](/powershell/module/az.accounts/get-azsubscription):
+
+ ```azurepowershell-interactive
+ Get-AzSubscription
+ ```
+
+2. Use these commands to deploy your template:
+
+ ```azurepowershell-interactive
+ $resourceGroupName = Read-Host -Prompt "Enter the Resource Group name"
+    $location = Read-Host -Prompt "Enter the location (for example, centralus)"
+
+ New-AzResourceGroup -Name $resourceGroupName -Location "$location"
+    New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName -TemplateFile "<path to your local template file>"
+ ```
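+
+3. (Optional) To confirm that the new account was created in the target region, you can use [Get-AzStorageAccount](/powershell/module/az.storage/get-azstorageaccount). This is a minimal sketch; the placeholder values are assumptions:
+
+    ```azurepowershell-interactive
+    # Confirm the target storage account and check its Location property (placeholders are assumptions)
+    Get-AzStorageAccount -ResourceGroupName <target-resource-group-name> -Name <target-storage-account-name>
+    ```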
+++
+> [!TIP]
+> If you receive an error stating that the XML specified isn't syntactically valid, compare the JSON in your template with the schemas described in the [Azure Resource Manager documentation](/azure/templates/microsoft.storage/allversions).
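+
+You can also validate the template before deploying it. As a minimal sketch, assuming the target resource group already exists and the template file is in your current directory, you can run [Test-AzResourceGroupDeployment](/powershell/module/az.resources/test-azresourcegroupdeployment):
+
+```azurepowershell-interactive
+# Validate the exported template against the target resource group before deploying (file name is an assumption)
+Test-AzResourceGroupDeployment -ResourceGroupName <target-resource-group-name> -TemplateFile .\template.json
+```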
+
+### Configure the new storage account
+
+Some features won't export to a template, so you'll have to add them to the new storage account.
+
+The following table lists these features along with guidance for adding them to your new storage account.
+
+| Feature | Guidance |
+|--|--|
+| **Lifecycle management policies** | [Manage the Azure Blob storage lifecycle](../storage/blobs/storage-lifecycle-management-concepts.md) |
+| **Static websites** | [Host a static website in Azure Storage](../storage/blobs/storage-blob-static-website-how-to.md) |
+| **Event subscriptions** | [Reacting to Blob storage events](../storage/blobs/storage-blob-event-overview.md) |
+| **Alerts** | [Create, view, and manage activity log alerts by using Azure Monitor](../azure-monitor/alerts/alerts-activity-log.md) |
+| **Content Delivery Network (CDN)** | [Use Azure CDN to access blobs with custom domains over HTTPS](../storage/blobs/storage-https-custom-domain-cdn.md) |
+
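+For example, you can re-enable static website hosting on the new account with the [Enable-AzStorageStaticWebsite](/powershell/module/az.storage/enable-azstoragestaticwebsite) cmdlet. The following is a minimal sketch; the account name and document names are assumptions:
+
+```azurepowershell-interactive
+# Re-enable static website hosting on the target account (placeholder values are assumptions)
+$account = Get-AzStorageAccount -ResourceGroupName <target-resource-group-name> -Name <target-storage-account-name>
+Enable-AzStorageStaticWebsite -Context $account.Context -IndexDocument "index.html" -ErrorDocument404Path "404.html"
+```
+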
+> [!NOTE]
+> If you set up a CDN for the source storage account, just change the origin of your existing CDN to the primary blob service endpoint (or the primary static website endpoint) of your new account.
+
+### Move data to the new storage account
+
+AzCopy is the preferred tool to move your data over due to its performance optimization. With AzCopy, data is copied directly between storage servers, and so it doesn't use the network bandwidth of your computer. You can run AzCopy at the command line or as part of a custom script. For more information, see [Copy blobs between Azure storage accounts by using AzCopy](/azure/storage/common/storage-use-azcopy-blobs-copy?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json&bc=%2Fazure%2Fstorage%2Fblobs%2Fbreadcrumb%2Ftoc.json&branch=pr-en-us-259662).
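+
+As a minimal sketch, the following command performs a server-side copy of one container from the source account to the target account; the account names, container name, and SAS tokens are placeholders you must supply:
+
+```azurepowershell-interactive
+# Server-side copy of a container between accounts (placeholder values are assumptions)
+azcopy copy 'https://<source-account>.blob.core.windows.net/<container>?<SAS-token>' 'https://<target-account>.blob.core.windows.net/<container>?<SAS-token>' --recursive
+```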
+
+You can also use Azure Data Factory to move your data over. To learn how to use Data Factory to relocate your data, see one of the following guides:
+
+ - [Copy data to or from Azure Blob storage by using Azure Data Factory](/azure/data-factory/connector-azure-blob-storage)
+ - [Copy data to or from Azure Data Lake Storage Gen2 using Azure Data Factory](/azure/data-factory/connector-azure-data-lake-storage)
+ - [Copy data from or to Azure Files by using Azure Data Factory](/azure/data-factory/connector-azure-file-storage)
+ - [Copy data to and from Azure Table storage by using Azure Data Factory](/azure/data-factory/connector-azure-table-storage)
+++
+## Discard or clean up
+
+After the deployment, if you want to start over, you can delete the target storage account, and repeat the steps described in the [Prepare](#prepare) and [Redeploy](#redeploy) sections of this article.
+
+To commit the changes and complete the move of a storage account, delete the source storage account.
+
+# [Portal](#tab/azure-portal)
+
+To remove a storage account by using the Azure portal:
+
+1. In the Azure portal, expand the menu on the left side to open the menu of services, and choose **Storage accounts** to display the list of your storage accounts.
+
+2. Locate the target storage account to delete, and select the **More** button (**...**) on the right side of the listing.
+
+3. Select **Delete**, and confirm.
+
+# [PowerShell](#tab/azure-powershell)
+
+To remove a storage account, such as the target account when you discard your changes or the source account when you commit the move, use the [Remove-AzStorageAccount](/powershell/module/az.storage/remove-azstorageaccount) command:
+
+```powershell
+Remove-AzStorageAccount -ResourceGroupName $resourceGroup -AccountName $storageAccount
+```
+++
+## Next steps
+
+To learn more about moving resources between regions and disaster recovery in Azure, refer to:
+
+- [Move resources to a new resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md)
+- [Move Azure VMs to another region](../site-recovery/azure-to-azure-tutorial-migrate.md)
operational-excellence Relocation Virtual Network Nsg https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/relocation-virtual-network-nsg.md
+
+ Title: Relocate Azure NSG to another region
+description: Learn how to use ARM templates to relocate Azure network security group (NSG) to another region
++ Last updated : 03/01/2024+++
+ - subject-relocation
+ - devx-track-arm-template
+++
+# Relocate Azure network security group (NSG) to another region
+
+This article shows you how to relocate an NSG to a new region by copying the source NSG's configuration and security rules to the target region.
++
+## Prerequisites
+
+- Make sure that the Azure network security group is in the source Azure region.
+
+- Plan to associate the new NSG with resources in the target region after the relocation.
+
+- To export an NSG configuration and deploy a template to create an NSG in another region, you'll need the Network Contributor role or higher.
+
+- Identify the source networking layout and all the resources that you're currently using. This layout includes but isn't limited to load balancers, public IPs, and virtual networks. A PowerShell sketch for listing the resources that reference the source NSG follows this list.
+
+- Verify that your Azure subscription allows you to create NSGs in the target region. If necessary, contact support to enable the required quota.
+
+- Make sure that your subscription has enough resources to support the addition of NSGs for this process. See [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md#networking-limits).
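+
+To help identify the resources that currently reference the source NSG, you can start from a short PowerShell sketch such as the following; the placeholder names are assumptions:
+
+```azurepowershell-interactive
+# List the subnets and network interfaces that reference the source NSG (placeholders are assumptions)
+$nsg = Get-AzNetworkSecurityGroup -Name <source-nsg-name> -ResourceGroupName <source-resource-group-name>
+$nsg.Subnets.Id
+$nsg.NetworkInterfaces.Id
+```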
++
+## Prepare
+
+The following steps show how to export the NSG configuration and security rules to a Resource Manager template, and how to modify that template so that you can deploy the NSG to the target region.
++
+### Export and modify a template
+
+# [Portal](#tab/azure-portal)
+
+To export and modify a template by using the Azure portal:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. Select **All resources** and then select your network security group.
+
+3. Select **Automation** > **Export template**.
+
+4. Choose **Deploy** in the **Export template** blade.
+
+5. Select **TEMPLATE** > **Edit parameters** to open the **parameters.json** file in the online editor.
+
+6. To edit the parameter of the NSG name, change the **value** property under **parameters**:
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "networkSecurityGroups_myVM1_nsg_name": {
+ "value": "<target-nsg-name>"
+ }
+ }
+ }
+ ```
+
+7. Change the source NSG value in the editor to a name of your choice for the target NSG. Ensure you enclose the name in quotes.
+
+8. Select **Save** in the editor.
+
+9. Select **TEMPLATE** > **Edit template** to open the **template.json** file in the online editor.
+
+10. To edit the target region where the NSG configuration and security rules will be moved, change the **location** property under **resources** in the online editor:
+
+ ```json
+ "resources": [
+ {
+ "type": "Microsoft.Network/networkSecurityGroups",
+ "apiVersion": "2019-06-01",
+ "name": "[parameters('networkSecurityGroups_myVM1_nsg_name')]",
+ "location": "<target-region>",
+ "properties": {
+ "provisioningState": "Succeeded",
+ "resourceGuid": "2c846acf-58c8-416d-be97-ccd00a4ccd78",
+ }
+ }
+ ]
+
+ ```
+
+11. To obtain region location codes, see [Azure Locations](https://azure.microsoft.com/global-infrastructure/locations/). The code for a region is the region name with no spaces; for example, **Central US** = **centralus**.
+
+12. (Optional) You can also change other parameters in the template, depending on your requirements:
+
+ * **Security rules** - You can edit which rules are deployed into the target NSG by adding or removing rules to the **securityRules** section in the **template.json** file:
+
+ ```json
+ "resources": [
+ {
+ "type": "Microsoft.Network/networkSecurityGroups",
+ "apiVersion": "2019-06-01",
+ "name": "[parameters('networkSecurityGroups_myVM1_nsg_name')]",
+ "location": "<target-region>",
+ "properties": {
+ "provisioningState": "Succeeded",
+ "resourceGuid": "2c846acf-58c8-416d-be97-ccd00a4ccd78",
+ "securityRules": [
+ {
+ "name": "RDP",
+ "etag": "W/\"c630c458-6b52-4202-8fd7-172b7ab49cf5\"",
+ "properties": {
+ "provisioningState": "Succeeded",
+ "protocol": "TCP",
+ "sourcePortRange": "*",
+ "destinationPortRange": "3389",
+ "sourceAddressPrefix": "*",
+ "destinationAddressPrefix": "*",
+ "access": "Allow",
+ "priority": 300,
+ "direction": "Inbound",
+ "sourcePortRanges": [],
+ "destinationPortRanges": [],
+ "sourceAddressPrefixes": [],
+ "destinationAddressPrefixes": []
+ }
+             }
+             ]
+ }
+ ```
+
+ To complete the addition or the removal of the rules in the target NSG, you must also edit the custom rule types at the end of the **template.json** file in the format of the example below:
+
+ ```json
+ {
+ "type": "Microsoft.Network/networkSecurityGroups/securityRules",
+ "apiVersion": "2019-06-01",
+ "name": "[concat(parameters('networkSecurityGroups_myVM1_nsg_name'), '/Port_80')]",
+ "dependsOn": [
+ "[resourceId('Microsoft.Network/networkSecurityGroups', parameters('networkSecurityGroups_myVM1_nsg_name'))]"
+ ],
+ "properties": {
+ "provisioningState": "Succeeded",
+ "protocol": "*",
+ "sourcePortRange": "*",
+ "destinationPortRange": "80",
+ "sourceAddressPrefix": "*",
+ "destinationAddressPrefix": "*",
+ "access": "Allow",
+ "priority": 310,
+ "direction": "Inbound",
+ "sourcePortRanges": [],
+ "destinationPortRanges": [],
+ "sourceAddressPrefixes": [],
+ "destinationAddressPrefixes": []
+ }
+ ```
+
+13. Select **Save** in the online editor.
++
+# [PowerShell](#tab/azure-powershell)
+
+To export and modify a template by using PowerShell:
+
+1. Sign in to your Azure subscription with the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) command and follow the on-screen directions:
+
+ ```azurepowershell-interactive
+ Connect-AzAccount
+ ```
+
+2. Obtain the resource ID of the NSG you want to move to the target region and place it in a variable using [Get-AzNetworkSecurityGroup](/powershell/module/az.network/get-aznetworksecuritygroup):
+
+ ```azurepowershell-interactive
+ $sourceNSGID = (Get-AzNetworkSecurityGroup -Name <source-nsg-name> -ResourceGroupName <source-resource-group-name>).Id
+
+ ```
+3. Export the source NSG to a .json file in the directory where you execute the command [Export-AzResourceGroup](/powershell/module/az.resources/export-azresourcegroup):
+
+ ```azurepowershell-interactive
+ Export-AzResourceGroup -ResourceGroupName <source-resource-group-name> -Resource $sourceNSGID -IncludeParameterDefaultValue
+ ```
+
+4. The downloaded file is named after the resource group that the resource was exported from. Locate the file named **\<resource-group-name>.json**, and open it in an editor of your choice:
+
+ ```azurepowershell
+ notepad <source-resource-group-name>.json
+ ```
+
+5. To edit the parameter of the NSG name, change the **defaultValue** property of the source NSG name to the name of your target NSG. Ensure that the name is enclosed in quotes:
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "networkSecurityGroups_myVM1_nsg_name": {
+ "defaultValue": "<target-nsg-name>",
+ "type": "String"
+ }
+ }
+
+ ```
++
+6. To edit the target region where the NSG configuration and security rules will be moved, change the **location** property under **resources**:
+
+ ```json
+ "resources": [
+ {
+ "type": "Microsoft.Network/networkSecurityGroups",
+ "apiVersion": "2019-06-01",
+ "name": "[parameters('networkSecurityGroups_myVM1_nsg_name')]",
+ "location": "<target-region>",
+ "properties": {
+ "provisioningState": "Succeeded",
+ "resourceGuid": "2c846acf-58c8-416d-be97-ccd00a4ccd78",
+ }
+ }
+ ```
+
+7. To obtain region location codes, you can use the Azure PowerShell cmdlet [Get-AzLocation](/powershell/module/az.resources/get-azlocation) by running the following command:
+
+ ```azurepowershell-interactive
+
+ Get-AzLocation | format-table
+
+ ```
+8. (Optional) You can also change other parameters in the **\<resource-group-name>.json** file, depending on your requirements:
+
+ * **Security rules** - You can edit which rules are deployed into the target NSG by adding or removing rules to the **securityRules** section in the **\<resource-group-name>.json** file:
+
+ ```json
+ "resources": [
+ {
+ "type": "Microsoft.Network/networkSecurityGroups",
+ "apiVersion": "2019-06-01",
+ "name": "[parameters('networkSecurityGroups_myVM1_nsg_name')]",
+            "location": "<target-region>",
+ "properties": {
+ "provisioningState": "Succeeded",
+ "resourceGuid": "2c846acf-58c8-416d-be97-ccd00a4ccd78",
+ "securityRules": [
+ {
+ "name": "RDP",
+ "etag": "W/\"c630c458-6b52-4202-8fd7-172b7ab49cf5\"",
+ "properties": {
+ "provisioningState": "Succeeded",
+ "protocol": "TCP",
+ "sourcePortRange": "*",
+ "destinationPortRange": "3389",
+ "sourceAddressPrefix": "*",
+ "destinationAddressPrefix": "*",
+ "access": "Allow",
+ "priority": 300,
+ "direction": "Inbound",
+ "sourcePortRanges": [],
+ "destinationPortRanges": [],
+ "sourceAddressPrefixes": [],
+ "destinationAddressPrefixes": []
+                     }
+                 }
+             ]
+ }
+
+ ```
+
+ To complete the addition or the removal of the rules in the target NSG, you must also edit the custom rule types at the end of the **\<resource-group-name>.json** file in the format of the example below:
+
+ ```json
+ {
+ "type": "Microsoft.Network/networkSecurityGroups/securityRules",
+ "apiVersion": "2019-06-01",
+ "name": "[concat(parameters('networkSecurityGroups_myVM1_nsg_name'), '/Port_80')]",
+ "dependsOn": [
+ "[resourceId('Microsoft.Network/networkSecurityGroups', parameters('networkSecurityGroups_myVM1_nsg_name'))]"
+ ],
+ "properties": {
+ "provisioningState": "Succeeded",
+ "protocol": "*",
+ "sourcePortRange": "*",
+ "destinationPortRange": "80",
+ "sourceAddressPrefix": "*",
+ "destinationAddressPrefix": "*",
+ "access": "Allow",
+ "priority": 310,
+ "direction": "Inbound",
+ "sourcePortRanges": [],
+ "destinationPortRanges": [],
+ "sourceAddressPrefixes": [],
+ "destinationAddressPrefixes": []
+ }
+ ```
+
+9. Save the **\<resource-group-name>.json** file.
++++
+## Redeploy
+
+# [Portal](#tab/azure-portal)
++
+1. Select **BASICS** > **Subscription** to choose the subscription where the target NSG will be deployed.
+
+1. Select **BASICS** > **Resource group** to choose the resource group where the target NSG will be deployed. You can click **Create new** to create a new resource group for the target NSG. Ensure the name isn't the same as the source resource group of the existing NSG.
+
+1. Verify that **BASICS** > **Location** is set to the target location where you want the NSG to be deployed.
+
+1. Verify under **SETTINGS** that the name matches the name that you entered in the parameters editor above.
+
+1. Check the box under **TERMS AND CONDITIONS**.
+
+1. Select the **Purchase** button to deploy the target network security group.
+
+# [PowerShell](#tab/azure-powershell)
+
+1. Create a resource group in the target region for the target NSG to be deployed using [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup):
+
+ ```azurepowershell-interactive
+ New-AzResourceGroup -Name <target-resource-group-name> -location <target-region>
+ ```
+
+1. Deploy the edited **\<resource-group-name>.json** file to the resource group created in the previous step using [New-AzResourceGroupDeployment](/powershell/module/az.resources/new-azresourcegroupdeployment):
+
+ ```azurepowershell-interactive
+
+ New-AzResourceGroupDeployment -ResourceGroupName <target-resource-group-name> -TemplateFile <source-resource-group-name>.json
+
+ ```
+
+1. To verify the resources were created in the target region, use [Get-AzResourceGroup](/powershell/module/az.resources/get-azresourcegroup) and [Get-AzNetworkSecurityGroup](/powershell/module/az.network/get-aznetworksecuritygroup):
+
+ ```azurepowershell-interactive
+
+ Get-AzResourceGroup -Name <target-resource-group-name>
+
+ ```
+
+ ```azurepowershell-interactive
+
+ Get-AzNetworkSecurityGroup -Name <target-nsg-name> -ResourceGroupName <target-resource-group-name>
+
+ ```
+++
+## Discard
+
+# [Portal](#tab/azure-portal)
+
+If you wish to discard the target NSG, delete the resource group that contains it. To do so, select the resource group from your dashboard in the portal and select **Delete** at the top of the overview page.
+
+# [PowerShell](#tab/azure-powershell)
+
+After the deployment, if you wish to start over or discard the NSG in the target region, delete the resource group that was created in the target region; doing so also deletes the NSG that it contains. To remove the resource group, use [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup):
+
+```azurepowershell-interactive
+
+Remove-AzResourceGroup -Name <target-resource-group-name>
+
+```
++
+## Clean up
+
+# [Portal](#tab/azure-portal)
+
+To commit the changes and complete the move of the NSG, delete the source NSG or resource group. To do so, select the network security group or resource group from your dashboard in the portal and select **Delete** at the top of each page.
+
+# [PowerShell](#tab/azure-powershell)
+
+To commit the changes and complete the move of the NSG, delete the source NSG or its resource group by using [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) or [Remove-AzNetworkSecurityGroup](/powershell/module/az.network/remove-aznetworksecuritygroup):
+
+```azurepowershell-interactive
+
+Remove-AzResourceGroup -Name <source-resource-group-name>
+
+```
+
+``` azurepowershell-interactive
+
+Remove-AzNetworkSecurityGroup -Name <source-nsg-name> -ResourceGroupName <source-resource-group-name>
+
+```
+++
+## Next steps
+
+In this tutorial, you moved an Azure network security group from one region to another and cleaned up the source resources. To learn more about moving resources between regions and disaster recovery in Azure, refer to:
++
+- [Move resources to a new resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md)
+- [Move Azure VMs to another region](../site-recovery/azure-to-azure-tutorial-migrate.md)
+++
operational-excellence Relocation Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/relocation-virtual-network.md
+
+ Title: Relocate Azure Virtual Network to another region
+description: Learn how to relocate Azure Virtual Network to another region
+++ Last updated : 01/25/2023+++
+ - subject-relocation
+++
+# Relocate Azure Virtual Network to another region
+
+This article shows you how to relocate a virtual network to a new region by redeploying the virtual network. Redeployment supports both independent relocation of multiple workloads and private IP address range change in the target region. It's recommended that you use a Resource Manager template to relocate your virtual network.
+
+You can also choose to move your virtual network with Azure Resource Mover. However, if you do, make sure that you understand the following considerations:
+
+**If you choose to use Resource Mover:**
+
+- All workloads in a virtual network must be relocated together.
+
+- A relocation using Azure Resource Mover doesn't support private IP address range change.
+
+- Azure Resource Mover can move resources such as Network Security Group and User Defined Route along with the virtual network. However, it's recommended that you move them separately. Moving them all together can lead to failure of the Validate dependencies stage.
+
+- Resource Mover can't directly move NAT gateway instances from one region to another. To work around this limitation, see Create and configure NAT gateway after moving resources to another region.
+
+- Azure Resource Mover doesn't support any changes to the address space during the relocation process. As a result, when the movement completes, both source and target have the same, and thus conflicting, address space. It's recommended that you manually update the address space as soon as relocation completes.
+
+- Virtual Network Peering must be reconfigured after the relocation. It's recommended that you move the peering virtual network either before or with the source virtual network.
+
+- While performing the Initiate move steps with Azure Resource Mover, resources may be temporarily unavailable.
+
+To learn how to move your virtual network using Resource Mover, see [Move Azure VMs across regions](/azure/resource-mover/tutorial-move-region-virtual-machines).
+
+## Prerequisites
+
+- Confirm that your virtual network is in the source Azure region.
+
+- To export a virtual network and deploy a template to create a virtual network in another region, you need to have the Network Contributor role or higher.
+
+- Identify the source networking layout and all the resources that you're currently using. This layout includes but isn't limited to load balancers, network security groups (NSGs), and public IPs. A PowerShell sketch for taking this inventory follows this list.
+
+- Verify that your Azure subscription allows you to create virtual networks in the target region. To enable the required quota, contact support.
+
+- Confirm that your subscription has enough resources to support the addition of virtual networks for this process. For more information, see [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md#networking-limits).
+
+- Understand the following considerations:
+
+  - If you enable private IP address range change, multiple workloads in a virtual network can be relocated independently of each other.
+ - The redeployment method supports the option to enable and disable private IP address range change in the target region.
+ - If you don't enable private IP address change in the target region, data migration scenarios that require communication between source and target region can only be established using public endpoints (public IP addresses).
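+
+To help capture the source layout described in the prerequisites, you can start from a short PowerShell sketch such as the following; the placeholder names are assumptions:
+
+```azurepowershell-interactive
+# Inventory the source virtual network's subnets and peerings (placeholders are assumptions)
+$vnet = Get-AzVirtualNetwork -Name <source-virtual-network-name> -ResourceGroupName <source-resource-group-name>
+$vnet.Subnets | Select-Object Name, AddressPrefix
+$vnet.VirtualNetworkPeerings | Select-Object Name, PeeringState
+```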
+
++
+> [!IMPORTANT]
+> Starting July 1, 2021, you won't be able to add new tests in an existing workspace or enable a new workspace with Network performance monitor. You can continue to use the tests created prior to July 1, 2021. To minimize service disruption to your current workloads, migrate your tests from Network performance monitor to the new Connection monitor in Azure Network Watcher before February 29, 2024.
+++
+## Plan
+
+To plan for your relocation of an Azure Virtual Network, you must understand whether you're relocating your virtual network in a connected or disconnected scenario. In a connected scenario, the virtual network has a routed IP connection to an on-premises datacenter using a hub, VPN Gateway, or an ExpressRoute connection. In a disconnected scenario, the virtual network is used by workload components to communicate with each other.
++++
+### Disconnected scenario
+
+| Relocation with no IP Address Change | Relocation with IP Address Change |
+| --|--|
+| No other IP address ranges are needed. | Other IP Address ranges are needed. |
+| No IP Address change for resources after relocation. | IP Address change of resources after relocation |
+| All workloads in a virtual network must be relocated together. | Workload relocation without considering dependencies or partial relocation is possible (Take communication latency into account) |
+| Virtual Network in the source region needs to be disconnected or removed before the Virtual Network in the target region can be connected. | Enable communication shortcuts between source and target region using virtual network peering. |
+| No support for data migration scenarios where you need communication between source and target region. | If communication between source and target region is required in data migration scenarios, you can establish network peering during relocation. |
+
+#### Disconnected relocation with the same IP-address range
+++
+#### Disconnected relocation with a new IP-address range
++
+### Connected Scenario
+
+| Relocation with no IP Address Change | Relocation with IP Address Change |
+|--|--|
+| No other IP address ranges are needed.| Other IP Address ranges are needed. |
+| No IP Address change for resources after relocation. | IP Address change of resources after relocation. |
+| All workloads with dependencies on each other need to be relocated together. | Workload relocation without considering dependencies possible (Take communication latency into account). |
+| No communication between the two virtual networks in the source and target regions is possible. | Possible to enable communication between source and target region using virtual network peering. |
+| Data migration scenarios where communication between source and target region isn't possible, or can only be established through public endpoints. | If communication between source and target region is required in data migration scenarios, you can establish network peering during relocation. |
+
+#### Connected relocation with the same IP-address range
++
+#### Connected relocation with a new IP-address range
++++
+## Prepare
+
+1. Remove any virtual network peers. Virtual network peerings can't be re-created, and they'll fail if they're still present in the template. In the [Redeploy](#redeploy) section, you'll reconfigure peerings at the target virtual network. A PowerShell sketch for removing a peering follows this list.
+
+1. Move the diagnostic storage account that contains Network Watcher NSG logs. To learn how to move a storage account, see [Relocate Azure Storage Account to another region](./relocation-storage-account.md).
+
+1. [Relocate the network security groups (NSGs)](./relocation-virtual-network-nsg.md).
+
+1. Disable [DDoS Protection Plan](/azure/ddos-protection/manage-ddos-protection).
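+
+For the peering removal step in this list, a minimal sketch using [Remove-AzVirtualNetworkPeering](/powershell/module/az.network/remove-azvirtualnetworkpeering); the peering and network names are assumptions:
+
+```azurepowershell-interactive
+# Remove a peering from the source virtual network (placeholders are assumptions)
+Remove-AzVirtualNetworkPeering -Name <peering-name> -VirtualNetworkName <source-virtual-network-name> -ResourceGroupName <source-resource-group-name>
+```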
+++
+### Export and modify a template
+
+# [Portal](#tab/azure-portal)
+
+**To export the virtual network and deploy the target virtual network by using the Azure portal:**
+
+1. Sign in to the [Azure portal](https://portal.azure.com), and then select **Resource Groups**.
+1. Locate the resource group that contains the source virtual network, and then select it.
+1. Select **Automation** > **Export template**.
+1. In the **Export template** pane, select **Deploy**.
+1. To open the *parameters.json* file in your online editor, select **Template** > **Edit parameters**.
+1. To edit the parameter of the virtual network name, change the **value** property under **parameters**:
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "virtualNetworks_myVNET1_name": {
+ "value": "<target-virtual-network-name>"
+ }
+ }
+ }
+ ```
+
+1. In the editor, change the source virtual network name value in the editor to a name that you want for the target virtual network. Be sure to enclose the name in quotation marks.
+
+1. Select **Save** in the editor.
+
+1. To open the *template.json* file in the online editor, select **Template** > **Edit template**.
+
+1. In the online editor, to edit the target region, change the **location** property under **resources**:
+
+ ```json
+ "resources": [
+ {
+ "type": "Microsoft.Network/virtualNetworks",
+ "apiVersion": "2019-06-01",
+ "name": "[parameters('virtualNetworks_myVNET1_name')]",
+ "location": "<target-region>",
+ "properties": {
+ "provisioningState": "Succeeded",
+ "resourceGuid": "6e2652be-35ac-4e68-8c70-621b9ec87dcb",
+ "addressSpace": {
+ "addressPrefixes": [
+ "10.0.0.0/16"
+ ]
+ },
+
+ ```
+
+1. To obtain region location codes, see [Azure Locations](https://azure.microsoft.com/global-infrastructure/locations/). The code for a region is the region name, without spaces (for example, **Central US** = **centralus**).
+
+1. (Optional) You can also change other parameters in the template, depending on your requirements:
+
+ * **Address Space**: Before you save the file, you can alter the address space of the virtual network by modifying the **resources** > **addressSpace** section and changing the **addressPrefixes** property:
+
+ ```json
+ "resources": [
+ {
+ "type": "Microsoft.Network/virtualNetworks",
+ "apiVersion": "2019-06-01",
+ "name": "[parameters('virtualNetworks_myVNET1_name')]",
+            "location": "<target-region>",
+ "properties": {
+ "provisioningState": "Succeeded",
+ "resourceGuid": "6e2652be-35ac-4e68-8c70-621b9ec87dcb",
+ "addressSpace": {
+ "addressPrefixes": [
+ "10.0.0.0/16"
+ ]
+ },
+
+ ```
+
+ * **Subnet**: You can change or add to the subnet name and the subnet address space by changing the template's **subnets** section. You can change the name of the subnet by changing the **name** property. And you can change the subnet address space by changing the **addressPrefix** property:
+
+ ```json
+ "subnets": [
+ {
+ "name": "subnet-1",
+ "etag": "W/\"d9f6e6d6-2c15-4f7c-b01f-bed40f748dea\"",
+ "properties": {
+ "provisioningState": "Succeeded",
+ "addressPrefix": "10.0.0.0/24",
+ "delegations": [],
+ "privateEndpointNetworkPolicies": "Enabled",
+ "privateLinkServiceNetworkPolicies": "Enabled"
+ }
+ },
+ {
+ "name": "GatewaySubnet",
+ "etag": "W/\"d9f6e6d6-2c15-4f7c-b01f-bed40f748dea\"",
+ "properties": {
+ "provisioningState": "Succeeded",
+ "addressPrefix": "10.0.1.0/29",
+ "serviceEndpoints": [],
+ "delegations": [],
+ "privateEndpointNetworkPolicies": "Enabled",
+ "privateLinkServiceNetworkPolicies": "Enabled"
+ }
+ }
+
+ ]
+ ```
+
+ To change the address prefix in the *template.json* file, edit it in two places:
+ - In the code in the preceding section
+ - In the **type** section of the following code.
+
+ Also, change the **addressPrefix** property in the following code to match the **addressPrefix** property in the code in the preceding section.
+
+ ```json
+ "type": "Microsoft.Network/virtualNetworks/subnets",
+ "apiVersion": "2019-06-01",
+ "name": "[concat(parameters('virtualNetworks_myVNET1_name'), '/GatewaySubnet')]",
+ "dependsOn": [
+ "[resourceId('Microsoft.Network/virtualNetworks', parameters('virtualNetworks_myVNET1_name'))]"
+ ],
+ "properties": {
+ "provisioningState": "Succeeded",
+ "addressPrefix": "10.0.1.0/29",
+ "serviceEndpoints": [],
+ "delegations": [],
+ "privateEndpointNetworkPolicies": "Enabled",
+ "privateLinkServiceNetworkPolicies": "Enabled"
+ }
+ },
+ {
+ "type": "Microsoft.Network/virtualNetworks/subnets",
+ "apiVersion": "2019-06-01",
+ "name": "[concat(parameters('virtualNetworks_myVNET1_name'), '/subnet-1')]",
+ "dependsOn": [
+ "[resourceId('Microsoft.Network/virtualNetworks', parameters('virtualNetworks_myVNET1_name'))]"
+ ],
+ "properties": {
+ "provisioningState": "Succeeded",
+ "addressPrefix": "10.0.0.0/24",
+ "delegations": [],
+ "privateEndpointNetworkPolicies": "Enabled",
+ "privateLinkServiceNetworkPolicies": "Enabled"
+ }
+ }
+ ]
+ ```
+
+1. In the online editor, select **Save**.
+
+# [PowerShell](#tab/azure-powershell)
++
+**To export the virtual network and deploy the target virtual network by using PowerShell:**
+
+1. Sign in to your Azure subscription with the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) command, and then follow the on-screen directions:
+
+ ```azurepowershell-interactive
+ Connect-AzAccount
+ ```
+
+1. Obtain the resource ID of the virtual network that you want to move to the target region, and then place it in a variable by using [Get-AzVirtualNetwork](/powershell/module/az.network/get-azvirtualnetwork):
+
+ ```azurepowershell-interactive
+ $sourceVNETID = (Get-AzVirtualNetwork -Name <source-virtual-network-name> -ResourceGroupName <source-resource-group-name>).Id
+ ```
+
+1. Export the source virtual network to a .json file in the directory where you execute the command [Export-AzResourceGroup](/powershell/module/az.resources/export-azresourcegroup):
+
+ ```azurepowershell-interactive
+ Export-AzResourceGroup -ResourceGroupName <source-resource-group-name> -Resource $sourceVNETID -IncludeParameterDefaultValue
+ ```
+
+1. The downloaded file has the same name as the resource group that the resource was exported from. Locate the *\<resource-group-name>.json* file, which you exported with the command, and then open it in your editor:
+
+ ```azurepowershell
+ notepad <source-resource-group-name>.json
+ ```
+
+1. To edit the parameter of the virtual network name, change the **defaultValue** property of the source virtual network name to the name of your target virtual network. Be sure to enclose the name in quotation marks.
+
+ ```json
+    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "virtualNetworks_myVNET1_name": {
+ "defaultValue": "<target-virtual-network-name>",
+ "type": "String"
+ }
+ ```
+
+1. To edit the target region where the virtual network will be moved, change the **location** property under resources:
+
+ ```json
+ "resources": [
+ {
+ "type": "Microsoft.Network/virtualNetworks",
+ "apiVersion": "2019-06-01",
+ "name": "[parameters('virtualNetworks_myVNET1_name')]",
+ "location": "<target-region>",
+ "properties": {
+ "provisioningState": "Succeeded",
+ "resourceGuid": "6e2652be-35ac-4e68-8c70-621b9ec87dcb",
+ "addressSpace": {
+ "addressPrefixes": [
+ "10.0.0.0/16"
+ ]
+ },
+
+ ```
+
+1. To obtain region location codes, you can use the Azure PowerShell cmdlet [Get-AzLocation](/powershell/module/az.resources/get-azlocation) by running the following command:
+
+ ```azurepowershell-interactive
+
+ Get-AzLocation | format-table
+ ```
+
+1. (Optional) You can also change other parameters in the *\<resource-group-name>.json* file, depending on your requirements:
+
+ * **Address Space**: Before you save the file, you can alter the address space of the virtual network by modifying the **resources** > **addressSpace** section and changing the **addressPrefixes** property:
+
+ ```json
+ "resources": [
+ {
+ "type": "Microsoft.Network/virtualNetworks",
+ "apiVersion": "2019-06-01",
+ "name": "[parameters('virtualNetworks_myVNET1_name')]",
+            "location": "<target-region>",
+ "properties": {
+ "provisioningState": "Succeeded",
+ "resourceGuid": "6e2652be-35ac-4e68-8c70-621b9ec87dcb",
+ "addressSpace": {
+ "addressPrefixes": [
+ "10.0.0.0/16"
+ ]
+ },
+ ```
+
+ * **Subnet**: You can change or add to the subnet name and the subnet address space by changing the file's **subnets** section. You can change the name of the subnet by changing the **name** property. And you can change the subnet address space by changing the **addressPrefix** property:
+
+ ```json
+ "subnets": [
+ {
+ "name": "subnet-1",
+ "etag": "W/\"d9f6e6d6-2c15-4f7c-b01f-bed40f748dea\"",
+ "properties": {
+ "provisioningState": "Succeeded",
+ "addressPrefix": "10.0.0.0/24",
+ "delegations": [],
+ "privateEndpointNetworkPolicies": "Enabled",
+ "privateLinkServiceNetworkPolicies": "Enabled"
+ }
+ },
+ {
+ "name": "GatewaySubnet",
+ "etag": "W/\"d9f6e6d6-2c15-4f7c-b01f-bed40f748dea\"",
+ "properties": {
+ "provisioningState": "Succeeded",
+ "addressPrefix": "10.0.1.0/29",
+ "serviceEndpoints": [],
+ "delegations": [],
+ "privateEndpointNetworkPolicies": "Enabled",
+ "privateLinkServiceNetworkPolicies": "Enabled"
+ }
+ }
+
+ ]
+ ```
+
+ To change the address prefix, edit the file in two places: in the code in the preceding section and in the **type** section of the following code. Change the **addressPrefix** property in the following code to match the **addressPrefix** property in the code in the preceding section.
+
+ ```json
+ "type": "Microsoft.Network/virtualNetworks/subnets",
+ "apiVersion": "2019-06-01",
+ "name": "[concat(parameters('virtualNetworks_myVNET1_name'), '/GatewaySubnet')]",
+ "dependsOn": [
+ "[resourceId('Microsoft.Network/virtualNetworks', parameters('virtualNetworks_myVNET1_name'))]"
+ ],
+ "properties": {
+ "provisioningState": "Succeeded",
+ "addressPrefix": "10.0.1.0/29",
+ "serviceEndpoints": [],
+ "delegations": [],
+ "privateEndpointNetworkPolicies": "Enabled",
+ "privateLinkServiceNetworkPolicies": "Enabled"
+ }
+ },
+ {
+ "type": "Microsoft.Network/virtualNetworks/subnets",
+ "apiVersion": "2019-06-01",
+ "name": "[concat(parameters('virtualNetworks_myVNET1_name'), '/subnet-1')]",
+ "dependsOn": [
+ "[resourceId('Microsoft.Network/virtualNetworks', parameters('virtualNetworks_myVNET1_name'))]"
+ ],
+ "properties": {
+ "provisioningState": "Succeeded",
+ "addressPrefix": "10.0.0.0/24",
+ "delegations": [],
+ "privateEndpointNetworkPolicies": "Enabled",
+ "privateLinkServiceNetworkPolicies": "Enabled"
+ }
+ }
+ ]
+ ```
+
+1. Save the *\<resource-group-name>.json* file.
+++
+## Redeploy
+
+# [Portal](#tab/azure-portal)
++
+1. To choose the subscription where the target virtual network will be deployed, select **Basics** > **Subscription**.
+
+1. To choose the resource group where the target virtual network will be deployed, select **Basics** > **Resource group**.
+
+ If you need to create a new resource group for the target virtual network, select **Create new**. Make sure that the name isn't the same as the source resource group name in the existing virtual network.
+
+1. Verify that **Basics** > **Location** is set to the target location where you want the virtual network to be deployed.
+
+1. Under **Settings**, verify that the name matches the name that you entered previously in the parameters editor.
+
+1. Select the **Terms and Conditions** check box.
+
+1. To deploy the target virtual network, select **Purchase**.
+
+1. [Reconfigure Virtual Network Peering](/azure/virtual-network/virtual-network-manage-peering).
+
+1. Enable Connection Monitor by following the guidelines in [Migrate to Connection monitor from Network performance monitor](/azure/network-watcher/migrate-to-connection-monitor-from-network-performance-monitor).
+
+1. Enable the DDoS Protection Plan. After the move, the auto-tuned policy thresholds for all the protected public IP addresses in the virtual network are reset.
+
+# [PowerShell](#tab/azure-powershell)
++
+1. Create a resource group in the target region for the target virtual network to be deployed by using [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup):
+
+ ```azurepowershell-interactive
+ New-AzResourceGroup -Name <target-resource-group-name> -location <target-region>
+ ```
+
+1. Deploy the edited *\<resource-group-name>.json* file to the resource group that you created in the previous step by using [New-AzResourceGroupDeployment](/powershell/module/az.resources/new-azresourcegroupdeployment):
+
+ ```azurepowershell-interactive
+
+ New-AzResourceGroupDeployment -ResourceGroupName <target-resource-group-name> -TemplateFile <source-resource-group-name>.json
+ ```
+
+1. To verify that the resources were created in the target region, use [Get-AzResourceGroup](/powershell/module/az.resources/get-azresourcegroup) and [Get-AzVirtualNetwork](/powershell/module/az.network/get-azvirtualnetwork):
+
+ ```azurepowershell-interactive
+
+ Get-AzResourceGroup -Name <target-resource-group-name>
+ ```
+
+ ```azurepowershell-interactive
+
+ Get-AzVirtualNetwork -Name <target-virtual-network-name> -ResourceGroupName <target-resource-group-name>
+ ```
+
+1. [Reconfigure Virtual Network Peering](/azure/virtual-network/scripts/virtual-network-powershell-sample-peer-two-virtual-networks). A sketch for re-creating a peering appears after this list.
+
+1. Enable Connection Monitor by following the guidelines in [Migrate to Connection monitor from Network performance monitor](/azure/network-watcher/migrate-to-connection-monitor-from-network-performance-monitor).
+
+1. Enable the DDoS Protection Plan. After the move, the auto-tuned policy thresholds for all the protected public IP addresses in the virtual network are reset.
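+
+For the peering step referenced in this list, a minimal sketch using [Add-AzVirtualNetworkPeering](/powershell/module/az.network/add-azvirtualnetworkpeering); the names and the remote network resource ID are assumptions:
+
+```azurepowershell-interactive
+# Re-create a peering from the target virtual network to a remote virtual network (placeholders are assumptions)
+$vnet = Get-AzVirtualNetwork -Name <target-virtual-network-name> -ResourceGroupName <target-resource-group-name>
+Add-AzVirtualNetworkPeering -Name <peering-name> -VirtualNetwork $vnet -RemoteVirtualNetworkId <remote-virtual-network-resource-id>
+```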
++
+## Discard
+
+# [Portal](#tab/azure-portal)
+
+To discard the target virtual network, you delete the resource group that contains the target virtual network. To do so:
+1. On the Azure portal dashboard, select the resource group.
+1. At the top of the **Overview** pane, select **Delete**.
+
+# [PowerShell](#tab/azure-powershell)
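+
+To discard the target virtual network, delete the resource group that contains it; deleting the resource group also deletes the virtual network. As a minimal sketch, use [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup); the resource group name is an assumption:
+
+```azurepowershell-interactive
+# Deleting the target resource group also deletes the virtual network it contains
+Remove-AzResourceGroup -Name <target-resource-group-name>
+```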
++++
+## Clean up
+
+# [Portal](#tab/azure-portal)
+
+To commit the changes and complete the virtual network move, you delete the source virtual network or resource group. To do so:
+1. On the Azure portal dashboard, select the virtual network or resource group.
+1. At the top of each pane, select **Delete**.
+
+# [PowerShell](#tab/azure-powershell)
+
+To commit your changes and complete the virtual network move, do either of the following:
+
+* Delete the resource group by using [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup):
+
+ ```azurepowershell-interactive
+
+ Remove-AzResourceGroup -Name <source-resource-group-name>
+ ```
+
+* Delete the source virtual network by using [Remove-AzVirtualNetwork](/powershell/module/az.network/remove-azvirtualnetwork):
+ ``` azurepowershell-interactive
+
+ Remove-AzVirtualNetwork -Name <source-virtual-network-name> -ResourceGroupName <source-resource-group-name>
+ ```
+++
+## Next steps
+
+To learn more about moving resources between regions and disaster recovery in Azure, refer to:
+
+- [Move resources to a new resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md)
+- [Move Azure VMs to another region](../site-recovery/azure-to-azure-tutorial-migrate.md)
postgresql Generative Ai Azure Openai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/generative-ai-azure-openai.md
Title: Generate vector embeddings with Azure OpenAI
-description: Generate vector embeddings with Azure OpenAI on Azure Database for PostgreSQL - Flexible Server.
+ Title: Generate vector embeddings with Azure OpenAI in Azure Database for PostgreSQL.
+description: Use vector indexes and Azure Open AI embeddings in PostgreSQL for retrieval augmented generation (RAG) patterns.
Last updated 01/02/2024
postgresql Reference Pg Azure Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/reference-pg-azure-storage.md
Title: Azure Storage Extension Preview reference
-description: Azure Storage Extension in Azure Database for PostgreSQL - Flexible Server -Preview reference
+ Title: Copy data with Azure Storage Extension on Azure Database for PostgreSQL.
+description: Copy, export or read data from Azure Blob Storage with the Azure Storage extension for Azure Database for PostgreSQL - Flexible Server.
postgresql Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes.md
Previously updated : 01/24/2024 Last updated : 03/03/2024 # Release notes - Azure Database for PostgreSQL - Flexible Server
Last updated 01/24/2024
This page provides latest news and updates regarding feature additions, engine versions support, extensions, and any other announcements relevant to Azure Database for PostgreSQL flexible server.
+## Release: February 2024
+* Support for [minor versions](./concepts-supported-versions.md) 16.1, 15.5, 14.10, 13.13, 12.17, 11.22 <sup>$</sup>
+ ## Release: January 2024 * General availability of [Server logs](./how-to-server-logs-portal.md) including Portal and CLI support. * General availability of UAE Central region.
private-multi-access-edge-compute-mec Partner Programs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-multi-access-edge-compute-mec/partner-programs.md
Our operator partners include:
- Deutsche Telekom - Elisa - Etisalat-- Tampnet
+- [Tampnet](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/tampnetas1686124551117.azure_tampnet_private_network?tab=Overview)
- TIM Brasil ### Technology Partners
Our application ISV partners include:
- [Red Viking](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/redviking1587070336894.rv_argonaut_on_mec?exp=ubp8&tab=Overview) - [Scenera](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/scenerainc1695952178961.scenera-maistro-saas-1?tab=Overview) - [Sensing Feeling](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/sensingfeelinglimited1671143541932.001?exp=ubp8)--[Tampnet](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/tampnetas1686124551117.azure_tampnet_private_network?tab=Overview) - Taqtile - [Trilogy Networks](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/trilogynetworksinc1688507869081.farmgrid-preview?tab=Overview&flightCodes=dec2dcd1-ef23-41d8-bf58-ce0c9d9b17c1) - [Unmanned Life](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/unmanned_life.robot-orchestration?tab=Overview)
reliability Availability Service By Category https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-service-by-category.md
Availability of services across Azure regions depends on a region's type. There
## Service categories across region types
-Azure services are grouped into three categories: *foundational*, *mainstream*, and *strategic*. Azure's general policy on deploying services into any given region is primarily driven by region type, service categories, and customer demand.
--- **Foundational**: Available in all recommended and alternate regions when the region is generally available, or within 90 days of a new foundational service becoming generally available.-- **Mainstream**: Available in all recommended regions within 90 days of the region general availability. Demand-driven in alternate regions, and many are already deployed into a large subset of alternate regions.-- **Strategic** (previously Specialized): Targeted service offerings, often industry-focused or backed by customized hardware. Demand-driven availability across regions, and many are already deployed into a large subset of recommended regions.-
-To see which services are deployed in a region and the future roadmap for preview or general availability of services in a region, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/).
-
-If a service offering isn't available in a region, contact your Microsoft sales representative for more information and to explore options.
-
-| Region type | Non-regional | Foundational | Mainstream | Strategic | Availability zones | Data residency |
-| | | | | | | |
-| Recommended | **Y** | **Y** | **Y** | Demand-driven | **Y** | **Y** |
-| Alternate | **Y** | **Y** | Demand-driven | Demand-driven | N/A | **Y** |
## Available services by region category
reliability Availability Zones Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-zones-migration-overview.md
The table below lists each product that offers migration guidance and/or informa
| [Azure Cache for Redis](migrate-cache-redis.md)| | [Azure AI Search](migrate-search-service.md)| | [Azure Container Instances](migrate-container-instances.md)|
+| [Azure Container Registry](/azure/container-registry/zone-redundancy?toc=/azure/reliability) |
+| [Azure Cosmos DB](/azure/cosmos-db/high-availability?toc=/azure/reliability) |
| [Azure Database for MySQL - Flexible Server](migrate-database-mysql-flex.md)|
+| [Azure Database for PostgreSQL](/azure/postgresql/flexible-server/how-to-manage-high-availability-portal#enable-high-availability-during-server-creation)|
+| [Azure Elastic SAN](reliability-elastic-san.md#availability-zone-migration)|
+| [Azure Functions](reliability-functions.md#availability-zone-migration)|
+| [Azure HDInsight](reliability-hdinsight.md#availability-zone-migration)|
+| [Azure Key Vault](/azure/key-vault/general/disaster-recovery-guidance?toc=/azure/reliability)|
+| [Azure Kubernetes Service](/azure/aks/availability-zones?toc=/azure/reliability)|
+| [Azure Logic Apps](/azure/logic-apps/set-up-zone-redundancy-availability-zones?tabs=standard&toc=/azure/reliability)|
| [Azure Monitor: Log Analytics](migrate-monitor-log-analytics.md)|
+| [Azure Service Bus](/azure/service-bus-messaging/service-bus-geo-dr#availability-zones?toc=/azure/reliability)|
| [Azure SQL Managed Instance](migrate-sql-managed-instance.md)|
reliability Migrate Monitor Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-monitor-log-analytics.md
Last updated 07/21/2022--+ # Migrate Log Analytics workspaces to availability zone support
-This guide describes how to migrate Log Analytics workspaces from non-availability zone support to availability support. We'll take you through the different options for migration.
+This guide describes how to migrate Log Analytics workspaces from non-availability zone support to availability zone support.
> [!NOTE]
-> Application Insights resources can also use availability zones, but only if they are workspace-based and the workspace uses a dedicated cluster as explained below. Classic (non-workspace-based) Application Insights resources cannot use availability zones.
+> Application Insights resources can also use availability zones, but only if they are workspace-based and the workspace uses a dedicated cluster. Classic (non-workspace-based) Application Insights resources cannot use availability zones.
## Prerequisites
There are no downtime requirements.
### Step 1: Determine the current cluster for your workspace
-To determine the current workspace link status for your workspace, use [CLI, PowerShell or REST](../azure-monitor/logs/logs-dedicated-clusters.md#check-workspace-link-status) to retrieve the [cluster details](../azure-monitor/logs/logs-dedicated-clusters.md#check-cluster-provisioning-status). If the cluster uses an availability zone, then it will have a property called `isAvailabilityZonesEnabled` with a value of `true`. Once a cluster is created, this property cannot be altered.
+To determine the current workspace link status for your workspace, use [CLI, PowerShell, or REST](../azure-monitor/logs/logs-dedicated-clusters.md#check-workspace-link-status) to retrieve the [cluster details](../azure-monitor/logs/logs-dedicated-clusters.md#check-cluster-provisioning-status). If the cluster uses an availability zone, then it has a property called `isAvailabilityZonesEnabled` with a value of `true`. Once a cluster is created, this property cannot be altered.
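+
+As a minimal sketch, you can also inspect the cluster properties with [Get-AzOperationalInsightsCluster](/powershell/module/az.operationalinsights/get-azoperationalinsightscluster) and look for the availability zones property noted above; the resource names are assumptions:
+
+```azurepowershell-interactive
+# Inspect the dedicated cluster's properties, including whether availability zones are enabled (placeholders are assumptions)
+Get-AzOperationalInsightsCluster -ResourceGroupName <resource-group-name> -ClusterName <cluster-name> | Format-List
+```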
### Step 2: Create a dedicated cluster with availability zone support
-Move your workspace to an availability zone by [creating a new dedicated cluster](../azure-monitor/logs/logs-dedicated-clusters.md#create-a-dedicated-cluster) in a region that supports availability zones. The cluster will automatically be enabled for availability zones. Then [link your workspace to the new cluster](../azure-monitor/logs/logs-dedicated-clusters.md#link-a-workspace-to-a-cluster).
+Move your workspace to an availability zone by [creating a new dedicated cluster](../azure-monitor/logs/logs-dedicated-clusters.md#create-a-dedicated-cluster) in a region that supports availability zones. The cluster is automatically enabled for availability zones. Then [link your workspace to the new cluster](../azure-monitor/logs/logs-dedicated-clusters.md#link-a-workspace-to-a-cluster).
> [!IMPORTANT] > Availability zone is defined on the cluster at creation time and can't be modified.
-Transitioning to a new cluster can be a gradual process. Don't remove the previous cluster until it has been purged of any data. For example, if your workspace retention is set 60 days, you may want to keep your old cluster running for that period before removing it.
+Transitioning to a new cluster can be a gradual process. Don't remove the previous cluster until it is purged of any data. For example, if your workspace retention is set 60 days, you may want to keep your old cluster running for that period before removing it.
-Any queries against your workspace will query both clusters as required to provide you with a single, unified result set. That means that all Azure Monitor features relying on the workspace such as workbooks and dashboards will keep getting the full, unified result set based on data from both clusters.
+Any query against your workspace queries both clusters as required to provide you with a single, unified result set. As a result, all Azure Monitor features that rely on the workspace, such as workbooks and dashboards, continue to receive the full, unified result set based on data from both clusters.
## Billing There is a [cost for using a dedicated cluster](../azure-monitor/logs/logs-dedicated-clusters.md#create-a-dedicated-cluster). It requires a daily capacity reservation of 500 GB.
-If you already have a dedicated cluster and choose to retain it to access its data, you'll be charged for both dedicated clusters. Starting August 4, 2021, the minimum required capacity reservation for dedicated clusters is reduced from 1000GB/Daily to 500GB/Daily, so we'd recommend applying that minimum to your old cluster to reduce charges.
+If you already have a dedicated cluster and choose to retain it to access its data, you are charged for both dedicated clusters. Starting August 4, 2021, the minimum required capacity reservation for dedicated clusters is reduced from 1000 GB/Daily to 500 GB/Daily, so we'd recommend applying that minimum to your old cluster to reduce charges.
The new cluster isn't billed during its first day to avoid double billing during configuration. Only the data ingested before the migration completes would still be billed on the date of migration.
-## Next steps
+## Related content
Learn more about:
-> [!div class="nextstepaction"]
-> [Azure Monitor Logs Dedicated Clusters](../azure-monitor/logs/logs-dedicated-clusters.md)
+- [Relocate Log Analytics workspaces to another region](../operational-excellence/relocation-log-analytics.md)
-> [!div class="nextstepaction"]
-> [Azure Services that support Availability Zones](availability-zones-service-support.md)
+- [Azure Monitor Logs Dedicated Clusters](../azure-monitor/logs/logs-dedicated-clusters.md)
+- [Azure Services that support Availability Zones](availability-zones-service-support.md)
reliability Reliability Azure Container Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-azure-container-apps.md
You should still use safe deployment techniques such as [blue-green deployment](
If you have enabled [session affinity](../container-apps/sticky-sessions.md), and a zone goes down, clients for that zone are routed to new replicas because the previous replicas are no longer available. Any state associated with the previous replicas is lost.
-### Availability zone redeployment and migration
+### Availability zone migration
To take advantage of availability zones, enable zone redundancy as you create the Container Apps environment. The environment must include a virtual network with an available subnet. You can't migrate an existing Container Apps environment from non-availability-zone support to availability zone support.
reliability Reliability Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-batch.md
Azure Batch account doesn't reallocate or create new nodes to compensate for nod
To prepare for a possible availability zone failure, you should over-provision your service's capacity so that the solution can tolerate a 1/3 loss of capacity and continue to function without degraded performance during zone-wide outages. Since the platform spreads VMs across three zones and you need to account for the failure of at least one zone, multiply the peak workload instance count by a factor of zones/(zones-1), or 3/2. For example, if your typical peak workload requires four instances, you should provision six instances: (2/3 * 6 instances) = 4 instances.
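A minimal sketch of that calculation (the helper name is illustrative):

```python
import math

def overprovisioned_instances(peak_instances: int, zones: int = 3) -> int:
    """Instances to provision so that losing one zone still leaves
    enough capacity for the peak workload."""
    return math.ceil(peak_instances * zones / (zones - 1))

# Four instances at peak across three zones -> provision six, so a
# zone-wide outage (a 1/3 capacity loss) still leaves four running.
print(overprovisioned_instances(4))  # 6
```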
-### Availability zone redeployment and migration
+### Availability zone migration
You can't migrate an existing Batch pool to availability zone support. If you wish to recreate your Batch pool across availability zones, see [Create an Azure Batch pool across availability zones](/azure/batch/create-pool-availability-zones).
reliability Reliability Elastic San https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-elastic-san.md
If you deployed an LRS elastic SAN, you may need to deploy a new SAN, using snap
The latency difference between an elastic SAN on LRS and an elastic SAN on ZRS isn't particularly high. However, for workloads sensitive to latency spikes, consider an elastic SAN on LRS since it offers the lowest latency.
-### Availability zone redeployment and migration
+### Availability zone migration
To migrate an elastic SAN on LRS to ZRS, you must snapshot your elastic SAN's volumes, export them to managed disk snapshots, deploy an elastic SAN on ZRS, and then create volumes on the SAN on ZRS using those disk snapshots. To learn how to use snapshots (preview), see [Snapshot Azure Elastic SAN volumes (preview)](../storage/elastic-san/elastic-san-snapshots.md).
reliability Reliability Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-functions.md
To learn more about these templates, see [Automate resource deployment in Azure
After the zone-redundant plan is created and deployed, any function app hosted on your new plan is considered zone-redundant.
-### Migrate your function app to a zone-redundant plan
+### Availability zone migration
Azure Functions currently doesn't support in-place migration of existing function app instances. For information on how to migrate the public multitenant Premium plan from non-availability zone to availability zone support, see [Migrate App Service to availability zone support](../reliability/migrate-functions.md).
reliability Reliability Hdinsight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-hdinsight.md
When the HDInsight cluster is ready, you can check the location to see which ava
You can scale up an HDInsight cluster with more worker nodes. The newly added worker nodes are placed in the same availability zone as the cluster.
-### Availability zone redeployment
+### Availability zone migration
Azure HDInsight currently doesn't support in-place migration of existing cluster instances to availability zone support. However, you can [recreate your cluster](#create-an-hdinsight-cluster-using-availability-zone) and choose a different availability zone or region during cluster creation. A secondary standby cluster in a different region and a different availability zone can be used in disaster recovery scenarios.
security Azure Domains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/azure-domains.md
--++ Last updated 07/07/2020
sentinel Connect Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-aws.md
The following instructions apply for public **Azure Commercial clouds** only. Fo
1. Edit the new role's trust policy and add another condition:<br>`"sts:RoleSessionName": "MicrosoftSentinel_{WORKSPACE_ID}"`
+ > [!IMPORTANT]
+ > The value of the `sts:RoleSessionName` parameter must have the exact prefix `MicrosoftSentinel_`; otherwise, the connector won't function properly.
+ The finished trust policy combines the `sts:ExternalId` and `sts:RoleSessionName` conditions.
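As a minimal sketch, here's one way to assemble such a policy in Python; the principal value is a placeholder for the Microsoft Sentinel account shown on the connector page, and `trust_policy` is just an illustrative name:

```python
import json

WORKSPACE_ID = "<your Log Analytics workspace ID>"  # placeholder

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            # Placeholder: use the Microsoft Sentinel principal value
            # shown on the connector page in the Azure portal.
            "Principal": {"AWS": "<Microsoft Sentinel AWS principal>"},
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {
                    "sts:ExternalId": WORKSPACE_ID,
                    # Must carry the exact MicrosoftSentinel_ prefix.
                    "sts:RoleSessionName": f"MicrosoftSentinel_{WORKSPACE_ID}",
                }
            },
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
```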
sentinel Amazon Web Services S3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/amazon-web-services-s3.md
Title: "Amazon Web Services S3 connector for Microsoft Sentinel (preview)"
description: "Learn how to install the connector Amazon Web Services S3 to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/02/2024
This connector allows you to ingest AWS service logs, collected in AWS S3 bucket
* VPC Flow Logs
* AWS GuardDuty
+For more information, see the [Microsoft Sentinel documentation](https://go.microsoft.com/fwlink/p/?linkid=2218883&wt.mc_id=sentinel_dataconnectordocs_content_cnl_csasci).
+
## Connector attributes

| Connector attribute | Description |
sentinel Amazon Web Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/amazon-web-services.md
Title: "Amazon Web Services connector for Microsoft Sentinel"
description: "Learn how to install the connector Amazon Web Services to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 03/02/2024 # Amazon Web Services connector for Microsoft Sentinel
-Follow these instructions to connect to AWS and stream your CloudTrail logs into Microsoft Sentinel.
+Follow these instructions to connect to AWS and stream your CloudTrail logs into Microsoft Sentinel. For more information, see the [Microsoft Sentinel documentation](https://go.microsoft.com/fwlink/p/?linkid=2218883&wt.mc_id=sentinel_dataconnectordocs_content_cnl_csasci).
## Connector attributes
sentinel Enable Entity Behavior Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/enable-entity-behavior-analytics.md
Title: Enable entity behavior analytics to detect advanced threats description: Enable User and Entity Behavior Analytics in Microsoft Sentinel, and configure data sources + Last updated 07/05/2023- # Enable User and Entity Behavior Analytics (UEBA) in Microsoft Sentinel
To enable or disable this feature (these prerequisites are not required to use t
In this article, you learned how to enable and configure User and Entity Behavior Analytics (UEBA) in Microsoft Sentinel. For more information about UEBA:

> [!div class="nextstepaction"]
->>[Configure data retention and archive](configure-data-retention-archive.md)
+>>[Investigate entities with entity pages](entity-pages.md)
sentinel Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/feature-availability.md
Previously updated : 07/25/2023 Last updated : 02/11/2024 # Microsoft Sentinel feature support for Azure commercial/other clouds
This article describes the features available in Microsoft Sentinel across diffe
|Feature |Feature stage |Azure commercial |Azure Government |Azure China 21Vianet |
||||||
|[Analytics rules health](monitor-analytics-rule-integrity.md) |Public preview |&#x2705; |&#10060; |&#10060; |
-|[MITRE ATT&CK dashboard](mitre-coverage.md) |Public preview |&#x2705; |&#10060; |&#10060; |
+|[MITRE ATT&CK dashboard](mitre-coverage.md) |Public preview |&#x2705; |&#x2705; |&#x2705; |
|[NRT rules](near-real-time-rules.md) |GA |&#x2705; |&#x2705; |&#x2705; |
|[Recommendations](detection-tuning.md) |Public preview |&#x2705; |&#x2705; |&#10060; |
|[Scheduled](detect-threats-built-in.md) and [Microsoft rules](create-incidents-from-alerts.md) |GA |&#x2705; |&#x2705; |&#x2705; |
This article describes the features available in Microsoft Sentinel across diffe
|Feature |Feature stage |Azure commercial |Azure Government |Azure China 21Vianet |
||||||
-|[Amazon Web Services](connect-aws.md?tabs=ct) |GA |&#x2705; |&#10060; |&#10060; |
+|[Amazon Web Services](connect-aws.md?tabs=ct) |GA |&#x2705; |&#x2705; |&#10060; |
|[Amazon Web Services S3 (Preview)](connect-aws.md?tabs=s3) |Public preview |&#x2705; |&#x2705; |&#10060; |
|[Microsoft Entra ID](connect-azure-active-directory.md) |GA |&#x2705; |&#x2705;|&#x2705; <sup>[1](#logsavailable)</sup> |
|[Microsoft Entra ID Protection](connect-services-api-based.md) |GA |&#x2705;| &#x2705; |&#10060; |
This article describes the features available in Microsoft Sentinel across diffe
|[Office 365](connect-services-api-based.md) |GA |&#x2705;|&#x2705; |&#x2705; |
|[Security Events via Legacy Agent](connect-services-windows-based.md#log-analytics-agent-legacy) |GA |&#x2705; |&#x2705;|&#x2705; |
|[Syslog](connect-syslog.md) |GA |&#x2705;| &#x2705;|&#x2705; |
-|[Windows DNS Events via AMA (Preview)](connect-dns-ama.md) |Public preview |&#x2705; |&#10060;|&#10060; |
+|[Windows DNS Events via AMA](connect-dns-ama.md) |GA |&#x2705; |&#x2705;|&#x2705; |
|[Windows Firewall](data-connectors/windows-firewall.md) |GA |&#x2705; |&#x2705;|&#x2705; |
|[Windows Forwarded Events](connect-services-windows-based.md) |GA |&#x2705;|&#x2705; |&#x2705; |
|[Windows Security Events via AMA](connect-services-windows-based.md) |GA |&#x2705; |&#x2705;|&#x2705; |
This article describes the features available in Microsoft Sentinel across diffe
|Feature |Feature stage |Azure commercial |Azure Government |Azure China 21Vianet |
||||||
-|[Add entities to threat intelligence](add-entity-to-threat-intelligence.md?tabs=incidents) |Public preview |&#x2705; |&#x2705; |&#10060; |
+|[Add entities to threat intelligence](add-entity-to-threat-intelligence.md?tabs=incidents) |Public preview |&#x2705; |&#x2705; |&#x2705; |
|[Advanced and/or conditions](add-advanced-conditions-to-automation-rules.md) |GA |&#x2705; |&#x2705;| &#x2705; |
|[Automation rules](automate-incident-handling-with-automation-rules.md) |GA |&#x2705; |&#x2705;| &#x2705; |
|[Automation rules health](monitor-automation-health.md) |Public preview |&#x2705; |&#x2705;| &#10060; |
This article describes the features available in Microsoft Sentinel across diffe
|Feature |Feature stage |Azure commercial |Azure Government |Azure China 21Vianet |
||||||
-|[Notebooks](notebooks.md) |GA |&#x2705;|&#x2705; |&#x2705; |
-|[Notebook integration with Azure Synapse](notebooks-with-synapse.md) |Public preview |&#x2705;|&#x2705; |&#x2705; |
+|[Notebooks](notebooks.md) |GA |&#x2705; |&#x2705; |&#x2705; |
+|[Notebook integration with Azure Synapse](notebooks-with-synapse.md) |Public preview |&#x2705; |&#x2705; |&#x2705; |
## SAP
sentinel Identify Threats With Entity Behavior Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/identify-threats-with-entity-behavior-analytics.md
Title: Identify advanced threats with User and Entity Behavior Analytics (UEBA) in Microsoft Sentinel | Microsoft Docs description: Create behavioral baselines for entities (users, hostnames, IP addresses) and use them to detect anomalous behavior and identify zero-day advanced persistent threats (APT). + Last updated 08/08/2022- # Identify advanced threats with User and Entity Behavior Analytics (UEBA) in Microsoft Sentinel
Information about **entity pages** can now be found at [Investigate entities wit
## Querying behavior analytics data
-Using [KQL](/azure/data-explorer/kusto/query/), we can query the Behavioral Analytics Table.
+Using [KQL](/azure/data-explorer/kusto/query/), we can query the **BehaviorAnalytics** table.
For example, if we want to find all the cases of a user who failed to sign in to an Azure resource, where it was the user's first attempt to connect from a given country/region, and connections from that country/region are uncommon even for the user's peers, we can filter the **BehaviorAnalytics** table on those insights.
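A sketch of running such a query with the Azure Monitor Query client library for Python; the `ActivityInsights` field names here are assumptions based on the documented UEBA enrichments, so verify them against your workspace schema:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Failed sign-ins from a country/region that is new for the user and
# uncommon among the user's peers.
query = """
BehaviorAnalytics
| where ActivityType == "FailedLogOn"
| where ActivityInsights.FirstTimeUserConnectedFromCountry == true
| where ActivityInsights.CountryUncommonlyConnectedFromAmongPeers == true
"""

response = client.query_workspace(
    workspace_id="<your Log Analytics workspace ID>",  # placeholder
    query=query,
    timespan=timedelta(days=7),
)
for table in response.tables:
    for row in table.rows:
        print(row)
```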
Microsoft Sentinel calculates and ranks a user's peers, based on the user's Mi
You can use the [Jupyter notebook](https://github.com/Azure/Azure-Sentinel-Notebooks/tree/master/scenario-notebooks/UserSecurityMetadata) provided in the Microsoft Sentinel GitHub repository to visualize the user peers metadata. For detailed instructions on how to use the notebook, see the [Guided Analysis - User Security Metadata](https://github.com/Azure/Azure-Sentinel-Notebooks/blob/master/scenario-notebooks/UserSecurityMetadata/Guided%20Analysis%20-%20User%20Security%20Metadata.ipynb) notebook.
-### Permission analytics - table and notebook
-
-Permission analytics helps determine the potential impact of the compromising of an organizational asset by an attacker. This impact is also known as the asset's "blast radius." Security analysts can use this information to prioritize investigations and incident handling.
-
-Microsoft Sentinel determines the direct and transitive access rights held by a given user to Azure resources, by evaluating the Azure subscriptions the user can access directly or via groups or service principals. This information, as well as the full list of the user's Microsoft Entra security group membership, is then stored in the **UserAccessAnalytics** table. The screenshot below shows a sample row in the UserAccessAnalytics table, for the user Alex Johnson. **Source entity** is the user or service principal account, and **target entity** is the resource that the source entity has access to. The values of **access level** and **access type** depend on the access-control model of the target entity. You can see that Alex has Contributor access to the Azure subscription *Contoso Hotels Tenant*. The access control model of the subscription is Azure RBAC.
--
-You can use the [Jupyter notebook](https://github.com/Azure/Azure-Sentinel-Notebooks/tree/master/scenario-notebooks/UserSecurityMetadata) (the same notebook mentioned above) from the Microsoft Sentinel GitHub repository to visualize the permission analytics data. For detailed instructions on how to use the notebook, see the [Guided Analysis - User Security Metadata](https://github.com/Azure/Azure-Sentinel-Notebooks/blob/master/scenario-notebooks/UserSecurityMetadata/Guided%20Analysis%20-%20User%20Security%20Metadata.ipynb) notebook.
+> [!NOTE]
+> The *UserAccessAnalytics* table has been deprecated.
### Hunting queries and exploration queries
sentinel Investigate With Ueba https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/investigate-with-ueba.md
The **IdentityInfo** table synchronizes with your Microsoft Entra workspace to c
## Identify password spray and spear phishing attempts
-Without multi-factor authentication (MFA) enabled, user credentials are vulnerable to attackers looking to compromise attacks with [password spraying](https://www.microsoft.com/security/blog/2020/04/23/protecting-organization-password-spray-attacks/) or [spear phishing](https://www.microsoft.com/security/blog/2019/12/02/spear-phishing-campaigns-sharper-than-you-think/) attempts.
+Without multifactor authentication (MFA) enabled, user credentials are vulnerable to attackers looking to compromise attacks with [password spraying](https://www.microsoft.com/security/blog/2020/04/23/protecting-organization-password-spray-attacks/) or [spear phishing](https://www.microsoft.com/security/blog/2019/12/02/spear-phishing-campaigns-sharper-than-you-think/) attempts.
### Investigate a password spray incident with UEBA insights
The Investigation graph includes a node for the detonated URL, as well as the fo
- **DetonationVerdict**. The high-level, Boolean determination from detonation. For example, **Bad** means that the site was classified as hosting malware or phishing content.
- **DetonationFinalURL**. The final, observed landing page URL, after all redirects from the original URL.
- **DetonationScreenshot**. A screenshot of what the page looked like at the time that the alert was triggered. Select the screenshot to enlarge. For example:
storage Storage Explorer Support Policy Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-explorer-support-policy-lifecycle.md
This table describes the release date and the end of support date for each relea
| Storage Explorer version | Release date | End of support date |
|:-:|:-:|:-:|
+| v1.33.0 | March 1, 2024 | March 1, 2025 |
| v1.32.1 | November 15, 2023 | November 1, 2024 |
| v1.32.0 | November 1, 2023 | November 1, 2024 |
| v1.31.2 | October 3, 2023 | August 11, 2024 |
storage Vs Azure Tools Storage Manage With Storage Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/storage-explorer/vs-azure-tools-storage-manage-with-storage-explorer.md
Additional requirements include:
The following versions of macOS support Storage Explorer:
-* macOS 10.13 High Sierra and later versions
+* macOS 10.15 Catalina and later versions
Starting with Storage Explorer version 1.31.0, both x64 (Intel) and ARM64 (Apple Silicon) versions of Storage Explorer are available for download.
synapse-analytics Sql Data Warehouse Service Capacity Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-service-capacity-limits.md
Title: Capacity limits for dedicated SQL pool
description: Maximum values allowed for various components of dedicated SQL pool in Azure Synapse Analytics. - Previously updated : 6/20/2023+ Last updated : 03/01/2024 -+
+ - azure-synapse
# Capacity limits for dedicated SQL pool in Azure Synapse Analytics
Maximum values allowed for various components of dedicated SQL pool in Azure Syn
| Category | Description | Maximum |
|: |: |: |
| [Data Warehouse Units (DWU)](what-is-a-data-warehouse-unit-dwu-cdwu.md) |Max DWU for a single dedicated SQL pool | Gen1: DW6000<br></br>Gen2: DW30000c |
-| [Data Warehouse Units (DWU)](what-is-a-data-warehouse-unit-dwu-cdwu.md) |Default DTU per server |54,000<br></br>By default, each SQL server (for example, myserver.database.windows.net) has a DTU Quota of 54,000, which allows up to DW6000c. This quota is simply a safety limit. You can increase your quota by [creating a support ticket](sql-data-warehouse-get-started-create-support-ticket.md) and selecting *Quota* as the request type. To calculate your DTU needs, multiply the 7.5 by the total DWU needed, or multiply 9 by the total cDWU needed. For example:<br></br>DW6000 x 7.5 = 45,000 DTUs<br></br>DW7500c x 9 = 67,500 DTUs.<br></br>You can view your current DTU consumption from the SQL server option in the portal. Both paused and unpaused databases count toward the DTU quota. |
-| Database connection |Maximum Concurrent open sessions |1024<br/><br/>The number of concurrent open sessions will vary based on the selected DWU. DWU1000c and above support a maximum of 1024 open sessions. DWU500c and below, support a maximum concurrent open session limit of 512. Note, there are limits on the number of queries that can execute concurrently. When the concurrency limit is exceeded, the request goes into an internal queue where it waits to be processed. |
+| [Data Warehouse Units (DWU)](what-is-a-data-warehouse-unit-dwu-cdwu.md) |Default [Database Transaction Unit (DTU)](/azure/azure-sql/database/service-tiers-dtu?bc=%2fazure%2fsynapse-analytics%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fsynapse-analytics%2ftoc.json) per server |54,000<br></br>By default, each SQL server (for example, `myserver.database.windows.net`) has a DTU quota of 54,000, which allows up to DW6000c. This quota is simply a safety limit. You can increase your quota by [creating a support ticket](sql-data-warehouse-get-started-create-support-ticket.md) and selecting *Quota* as the request type. To calculate your DTU needs, multiply 7.5 by the total DWU needed, or multiply 9 by the total cDWU needed. For example:<br></br>DW6000 x 7.5 = 45,000 DTUs<br></br>DW7500c x 9 = 67,500 DTUs (see the calculation sketch after this table).<br></br>You can view your current DTU consumption from the SQL server option in the portal. Both paused and unpaused databases count toward the DTU quota. |
+| Database connection |Maximum concurrent open sessions |1,024<br/><br/>The number of concurrent open sessions varies based on the selected DWU. DWU1000c and higher support a maximum of 1,024 open sessions. DWU500c and lower support a maximum concurrent open session limit of 512. Note that there are limits on the number of queries that can execute concurrently. When the concurrency limit is exceeded, the request goes into an internal queue where it waits to be processed.<br><br/>Idle session connections are not automatically closed. |
| Database connection |Maximum memory for prepared statements |20 MB |
-| [Workload management](resource-classes-for-workload-management.md) |Maximum concurrent queries |128<br/><br/> A maximum of 128 concurrent queries will execute and remaining queries will be queued.<br/><br/>The number of concurrent queries can decrease when users are assigned to higher resource classes or when the [data warehouse unit](memory-concurrency-limits.md) setting is lowered. Some queries, like DMV queries, are always allowed to run and do not impact the concurrent query limit. For more information on concurrent query execution, see the [concurrency maximums](memory-concurrency-limits.md) article. |
-| [tempdb](sql-data-warehouse-tables-temporary.md) |Maximum GB |399 GB per DW100c. For example, at DWU1000c, tempdb is sized to 3.99 TB. |
-||||
+| [Workload management](resource-classes-for-workload-management.md) |Maximum concurrent queries |128<br/><br/>A maximum of 128 concurrent queries can execute and remaining queries are queued.<br/><br/>The number of concurrent queries can decrease when users are assigned to higher resource classes or when the [data warehouse unit](memory-concurrency-limits.md) setting is lowered. Some queries, like DMV queries, are always allowed to run and do not affect the concurrent query limit. For more information on concurrent query execution, see the [concurrency maximums](memory-concurrency-limits.md) article. |
+| [tempdb](sql-data-warehouse-tables-temporary.md) |Maximum GB |399 GB per DW100c. For example, at DWU1000c, `tempdb` is sized to 3.99 TB. |
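A quick sketch of the DTU arithmetic from the table above (constant and helper names are illustrative; the multipliers and default quota come from the table):

```python
DTU_PER_DWU_GEN1 = 7.5   # DTUs per DWU (Gen1)
DTU_PER_CDWU_GEN2 = 9    # DTUs per cDWU (Gen2)
DEFAULT_DTU_QUOTA = 54_000

def dtus_needed(size: int, gen2: bool = True) -> float:
    """DTU quota consumed by a dedicated SQL pool of the given size."""
    return size * (DTU_PER_CDWU_GEN2 if gen2 else DTU_PER_DWU_GEN1)

print(dtus_needed(6000, gen2=False))           # 45000.0 -> DW6000 fits the default quota
print(dtus_needed(7500))                       # 67500.0 -> DW7500c needs a quota increase
print(dtus_needed(7500) <= DEFAULT_DTU_QUOTA)  # False
```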
## Database objects

| Category | Description | Maximum |
|: |: |: |
-| Database |Max size | Gen1: 240 TB compressed on disk. This space is independent of tempdb or log space, and therefore this space is dedicated to permanent tables. Clustered columnstore compression is estimated at 5X. This compression allows the database to grow to approximately 1 PB when all tables are clustered columnstore (the default table type). <br/><br/> Gen2: Unlimited storage for columnstore tables. Rowstore portion of the database is still limited to 240 TB compressed on disk. |
+| Database |Max size | Gen1: 240 TB compressed on disk. This space is independent of `tempdb` or log space, and therefore this space is dedicated to permanent tables. Clustered columnstore compression is estimated at 5X. This compression allows the database to grow to approximately 1 PB when all tables are clustered columnstore (the default table type). <br/><br/> Gen2: Unlimited storage for columnstore tables. Rowstore portion of the database is still limited to 240 TB compressed on disk. |
| Table |Max size |Unlimited size for columnstore tables. <br>60 TB for rowstore tables compressed on disk. |
| Table |Tables per database | 100,000 |
-| Table |Columns per table |1024 columns |
+| Table |Columns per table |1,024 columns |
| Table |Bytes per column |Dependent on column [data type](sql-data-warehouse-tables-data-types.md). Limit is 8000 for char data types, 4000 for nvarchar, or 2 GB for MAX data types. |
-| Table |Bytes per row, defined size |8060 bytes<br/><br/>The number of bytes per row is calculated in the same manner as it is for SQL Server with page compression. Like SQL Server, row-overflow storage is supported, which enables **variable length columns** to be pushed off-row. When variable length rows are pushed off-row, only 24-byte root is stored in the main record. For more information, see [Row-Overflow Data Exceeding 8 KB](/previous-versions/sql/sql-server-2008-r2/ms186981(v=sql.105)). |
+| Table |Bytes per row, defined size |8,060 bytes<br/><br/>The number of bytes per row is calculated in the same manner as it is for SQL Server with page compression. Like SQL Server, row-overflow storage is supported, which enables **variable length columns** to be pushed off-row. When variable length rows are pushed off-row, only a 24-byte root is stored in the main record. For more information, see [Row-Overflow Data Exceeding 8 KB](/previous-versions/sql/sql-server-2008-r2/ms186981(v=sql.105)). |
| Table |Partitions per table |15,000<br/><br/>For high performance, we recommend minimizing the number of partitions you need while still supporting your business requirements. As the number of partitions grows, the overhead for Data Definition Language (DDL) and Data Manipulation Language (DML) operations grows and causes slower performance. |
| Table |Characters per partition boundary value. |4,000 |
-| Index |Non-clustered indexes per table. |50<br/><br/>Applies to rowstore tables only. |
+| Index |Nonclustered indexes per table. |50<br/><br/>Applies to rowstore tables only. |
| Index |Clustered indexes per table. |1<br><br/>Applies to both rowstore and columnstore tables. |
| Index |Index key size. |900 bytes.<br/><br/>Applies to rowstore indexes only.<br/><br/>Indexes on varchar columns with a maximum size of more than 900 bytes can be created if the existing data in the columns does not exceed 900 bytes when the index is created. However, later INSERT or UPDATE actions on the columns that cause the total size to exceed 900 bytes will fail. |
| Index |Key columns per index. |16<br/><br/>Applies to rowstore indexes only. Clustered columnstore indexes include all columns. |
Maximum values allowed for various components of dedicated SQL pool in Azure Syn
| Stored Procedures |Maximum levels of nesting. |8 |
| View |Columns per view |1,024 |
| Workload Classifier |User-defined classifier |100 |
-||||
## Loads

| Category | Description | Maximum |
|: |: |: |
| Polybase Loads |MB per row |1<br/><br/>Polybase loads rows that are smaller than 1 MB. Loading LOB data types into tables with a Clustered Columnstore Index (CCI) is not supported.<br/> |
-|Polybase Loads|Total number of files|1,000,000<br/><br/>Polybase loads can not exceed more than 1M files. You may experience the following error: **Operation failed as split count exceeding upper bound of 1000000**.|
+|Polybase Loads|Total number of files|1,000,000<br/><br/>Polybase loads can't exceed 1 million files. You might experience the following error: **Operation failed as split count exceeding upper bound of 1000000**.|
## Queries
Maximum values allowed for various components of dedicated SQL pool in Azure Syn
| Query |Queued queries on system views |1000 |
| Query |Maximum parameters |2098 |
| Batch |Maximum size |65,536*4096 |
-| SELECT results |Columns per row |4096<br/><br/>You can never have more than 4096 columns per row in the SELECT result. There is no guarantee that you can always have 4096. If the query plan requires a temporary table, the 1024 columns per table maximum might apply. |
+| SELECT results |Columns per row |4,096<br/><br/>You can never have more than 4,096 columns per row in the SELECT result. There is no guarantee that you can always have 4,096. If the query plan requires a temporary table, the 1,024 columns per table maximum might apply. |
| SELECT |Nested subqueries |32<br/><br/>You can never have more than 32 nested subqueries in a SELECT statement. There is no guarantee that you can always have 32. For example, a JOIN can introduce a subquery into the query plan. The number of subqueries can also be limited by available memory. |
-| SELECT |Columns per JOIN |1024 columns<br/><br/>You can never have more than 1024 columns in the JOIN. There is no guarantee that you can always have 1024. If the JOIN plan requires a temporary table with more columns than the JOIN result, the 1024 limit applies to the temporary table. |
-| SELECT |Bytes per GROUP BY columns. |8060<br/><br/>The columns in the GROUP BY clause can have a maximum of 8060 bytes. |
-| SELECT |Bytes per ORDER BY columns |8060 bytes<br/><br/>The columns in the ORDER BY clause can have a maximum of 8060 bytes |
-| Identifiers per statement |Number of referenced identifiers |65,535<br/><br/> The number of identifiers that can be contained in a single expression of a query is limited. Exceeding this number results in SQL Server error 8632. For more information, see [Internal error: An expression services limit has been reached](https://support.microsoft.com/help/913050/error-message-when-you-run-a-query-in-sql-server-2005-internal-error-a). |
+| SELECT |Columns per JOIN |1,024 columns<br/><br/>You can never have more than 1,024 columns in the JOIN. There is no guarantee that you can always have 1,024. If the JOIN plan requires a temporary table with more columns than the JOIN result, the 1,024 limit applies to the temporary table. |
+| SELECT |Bytes per GROUP BY columns. |8,060<br/><br/>The columns in the GROUP BY clause can have a maximum of 8,060 bytes. |
+| SELECT |Bytes per ORDER BY columns |8,060 bytes<br/><br/>The columns in the ORDER BY clause can have a maximum of 8,060 bytes. |
+| Identifiers per statement |Number of referenced identifiers |65,535<br/><br/> The number of identifiers that can be contained in a single expression of a query is limited. Exceeding this number results in SQL Server error 8632. For more information, see [Internal error: An expression services limit has been reached](https://support.microsoft.com/help/913050/error-message-when-you-run-a-query-in-sql-server-2005-internal-error-a). |
| String literals | Number of string literals in a statement | 32,500 <br/><br/>The number of string constants in a single expression of a query is limited. Exceeding this number results in SQL Server error 8632.|
-||||
## Metadata
-DMV's will reset when a dedicated SQL pool is paused or when it is scaled.
+Cumulative data in DMVs resets when a dedicated SQL pool is paused or scaled.
| System view | Maximum rows |
|: |: |
DMV's will reset when a dedicated SQL pool is paused or when it is scaled.
| [sys.dm_pdw_errors](/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-errors-transact-sql?view=azure-sqldw-latest&preserve-view=true) |10,000 |
| [sys.dm_pdw_exec_requests](/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-exec-requests-transact-sql?view=azure-sqldw-latest&preserve-view=true) |10,000 |
| [sys.dm_pdw_exec_sessions](/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-exec-sessions-transact-sql?view=azure-sqldw-latest&preserve-view=true) |10,000 |
-| [sys.dm_pdw_request_steps](/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-request-steps-transact-sql?view=azure-sqldw-latest&preserve-view=true) |Total number of steps for the most recent 1000 SQL requests that are stored in sys.dm_pdw_exec_requests. |
-| [sys.dm_pdw_sql_requests](/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-sql-requests-transact-sql?view=azure-sqldw-latest&preserve-view=true) |The most recent 1000 SQL requests that are stored in sys.dm_pdw_exec_requests. |
-|||
+| [sys.dm_pdw_request_steps](/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-request-steps-transact-sql?view=azure-sqldw-latest&preserve-view=true) |Total number of steps for the most recent 1000 SQL requests that are stored in `sys.dm_pdw_exec_requests`. |
+| [sys.dm_pdw_sql_requests](/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-sql-requests-transact-sql?view=azure-sqldw-latest&preserve-view=true) |The most recent 1000 SQL requests that are stored in `sys.dm_pdw_exec_requests`. |
-## Next steps
+## Related content
-For recommendations on using Azure Synapse, see the [Cheat Sheet](cheat-sheet.md).
+- [Cheat sheet for dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics](cheat-sheet.md)
+- [Best practices for dedicated SQL pools in Azure Synapse Analytics](../sql/best-practices-dedicated-sql-pool.md)
+- [Synapse implementation success methodology: Evaluate dedicated SQL pool design](../guidance/implementation-success-evaluate-dedicated-sql-pool-design.md)
virtual-machines Agent Dependency Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/agent-dependency-linux.md
The Azure Monitor for VMs Map feature gets its data from the Microsoft Dependenc
### Operating system
-The Azure VM Dependency agent extension for Linux can be run against the supported operating systems listed in the [Supported operating systems](../../azure-monitor/vm/vminsights-enable-overview.md#supported-operating-systems) section of the Azure Monitor for VMs deployment article.
+Because the Azure VM Dependency agent works at the kernel level, operating system support also depends on the kernel version. As of Dependency agent version 9.10.*, the agent supports * kernels. The following table lists the major and minor Linux OS releases and supported kernel versions for the Dependency agent.
+
## Extension schema
virtual-machines Agent Dependency Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/agent-dependency-windows.md
The Azure Monitor for VMs Map feature gets its data from the Microsoft Dependenc
## Operating system
-The Azure VM Dependency agent extension for Windows can be run against the supported operating systems listed in the [Supported operating systems](../../azure-monitor/vm/vminsights-enable-overview.md#supported-operating-systems) section of the Azure Monitor for VMs deployment article.
+The Azure VM Dependency agent extension for Windows can be run against the supported operating systems listed in the following table. All operating systems in the following table are assumed to be x64. x86 isn't supported for any operating system.
+
## Extension schema