Updates from: 07/20/2024 01:10:19
Service Microsoft Docs article Related commit history on GitHub Change details
ai-services Use Sdk Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/how-to-guides/use-sdk-rest-api.md
Previously updated : 06/11/2024 Last updated : 07/18/2024 zone_pivot_groups: programming-languages-set-formre
Choose from the following Document Intelligence models and analyze and extract d
::: zone pivot="programming-language-csharp" ::: moniker range="doc-intel-4.0.0" ::: moniker-end ::: moniker range="doc-intel-3.1.0 || doc-intel-3.0.0"
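For orientation, this how-to walks through choosing a Document Intelligence model and analyzing a document with it. The excerpt above shows the C# zone pivot; as an illustrative sketch only (not taken from the article), the same flow in Python with the `azure-ai-formrecognizer` package looks roughly like this, where the endpoint, key, file name, and choice of the `prebuilt-layout` model are placeholder assumptions:

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

# Placeholder endpoint and key for your Document Intelligence resource.
client = DocumentAnalysisClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

# Analyze a local file with the prebuilt layout model; other model IDs work the same way.
with open("sample.pdf", "rb") as document:
    poller = client.begin_analyze_document("prebuilt-layout", document=document)
result = poller.result()

# Print the text lines recognized on each page.
for page in result.pages:
    for line in page.lines:
        print(line.content)
```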
ai-services Prompt Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/tutorials/prompt-flow.md
This tutorial teaches you how to use Language in prompt flow utilizing [Azure AI
- An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>. -- Access granted to Azure OpenAI in the desired Azure subscription.-
- Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue.
-
- You need an Azure AI Studio hub or permissions to create one. Your user role must be **Azure AI Developer**, **Contributor**, or **Owner** on the hub. For more information, see [hubs](../../../ai-studio/concepts/ai-resources.md) and [Azure AI roles](../../../ai-studio/concepts/rbac-ai-studio.md).
  - If your role is **Contributor** or **Owner**, you can [create a hub in this tutorial](#create-a-project-in-azure-ai-studio).
  - If your role is **Azure AI Developer**, the hub must already be created.
ai-services Assistants Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/assistants-quickstart.md
Previously updated : 05/20/2024 Last updated : 07/18/2024 zone_pivot_groups: openai-quickstart-assistants recommendations: false
ai-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/models.md
recommendations: false
# Azure OpenAI Service models
-Azure OpenAI Service is powered by a diverse set of models with different capabilities and price points. Model availability varies by region.
+Azure OpenAI Service is powered by a diverse set of models with different capabilities and price points. Model availability varies by region and cloud. For Azure Government model availability, please refer to [Azure Government OpenAI Service](../../../azure-government/compare-azure-government-global-azure.md#azure-ai-services-openai-service).
| Models | Description |
|--|--|
In addition to the regions above which are available to all Azure OpenAI custome
| `gpt-4` (0314) <br> `gpt-4-32k` (0314) | East US <br> France Central <br> South Central US <br> UK South |
| `gpt-4` (0613) <br> `gpt-4-32k` (0613) | East US <br> East US 2 <br> Japan East <br> UK South |
-#### Azure Government regions
-
-The following GPT-4 models are available with [Azure Government](/azure/azure-government/documentation-government-welcome):
-
-|Model ID | Model Availability |
-|--|--|
-| `gpt-4` (1106-Preview) | US Gov Virginia<br>US Gov Arizona |
-
### GPT-3.5 models

> [!IMPORTANT]
See [model versions](../concepts/model-versions.md) to learn about how Azure Ope
[!INCLUDE [GPT-35-Turbo](../includes/model-matrix/standard-gpt-35-turbo.md)]
-#### Azure Government regions
-
-The following GPT-3.5 turbo models are available with [Azure Government](/azure/azure-government/documentation-government-welcome):
-
-|Model ID | Model Availability |
-|--|--|
-| `gpt-35-turbo` (1106-Preview) | US Gov Virginia |
-
### Embeddings models

These models can only be used with Embedding API requests.
[!INCLUDE [Embeddings](../includes/model-matrix/standard-embeddings.md)]
-#### Azure Government regions
-
-The following Embeddings models are available with [Azure Government](/azure/azure-government/documentation-government-welcome):
-
-|Model ID | Model Availability |
-|--|--|
-|`text-embedding-ada-002` (version 2) |US Gov Virginia<br>US Gov Arizona |
-
### DALL-E models

| Model ID | Feature Availability | Max Request (characters) |
ai-services Reproducible Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/reproducible-output.md
Previously updated : 04/09/2024 Last updated : 07/19/2024 recommendations: false
Reproducible output is only currently supported with the following:
### Supported models
-* `gpt-35-turbo` (1106) - [region availability](../concepts/models.md#gpt-35-turbo-model-availability)
-* `gpt-35-turbo` (0125) - [region availability](../concepts/models.md#gpt-35-turbo-model-availability)
-* `gpt-4` (1106-Preview) - [region availability](../concepts/models.md#gpt-4-and-gpt-4-turbo-model-availability)
-* `gpt-4` (0125-Preview) - [region availability](../concepts/models.md#gpt-4-and-gpt-4-turbo-model-availability)
+* `gpt-35-turbo` (1106)
+* `gpt-35-turbo` (0125)
+* `gpt-4` (1106-Preview)
+* `gpt-4` (0125-Preview)
+* `gpt-4` (turbo-2024-04-09)
+* `gpt-4o` (2024-05-13)
+
+Consult the [models page](../concepts/models.md) for the latest information on model regional availability.
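As a minimal sketch of how the `seed` parameter is used with one of these models from the Python `openai` package (the deployment name, API version, and prompt are assumptions, and repeatable output is only expected while the returned `system_fingerprint` stays the same):

```python
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",  # assumed; use a version that supports seed (see API Version below)
)

# Reuse the same seed and identical parameters across calls to encourage deterministic output.
response = client.chat.completions.create(
    model="gpt-4o",  # your deployment name (assumed)
    seed=42,
    temperature=0,
    messages=[{"role": "user", "content": "Tell me a short story about a lighthouse."}],
)

print(response.choices[0].message.content)
# Compare system_fingerprint across calls; determinism only holds while it stays the same.
print(response.system_fingerprint)
```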
### API Version
ai-services Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/quotas-limits.md
- ignite-2023 - references_regions Previously updated : 07/01/2024 Last updated : 07/18/2024
The following sections provide you with a quick guide to the default quotas and
| Max number of `/chat/completions` functions | 128 |
| Max number of `/chat/completions` tools | 128 |
| Maximum number of Provisioned throughput units per deployment | 100,000 |
-| Max files per Assistant/thread | 20 |
+| Max files per Assistant/thread | 10,000 when using the API or AI Studio. 20 when using Azure OpenAI Studio.|
| Max file size for Assistants & fine-tuning | 512 MB |
| Assistants token limit | 2,000,000 token limit |
| GPT-4o max images per request (# of images in the messages array/conversation history) | 10 |
M = million | K = thousand
#### Usage tiers
-Global Standard deployments use Azure's global infrastructure, dynamically routing customer traffic to the data center with best availability for the customer's inference requests. This enables more consistent latency for customers with low to medium levels of traffic. Customers with high sustained levels of usage may see more variability in response latency.
+Global Standard deployments use Azure's global infrastructure, dynamically routing customer traffic to the data center with best availability for the customer's inference requests. This enables more consistent latency for customers with low to medium levels of traffic. Customers with high sustained levels of usage might see more variability in response latency.
The Usage Limit determines the level of usage above which customers might see larger variability in response latency. A customer's usage is defined per model and is the total tokens consumed across all deployments in all subscriptions in all regions for a given tenant.
To minimize issues related to rate limits, it's a good idea to use the following
### How to request increases to the default quotas and limits
-Quota increase requests can be submitted from the [Quotas](./how-to/quota.md) page of Azure OpenAI Studio. Please note that due to overwhelming demand, quota increase requests are being accepted and will be filled in the order they are received. Priority will be given to customers who generate traffic that consumes the existing quota allocation, and your request may be denied if this condition isn't met.
+Quota increase requests can be submitted from the [Quotas](./how-to/quota.md) page of Azure OpenAI Studio. Note that due to overwhelming demand, quota increase requests are being accepted and will be filled in the order they are received. Priority will be given to customers who generate traffic that consumes the existing quota allocation, and your request might be denied if this condition isn't met.
-For other rate limits, please [submit a service request](../cognitive-services-support-options.md?context=/azure/ai-services/openai/context/context).
+For other rate limits, [submit a service request](../cognitive-services-support-options.md?context=/azure/ai-services/openai/context/context).
## Next steps
ai-services On Your Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/references/on-your-data.md
description: Learn how to use Azure OpenAI On Your Data Python & REST API.
Previously updated : 06/13/2024 Last updated : 07/18/2024 recommendations: false
completion = client.chat.completions.create(
print(completion.model_dump_json(indent=2))
+# render the citations
+
+content = completion.choices[0].message.content
+context = completion.choices[0].message.context
+for citation_index, citation in enumerate(context["citations"]):
+ citation_reference = f"[doc{citation_index + 1}]"
+ url = "https://example.com/?redirect=" + citation["url"] # replace with actual host and encode the URL
+ filepath = citation["filepath"]
+ title = citation["title"]
+ snippet = citation["content"]
+ chunk_id = citation["chunk_id"]
+ replaced_html = f"<a href='{url}' title='{title}\n{snippet}'>(See from file {filepath}, Part {chunk_id})</a>"
+ content = content.replace(citation_reference, replaced_html)
+print(content)
```

# [REST](#tab/rest)
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whats-new.md
Refer to our [Default safety policy documentation](./concepts/default-safety-pol
API version `2024-06-01` is the latest GA data plane inference API release. It replaces API version `2024-02-01` and adds support for:
- embeddings `encoding_format` & `dimensions` parameters.
-- chat completions `logprops` & `top_logprobs` parameters.
+- chat completions `logprobs` & `top_logprobs` parameters.
Refer to our [data plane inference reference documentation](./reference.md) for more information.
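For a rough idea of what the new parameters look like from the Python `openai` package (deployment names and sample inputs are assumptions; the reference documentation linked above is authoritative):

```python
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

# Chat completions: request log probabilities for the generated tokens.
chat = client.chat.completions.create(
    model="gpt-4o",  # your chat deployment name (assumed)
    messages=[{"role": "user", "content": "Say hello."}],
    logprobs=True,
    top_logprobs=2,
)
print(chat.choices[0].logprobs)

# Embeddings: request a specific encoding format and a reduced dimension count.
embedding = client.embeddings.create(
    model="text-embedding-3-large",  # your embeddings deployment name (assumed)
    input="The quick brown fox",
    encoding_format="float",
    dimensions=256,
)
print(len(embedding.data[0].embedding))
```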
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/overview.md
Previously updated : 1/21/2024 Last updated : 7/18/2024
Fast transcription API is used to transcribe audio files with returning results
- Video translation

> [!NOTE]
-> Fast transcription API is only available via the speech to text REST API version 3.3.
+> Fast transcription API is only available via the speech to text REST API version 2024-05-15-preview.
To get started with fast transcription, see [use the fast transcription API (preview)](fast-transcription-create.md).
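For a rough sense of what a call to the preview REST API looks like from Python with `requests`, here's a hedged sketch; the endpoint path, form field names, and response shape below are assumptions based on the 2024-05-15-preview API, so treat the linked how-to article as authoritative:

```python
import json
import requests

# Placeholder region and key for your Speech resource.
endpoint = "https://<your-region>.api.cognitive.microsoft.com"
key = "<your-speech-key>"

url = f"{endpoint}/speechtotext/transcriptions:transcribe?api-version=2024-05-15-preview"
headers = {"Ocp-Apim-Subscription-Key": key}

# The transcription definition is sent as a JSON form field alongside the audio file.
definition = {"locales": ["en-US"]}

with open("sample.wav", "rb") as audio_file:
    files = {
        "audio": ("sample.wav", audio_file, "audio/wav"),
        "definition": (None, json.dumps(definition), "application/json"),
    }
    response = requests.post(url, headers=headers, files=files)

response.raise_for_status()
result = response.json()
# Field name assumed from the preview API: combined transcript per channel.
for phrase in result.get("combinedPhrases", []):
    print(phrase.get("text"))
```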
ai-services Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-to-text.md
In this overview, you learn about the benefits and capabilities of the speech to text feature of the Speech service, which is part of Azure AI services. Speech to text can be used for [real-time](#real-time-speech-to-text), [batch transcription](#batch-transcription-api), or [fast transcription](./fast-transcription-create.md) of audio streams into text.

> [!NOTE]
-> To compare pricing of [real-time](#real-time-speech-to-text) to [batch transcription](#batch-transcription-api), see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+> To compare pricing of [real-time](#real-time-speech-to-text), [batch transcription](#batch-transcription-api), and [fast transcription](./fast-transcription-create.md), see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
For a full list of available speech to text languages, see [Language and voice support](language-support.md?tabs=stt).
ai-studio Deployments Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/deployments-overview.md
# Overview: Deploy models, flows, and web apps with Azure AI Studio

Azure AI Studio supports deploying large language models (LLMs), flows, and web apps. Deploying an LLM or flow makes it available for use in a website, an application, or other production environments. This typically involves hosting the model on a server or in the cloud, and creating an API or other interface for users to interact with the model.
ai-studio Deploy Models Serverless Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-serverless-availability.md
# Region availability for models in serverless API endpoints | Azure AI Studio
-
In this article, you learn about which regions are available for each of the models supporting serverless API endpoint deployments. Certain models in the model catalog can be deployed as a serverless API with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need. This deployment option doesn't require quota from your subscription.
ai-studio Deploy Models Serverless Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-serverless-connect.md
# Consume serverless API endpoints from a different Azure AI Studio project or hub
-
In this article, you learn how to configure an existing serverless API endpoint in a different project or hub than the one that was used to create the deployment. [Certain models in the model catalog](deploy-models-serverless-availability.md) can be deployed as serverless APIs. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need. This deployment option doesn't require quota from your subscription.
Follow these steps to create a connection:
## Related content

- [What is Azure AI Studio?](../what-is-ai-studio.md)
-- [Azure AI FAQ article](../faq.yml)
+- [Azure AI FAQ article](../faq.yml)
ai-studio Deploy Models Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-serverless.md
description: Learn to deploy models as serverless APIs, using Azure AI Studio.
Previously updated : 5/21/2024 Last updated : 07/18/2024
# Deploy models as serverless APIs
-
In this article, you learn how to deploy a model from the model catalog as a serverless API with pay-as-you-go token based billing. [Certain models in the model catalog](deploy-models-serverless-availability.md) can be deployed as a serverless API with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need. This deployment option doesn't require quota from your subscription.
In this article, you learn how to deploy a model from the model catalog as a ser
You can use any compatible web browser to [deploy ARM templates](../../azure-resource-manager/templates/deploy-portal.md) in the Microsoft Azure portal or use any of the deployment tools. This tutorial uses the [Azure CLI](/cli/azure/).
-## Subscribe your project to the model offering
-
-For models offered through the Azure Marketplace, you can deploy them to serverless API endpoints to consume their predictions. If it's your first time deploying the model in the project, you have to subscribe your project for the particular model offering from the Azure Marketplace. Each project has its own subscription to the particular Azure Marketplace offering of the model, which allows you to control and monitor spending.
-
-> [!NOTE]
-> Models offered through the Azure Marketplace are available for deployment to serverless API endpoints in specific regions. Check [Model and region availability for Serverless API deployments](deploy-models-serverless-availability.md) to verify which models and regions are available. If the one you need is not listed, you can deploy to a workspace in a supported region and then [consume serverless API endpoints from a different workspace](deploy-models-serverless-connect.md).
+## Find your model and model ID in the model catalog
1. Sign in to [Azure AI Studio](https://ai.azure.com).
-1. Ensure your account has the **Azure AI Developer** role permissions on the resource group, or that you meet the [permissions required to subscribe to model offerings](#permissions-required-to-subscribe-to-model-offerings).
+1. For models offered through the Azure Marketplace, ensure that your account has the **Azure AI Developer** role permissions on the resource group, or that you meet the [permissions required to subscribe to model offerings](#permissions-required-to-subscribe-to-model-offerings).
+
+ Models that are offered by non-Microsoft providers (for example, Llama and Mistral models) are billed through the Azure Marketplace. For such models, you're required to subscribe your project to the particular model offering. Models that are offered by Microsoft (for example, Phi-3 models) don't have this requirement, as billing is done differently. For details about billing for serverless deployment of models in the model catalog, see [Billing for serverless APIs](model-catalog-overview.md#billing).
1. Select **Model catalog** from the left sidebar and find the model card of the model you want to deploy. In this article, you select a **Meta-Llama-3-8B-Instruct** model.
For models offered through the Azure Marketplace, you can deploy them to serverl
:::image type="content" source="../media/deploy-monitor/serverless/model-card.png" alt-text="A screenshot showing a model's details page." lightbox="../media/deploy-monitor/serverless/model-card.png"::: +
+The next section covers the steps for subscribing your project to a model offering. You can skip this section and go to [Deploy the model to a serverless API endpoint](#deploy-the-model-to-a-serverless-api-endpoint), if you're deploying a Microsoft model.
+
+## Subscribe your project to the model offering
+
+For non-Microsoft models offered through the Azure Marketplace, you can deploy them to serverless API endpoints to consume their predictions. If it's your first time deploying the model in the project, you have to subscribe your project for the particular model offering from the Azure Marketplace. Each project has its own subscription to the particular Azure Marketplace offering of the model, which allows you to control and monitor spending.
+
+> [!NOTE]
+> Models offered through the Azure Marketplace are available for deployment to serverless API endpoints in specific regions. Check [Model and region availability for Serverless API deployments](deploy-models-serverless-availability.md) to verify which models and regions are available. If the one you need is not listed, you can deploy to a workspace in a supported region and then [consume serverless API endpoints from a different workspace](deploy-models-serverless-connect.md).
+ 1. Create the model's marketplace subscription. When you create a subscription, you accept the terms and conditions associated with the model offer. # [AI Studio](#tab/azure-ai-studio)
- 1. On the model's **Details** page, select **Deploy** and then select **Serverless API** to open the deployment wizard.
+ 1. On the model's **Details** page, select **Deploy** and then select **Serverless API with Azure AI Content Safety (preview)** to open the deployment wizard.
- 1. Select the project in which you want to deploy your models. Notice that not all the regions are supported.
+ 1. Select the project in which you want to deploy your models. To use the serverless API model deployment offering, your project must belong to one of the [regions that are supported for serverless deployment](deploy-models-serverless-availability.md) for the particular model.
:::image type="content" source="../media/deploy-monitor/serverless/deploy-pay-as-you-go.png" alt-text="A screenshot showing how to deploy a model with the serverless API option." lightbox="../media/deploy-monitor/serverless/deploy-pay-as-you-go.png":::
For models offered through the Azure Marketplace, you can deploy them to serverl
}
```
-1. Once you sign up the project for the particular Azure Marketplace offering, subsequent deployments of the same offering in the same project don't require subscribing again.
+1. Once you subscribe the project for the particular Azure Marketplace offering, subsequent deployments of the same offering in the same project don't require subscribing again.
1. At any point, you can see the model offers to which your project is currently subscribed:
For models offered through the Azure Marketplace, you can deploy them to serverl
## Deploy the model to a serverless API endpoint
-Once you've created a model's subscription, you can deploy the associated model to a serverless API endpoint. The serverless API endpoint provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance organizations need. This deployment option doesn't require quota from your subscription.
+Once you've created a subscription for a non-Microsoft model, you can deploy the associated model to a serverless API endpoint. For Microsoft models (such as Phi-3 models), you don't need to create a subscription.
-In this article, you create an endpoint with name **meta-llama3-8b-qwerty**.
+The serverless API endpoint provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance organizations need. This deployment option doesn't require quota from your subscription.
+
+In this section, you create an endpoint with the name **meta-llama3-8b-qwerty**.
1. Create the serverless endpoint # [AI Studio](#tab/azure-ai-studio)
- 1. From the previous wizard, select **Deploy** (if you've just subscribed the project to the model offer in the previous section), or select **Continue to deploy** (if your deployment wizard had the note *You already have an Azure Marketplace subscription for this project*).
+ 1. To deploy a Microsoft model that doesn't require subscribing to a model offering:
+ 1. Select **Deploy** and then select **Serverless API with Azure AI Content Safety (preview)** to open the deployment wizard.
+ 1. Select the project in which you want to deploy your model. Notice that not all the regions are supported.
+
+ 1. Alternatively, for a non-Microsoft model that requires a model subscription, if you've just subscribed your project to the model offer in the previous section, continue by selecting **Deploy**. Otherwise, select **Continue to deploy** (if your deployment wizard had the note *You already have an Azure Marketplace subscription for this project*).
:::image type="content" source="../media/deploy-monitor/serverless/deploy-pay-as-you-go-subscribed-project.png" alt-text="A screenshot showing a project that is already subscribed to the offering." lightbox="../media/deploy-monitor/serverless/deploy-pay-as-you-go-subscribed-project.png":::
In this article, you create an endpoint with name **meta-llama3-8b-qwerty**.
> [!TIP]
> If you're using prompt flow in the same project or hub where the model was deployed, you still need to create the connection.
-## Using the serverless API endpoint
+## Use the serverless API endpoint
Models deployed to serverless API endpoints in Azure Machine Learning and Azure AI Studio support the [Azure AI Model Inference API](../reference/reference-model-inference-api.md), which exposes a common set of capabilities for foundational models and can be used by developers to consume predictions from a diverse set of models in a uniform and consistent way.
-Read more about the [capabilities of this API](../reference/reference-model-inference-api.md#capabilities) and how [you can leverage it when building applications](../reference/reference-model-inference-api.md#getting-started).
+Read more about the [capabilities of this API](../reference/reference-model-inference-api.md#capabilities) and how [you can use it when building applications](../reference/reference-model-inference-api.md#getting-started).
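As an illustrative sketch (not the article's own sample), a serverless API endpoint that supports the Azure AI Model Inference API can be called from Python with the `azure-ai-inference` package; the endpoint URL and key below are placeholders you'd copy from the deployment's details page:

```python
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

# Placeholder endpoint URL and key, copied from the serverless deployment's details page.
client = ChatCompletionsClient(
    endpoint="https://<your-endpoint-name>.<region>.inference.ai.azure.com",
    credential=AzureKeyCredential("<your-endpoint-key>"),
)

response = client.complete(
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="Explain what a serverless API endpoint is in one sentence."),
    ],
)

print(response.choices[0].message.content)
```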
## Delete endpoints and subscriptions
az resource delete --name <resource-name>
## Cost and quota considerations for models deployed as serverless API endpoints
-Models deployed as serverless API endpoints are offered through the Azure Marketplace and integrated with Azure AI Studio for use. You can find the Azure Marketplace pricing when deploying or fine-tuning the models.
+Quota is managed per deployment. Each deployment has a rate limit of 200,000 tokens per minute and 1,000 API requests per minute. However, we currently limit one deployment per model per project. Contact Microsoft Azure Support if the current rate limits aren't sufficient for your scenarios.
+
+#### Cost for Microsoft models
+
+You can find the pricing information on the __Pricing and terms__ tab of the deployment wizard when deploying Microsoft models (such as Phi-3 models) as serverless API endpoints.
+
+#### Cost for non-Microsoft models
+
+Non-Microsoft models deployed as serverless API endpoints are offered through the Azure Marketplace and integrated with Azure AI Studio for use. You can find the Azure Marketplace pricing when deploying or fine-tuning these models.
Each time a project subscribes to a given offer from the Azure Marketplace, a new resource is created to track the costs associated with its consumption. The same resource is used to track costs associated with inference and fine-tuning; however, multiple meters are available to track each scenario independently.
For more information on how to track costs, see [Monitor costs for models offere
:::image type="content" source="../media/deploy-monitor/serverless/costs-model-as-service-cost-details.png" alt-text="A screenshot showing different resources corresponding to different model offers and their associated meters." lightbox="../media/deploy-monitor/serverless/costs-model-as-service-cost-details.png":::
-Quota is managed per deployment. Each deployment has a rate limit of 200,000 tokens per minute and 1,000 API requests per minute. However, we currently limit one deployment per model per project. Contact Microsoft Azure Support if the current rate limits aren't sufficient for your scenarios.
-
## Permissions required to subscribe to model offerings

Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Studio. To perform the steps in this article, your user account must be assigned the __Owner__, __Contributor__, or __Azure AI Developer__ role for the Azure subscription. Alternatively, your account can be assigned a custom role that has the following permissions:
Azure role-based access controls (Azure RBAC) are used to grant access to operat
For more information on permissions, see [Role-based access control in Azure AI Studio](../concepts/rbac-ai-studio.md).
-## Next step
+## Related content
+* [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md)
* [Fine-tune a Meta Llama 2 model in Azure AI Studio](fine-tune-model-llama.md)
aks Manage Ssh Node Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/manage-ssh-node-access.md
To improve security and support your corporate security requirements or strategy
When you disable SSH at cluster creation time, it takes effect after the cluster is created. However, when you disable SSH on an existing cluster or node pool, AKS doesn't automatically disable SSH. At any time, you can choose to perform a nodepool upgrade operation. The disable/enable SSH keys operation takes effect after the node image update is complete.
+> [!NOTE]
+> When you disable SSH at the cluster level, it applies to all existing node pools. Any node pools created after this operation will have SSH enabled by default, and you'll need to run these commands again to disable SSH on them.
+
|SSH parameter |Description |
|--|--|
|`disabled` |The SSH service is disabled. |
aks Use Kms Etcd Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-kms-etcd-encryption.md
description: Learn how to use Key Management Service (KMS) etcd encryption with
Previously updated : 06/26/2024 Last updated : 07/19/2024

# Add Key Management Service etcd encryption to an Azure Kubernetes Service cluster
Turn off KMS on an existing cluster and release the key vault:
az aks update --name myAKSCluster --resource-group MyResourceGroup --disable-azure-keyvault-kms
```
+Use the following command to update all secrets. If you don't run this command, secrets that were created earlier are still encrypted with the previous key. For larger clusters, you might want to subdivide the secrets by namespace or create an update script.
+
+```azurecli-interactive
+kubectl get secrets --all-namespaces -o json | kubectl replace -f -
+```
+
### Change the key vault mode

Update the key vault from public to private:
api-management Developer Portal Extend Custom Functionality https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-extend-custom-functionality.md
The following table summarizes two options, with links to more detail.
> [!NOTE]
> [Self-hosting the developer portal](developer-portal-self-host.md) is an extensibility option for customers who need to customize the source code of the entire portal core. It gives complete flexibility for customizing the portal experience, but requires advanced configuration. With self-hosting, you're responsible for managing the complete code lifecycle: fork the code base, develop, deploy, host, patch, and upgrade.

## Use Custom HTML code widget
api-management Developer Portal Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-overview.md
This article introduces features of the developer portal, the types of content t
:::image type="content" source="media/developer-portal-overview/cover.png" alt-text="Screenshot of the API Management developer portal.":::
-
## Developer portal architectural concepts

The portal components can be logically divided into two categories: *code* and *content*.
The developer portal's administrative interface provides a visual editor for pub
[!INCLUDE [api-management-developer-portal-add](../../includes/api-management-developer-portal-add.md)]
+
### Layouts and pages
api-management Developer Portal Wordpress Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-wordpress-plugin.md
+
+ Title: Customize developer portal on WordPress - Azure API Management
+description: Configure a WordPress plugin (preview) for the developer portal in your API Management instance. Use WordPress customizations to enhance the developer portal.
+++++ Last updated : 07/18/2024+++
+# Customize the API Management developer portal on WordPress
++
+This article shows how to configure an open-source developer portal plugin (preview) to customize the API Management developer portal on WordPress. With the plugin, turn any WordPress site into a developer portal. Take advantage of site capabilities in WordPress to customize and add features to your developer portal including localization, collapsible and expandable menus, custom stylesheets, file downloads, and more.
+
+In this article, you create a WordPress site on Azure App Service and configure the developer portal plugin on the WordPress site. Microsoft Entra ID is configured for authentication to the WordPress site and the developer portal.
+
+## Prerequisites
+
+* An API Management instance. If needed, [create an instance](get-started-create-service-instance.md).
+ > [!NOTE]
+ > Currently, the plugin isn't supported in the API Management v2 tiers.
+* Enable and publish the developer portal. For steps, see [Tutorial: Access and customize the developer portal](api-management-howto-developer-portal-customize.md).
+* Permissions to create an app registration in a Microsoft Entra tenant associated with your Azure subscription.
+* Installation files for the developer portal WordPress plugin and customized WordPress theme from the [plugin repo](https://aka.ms/apim/wpplugin). Download the following zip files from the [dist](https://github.com/Azure/AzureAPIM-Wordpress-plugin/tree/main/dist) folder in the repo:
+ * `apim-devportal.zip` - Plugin file
+ * `twentytwentyfour.zip` - Theme file
+
+## Step 1: Create WordPress on App Service
+
+For this scenario, you create a managed WordPress site hosted on Azure App Service. The following are brief steps:
+
+1. In the Azure portal, navigate to [https://portal.azure.com/#create/WordPress.WordPress](https://portal.azure.com/#create/WordPress.WordPress).
+
+1. On the **Create WordPress on App Service** page, in the **Basics** tab, enter your project details.
+
+ Record the WordPress admin username and password in a safe place. These credentials are required to sign into the WordPress admin site and install the plugin in a later step.
+
+1. On the **Add-ins** tab:
+
+ 1. Select the recommended default values for **Email with Azure Communication Services**, **Azure CDN**, and **Azure Blob Storage**.
+ 1. In **Virtual network**, select either the **New** value or an existing virtual network.
+1. On the **Deployment** tab, leave **Add staging slot** unselected.
+1. Select **Review + create** to run final validation.
+1. Select **Create** to complete app service deployment.
+
+It can take several minutes for the app service to deploy.
+
+## Step 2: Create a new Microsoft Entra app registration
+
+In this step, create a new Microsoft Entra app. In later steps, you configure this app as an identity provider for authentication to your app service and to the developer portal in your API Management instance.
+
+1. In the [Azure portal](https://portal.azure.com), navigate to **App registrations** > **+ New registration**.
+1. Provide the new app name, and in **Supported account types**, select **Accounts in this organizational directory only**. Select **Register**.
+1. On the **Overview** page, copy and safely store the **Application (client) Id** and **Directory (tenant) Id**. You need these values in later steps to configure authentication to your API Management instance and app service.
+ :::image type="content" source="media/developer-portal-wordpress-plugin/app-registration-overview.png" alt-text="Screenshot of Overview page of app registration in the portal.":::
+
+1. In the left menu, under **Manage**, select **Authentication** > **+ Add a platform**.
+1. On the **Configure platforms** page, select **Web**.
+1. On the **Configure Web** page, enter the following redirect URI, substituting the name of your app service, and select **Configure**:
+
+ `https://<app-service-name>.azurewebsites.net/.auth/login/aad/callback`
+
+1. Select **+ Add a platform** again. Select **Single-page application**.
+1. On the **Configure single-page application** page, enter the following redirect URI, substituting the name of your API Management instance, and select **Configure**:
+
+ `https://<apim-instance-name>.developer.azure-api.net/signin`
+
+1. On the **Authentication** page, under **Single-page application**, select **Add URI** and enter the following URI, substituting the name of your API Management instance:
+
+ `https://<apim-instance-name>.developer.azure-api.net/`
+
+1. Under **Implicit grant and hybrid flows**, select **ID tokens** and select **Save**.
+1. In the left menu, under **Manage**, select **Token configuration** > **+ Add optional claim**.
+1. On the **Add optional claim** page, select **ID** and then select the following claims: **email, family_name, given_name, onprem_sid, preferred_username, upn**. Select **Add**.
+1. When prompted, select **Turn on the Microsoft Graph email, profile permission**. Select **Add**.
+1. In the left menu, under **Manage** select **API permissions** and confirm that the following Microsoft Graph permissions are present: **email, profile, User.Read**.
+
+ :::image type="content" source="media/developer-portal-wordpress-plugin/required-api-permissions.png" alt-text="Screenshot of API permissions in the portal.":::
+
+1. In the left menu, under **Manage**, select **Certificates & secrets** > **+ New client secret**.
+1. Configure settings for the secret and select **Add**. Copy and safely store the secret's **Value** immediately after it's generated. You need this value in later steps to add the application to your API Management instance and app service for authentication.
+1. In the left menu, under **Manage**, select **Expose an API**. Note the default **Application ID URI**. Optionally add custom scopes if needed.
+
+## Step 3: Enable authentication to the app service
+
+In this step, add the Microsoft Entra app registration as an identity provider for authentication to the WordPress app service.
+
+1. In the [portal](https://portal.azure.com), navigate to the WordPress app service.
+1. In the left menu, under **Settings**, select **Authentication** > **Add identity provider**.
+1. On the **Basics** tab, in **Identity provider**, select **Microsoft**.
+1. Under **App registration**, select **Provide the details of an existing app registration**.
+ 1. Enter the **Application (client) Id** and **Client secret** from the app registration that you created in the previous step.
+ 1. In **Issuer URL**, enter either of the following values for the authentication endpoint: `https://login.microsoftonline.com/<tenant-id>` or `https://sts.windows.net/<tenant-id>`. Replace `<tenant-id>` with the **Directory (tenant) Id** from the app registration.
+ > [!IMPORTANT]
+ > Do not use the version 2.0 endpoint for the issuer URL (URL ending in `/v2.0`).
+1. In **Allowed token audiences**, enter the **Application ID URI** from the app registration. Example: `api://<app-id>`.
+1. Under **Additional checks**, select values appropriate for your environment, or use the default values.
+1. Accept the default values for the remaining settings and select **Add**.
+
+The identity provider is added to the app service.
+
+## Step 4: Enable authentication to the API Management developer portal
+
+Configure the same Microsoft Entra app registration as an identity provider for the API Management developer portal.
+
+1. In the [portal](https://portal.azure.com), navigate to your API Management instance.
+1. In the left menu, under **Developer portal**, select **Identities** > **+ Add**.
+1. On the **Add identity provider** page, select **Azure Active Directory** (Microsoft Entra ID).
+1. Enter the **Client Id**, **Client secret**, and **Signin tenant** values from the app registration that you created in a previous step. The **Signin tenant** is the **Directory (tenant) Id** from the app registration.
+1. In **Client library**, select **MSAL**.
+1. Accept default values for the remaining settings and select **Add**.
+1. [Republish the developer portal](developer-portal-overview.md#publish-the-portal) to apply the changes.
+1. Test the authentication by signing into the developer portal at the following URL, substituting the name of your API Management instance: `https://<apim-instance-name>.developer.azure-api.net/signin`. Select the **Azure Active Directory** (Microsoft Entra ID) button at the bottom to sign in.
+
+ When you open it for the first time, you may be prompted to consent to specific permissions.
+
+## Step 5: Configure developer portal settings in API Management
+
+Update the settings of the developer portal in the API Management instance to enable CORS and to include the app service site as a portal origin.
+
+1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance.
+1. In the left menu, under **Developer portal**, select **Portal overview**.
+1. On the **Portal overview** tab, select **Enable CORS**.
+1. In the left menu, under **Developer portal**, select **Portal settings**.
+1. On the **Self-hosted portal configuration** tab, enter the hostname of your WordPress on App Service site as a portal origin, substituting the name of your app service in the following URL: `https://<app-service-name>.azurewebsites.net`
+1. [Republish the developer portal](developer-portal-overview.md#publish-the-portal) to apply the changes.
+
+Also, update the `cors` policy configuration in the API Management instance to add the app service site as an allowed origin. This value is needed to allow calls from the developer portal's test console on the WordPress site.
+
+Add the following `origin` value in your `cors` policy configuration:
+
+```xml
+<cors ...>
+ <allowed-origins>
+ [...]
+ <origin>https://<app-service-name>.azurewebsites.net</origin>
+ </allowed-origins>
+</cors>
+```
+
+Learn more about how to [set or edit policies](set-edit-policies.md).
+
+## Step 6: Navigate to WordPress admin site and upload the customized theme
+
+In this step, you upload the customized theme for the API Management developer portal to the WordPress admin site.
+
+> [!IMPORTANT]
+> We recommend that you upload the theme provided in the plugin repo. The theme is based on the Twenty Twenty Four theme and is customized to support the developer portal functionality in WordPress. If you choose to use a different theme, some functionality may not work as expected or require additional customization.
+
+1. Open the WordPress admin website at the following URL, substituting the name of your app service: `http://<app-service-name>.azurewebsites.net/wp-admin`
+
+ When you open it for the first time, you may be prompted to consent to specific permissions.
+
+1. Sign into the WordPress admin site using the username and password that you entered while creating WordPress on App Service.
+1. In the left menu, select **Appearance** > **Themes** and then **Add New Theme**.
+1. Select **Upload Theme**. Select **Choose File** to upload the API Management developer portal theme zip file that you downloaded previously. Select **Install Now**.
+1. In the next screen, select **Replace active with uploaded**.
+1. If prompted, select **Activate**.
+
+> [!NOTE]
+> If your WordPress site includes a caching plug-in, clear the cache after activating the theme to ensure that the changes take effect.
+
+## Step 7: Install the developer portal plugin
+
+1. In the WordPress admin site, in the left menu, select **Plugins** > **Add New Plugin**.
+1. Select **Upload Plugin**. Select **Choose File** to upload the API Management developer portal plugin zip file (`apim-devportal.zip`) that you downloaded previously. Select **Install Now**.
+1. After successful installation, select **Activate Plugin**.
+1. In the left menu, select **Azure API Management Developer Portal**.
+1. On the **APIM Settings** page, enter the following settings and select **Save Changes**:
+ * **APIM service name** - Name of your API Management instance
+ * Enable **Create default pages** and **Create navigation menu**
+
+## Step 8: Add a custom stylesheet
+
+Add a custom stylesheet for the API Management developer portal.
+
+ 1. In the WordPress admin site, in the left menu, select **Appearance** > **Themes**.
+ 1. Select **Customize** and then navigate to **Styles**.
+ 1. Select the pencil icon (**Edit Styles**).
+ 1. In the **Styles pane**, select **More** (three dots) > **Additional CSS**.
+ 1. In **Additional CSS**, enter the following CSS:
+
+ ```css
+ .apim-table {
+ width: 100%;
+ border: 1px solid #D1D1D1;
+ border-radius: 4px;
+ border-spacing: 0;
+ }
+
+ .apim-table th {
+ background: #E6E6E6;
+ font-weight: bold;
+ text-align: left;
+ }
+
+ .apim-table th,
+ .apim-table td {
+ padding: .7em 1em;
+ }
+
+ .apim-table td {
+ border-top: #E6E6E6 solid 1px;
+ }
+
+ .apim-input,
+ .apim-button,
+ .apim-nav-link-btn {
+ border-radius: .33rem;
+ padding: 0.6rem 1rem;
+ }
+
+ .apim-button,
+ .apim-nav-link-btn {
+ background-color: var(--wp--preset--color--contrast);
+ border-color: var(--wp--preset--color--contrast);
+ border-width: 0;
+ color: var(--wp--preset--color--base);
+ font-size: var(--wp--preset--font-size--small);
+ font-weight: 500;
+ text-decoration: none;
+ cursor: pointer;
+ transition: all .25s ease;
+ }
+
+ .apim-nav-link-btn:hover {
+ background: var(--wp--preset--color--base);
+ color: var(--wp--preset--color--contrast);
+ }
+
+ .apim-button:hover {
+ background: var(--wp--preset--color--vivid-cyan-blue);
+ }
+
+ .apim-button:disabled {
+ background: var(--wp--preset--color--contrast-2);
+ cursor: not-allowed;
+ }
+
+ .apim-label {
+ display: block;
+ margin-bottom: 0.5rem;
+ }
+
+ .apim-input {
+ border: solid 1px var(--wp--preset--color--contrast);
+ }
+
+ .apim-grid {
+ display: grid;
+ grid-template-columns: 11em max-content;
+ }
+
+ .apim-link,
+ .apim-nav-link {
+ text-align: inherit;
+ font-size: inherit;
+ padding: 0;
+ background: none;
+ border: none;
+ font-weight: inherit;
+ cursor: pointer;
+ text-decoration: none;
+ color: var(--wp--preset--color--vivid-cyan-blue);
+ }
+
+ .apim-nav-link {
+ font-weight: 500;
+ }
+
+ .apim-link:hover,
+ .apim-nav-link:hover {
+ text-decoration: underline;
+ }
+
+ #apim-signIn {
+ display: flex;
+ align-items: center;
+ gap: 24px;
+ }
+ ```
+1. **Save** the changes. Select **Save** again to save the changes to the theme.
+1. **Log Out** of the WordPress admin site.
++
+## Step 9: Sign into the API Management developer portal deployed on WordPress
+
+Sign into the WordPress site to see your new API Management developer portal deployed on WordPress and hosted on App Service.
+
+> [!NOTE]
+> You can only sign in to the developer portal on WordPress using Microsoft Entra ID credentials. Basic authentication isn't supported.
+
+1. In a new browser window, navigate to your WordPress site, substituting the name of your app service in the following URL: `https://<app-service-name>.azurewebsites.net`
+1. When prompted, sign in using Microsoft Entra ID credentials for a developer account.
++
+You can now use the following features of the API Management developer portal:
+
+* Sign into the portal
+* See list of APIs
+* Navigate to API details page and see list of operations
+* Test the API using valid API keys
+* See list of products
+* Subscribe to a product and generate subscription keys
+* Navigate to **Profile** tab with account and subscription details
+* Sign out of the portal
+
+The following screenshot shows a sample page of the API Management developer portal hosted on WordPress.
+
+
+## Troubleshooting
+
+### I don't see the latest developer portal pages on the WordPress site
+
+If you don't see the latest developer portal pages when you visit the WordPress site, check that the developer portal plugin is installed, activated, and configured in the WordPress admin site. See [Install the developer portal plugin](#step-7-install-the-developer-portal-plugin) for steps.
+
+You might also need to clear the cache on your WordPress site or in the CDN, if one is configured. Alternatively, you might need to clear the cache on your browser.
+
+### I'm having problems signing in or out of the developer portal
+
+If you're having problems signing in or out of the developer portal, clear the browser cache, or view the developer portal site in a separate browser session (using incognito or private browsing mode).
+
+### I don't see a sign-in button on the developer portal navigation bar
+
+If you're using a custom theme different from the one provided in the plugin repo, you may need to add the sign-in functionality to the navigation bar manually. On the Home page, add the following shortcode block: `[SignInButton]`. [Learn more](https://wordpress.org/documentation/article/shortcode-block/) in the WordPress documentation.
+
+You can also sign in or sign out manually by entering the following URLs in your browser:
+
+* Sign in: `https://<app-service-name>.azurewebsites.net/.auth/login/aad`
+* Sign out: `https://<app-service-name>.azurewebsites.net/.auth/logout`
++
+## Related content
+
+- [Create a WordPress site on Azure App Service](../app-service/quickstart-wordpress.md)
+- [Customize the developer portal](api-management-howto-developer-portal-customize.md)
+- [Authorize developer accounts by using Microsoft Entra ID in Azure API Management](api-management-howto-aad.md).
automation Automation Runbook Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-runbook-types.md
Title: Azure Automation runbook types
description: This article describes the types of runbooks that you can use in Azure Automation and considerations for determining which type to use. Previously updated : 03/23/2024 Last updated : 07/19/2024
The PowerShell version is determined by the **Runtime version** specified (that
The same Azure sandbox and Hybrid Runbook Worker can execute multiple **PowerShell** runbooks targeting different runtime versions side by side.

> [!NOTE]
-> - Currently, PowerShell 7.2 runtime version is supported for both Cloud and Hybrid jobs in all Public regions except Central India, UAE Central, Israel Central, Italy North, Germany North and Gov clouds.
+> - Currently, PowerShell 7.2 runtime version is supported for both Cloud and Hybrid jobs in all Public regions except Central India, UAE Central, Israel Central, Italy North, and Germany North.
> - At the time of runbook execution, if you select **Runtime Version** as **7.2**, PowerShell modules targeting the 7.2 runtime version are used, and if you select **Runtime Version** as **5.1**, PowerShell modules targeting the 5.1 runtime version are used. This applies to PowerShell 7.1 (preview) modules and runbooks. Ensure that you select the right Runtime Version for modules.
The following are the current limitations and known issues with PowerShell runbo
**Limitations**

> [!NOTE]
-> Currently, PowerShell 7.2 runtime version is supported for both Cloud and Hybrid jobs in all Public regions except Central India, UAE Central, Israel Central, Italy North, Germany North and Gov clouds.
+> Currently, PowerShell 7.2 runtime version is supported for both Cloud and Hybrid jobs in all Public regions except Central India, UAE Central, Israel Central, Italy North, and Germany North.
- For the PowerShell 7.2 runtime version, the module activities aren't extracted for the imported modules. Use the [Azure Automation extension for VS Code](automation-runbook-authoring.md) to simplify the runbook authoring experience.
- PowerShell 7.x doesn't support workflows. For more information, see [PowerShell workflow](/powershell/scripting/whats-new/differences-from-windows-powershell#powershell-workflow).
automation Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new.md
description: Significant updates to Azure Automation updated each month.
Previously updated : 05/06/2024 Last updated : 07/19/2024
Azure Automation receives improvements on an ongoing basis. To stay up to date w
This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [Archive for What's new in Azure Automation](whats-new-archive.md).
+## July 2024
+
+### General Availability: Azure Automation supports PowerShell 7.2 runbooks in Government clouds
+
+Azure Automation now supports PowerShell 7.2 runbooks in Government clouds.
+
## April 2024

### Changes in Process Automation subscription and service limits and quotas
azure-app-configuration Concept Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-github-action.md
- Title: Sync your GitHub repository to App Configuration
-description: Use GitHub Actions to automatically update your App Configuration instance when you update your GitHub repository.
-- Previously updated : 05/28/2020----
-# Sync your GitHub repository to App Configuration
-
-Teams that want to continue using their existing source control practices can use GitHub Actions to automatically sync their GitHub repository with their App Configuration store. This allows you to make changes to your config files as you normally would, while getting App Configuration benefits like: <br>
-&nbsp;&nbsp;&nbsp;&nbsp;• Centralized configuration outside of your code <br>
-&nbsp;&nbsp;&nbsp;&nbsp;• Updating configuration without redeploying your entire app <br>
-&nbsp;&nbsp;&nbsp;&nbsp;• Integration with services like Azure App Service and Functions.
-
-A GitHub Actions [workflow](https://docs.github.com/en/actions/learn-github-actions/introduction-to-github-actions#the-components-of-github-actions) defines an automated process in a GitHub repository. The *Azure App Configuration Sync* Action triggers updates to an App Configuration instance when changes are made to the source repository. It uses a YAML (.yml) file found in the `/.github/workflows/` path of your repository to define the steps and parameters. You can trigger configuration updates when pushing, reviewing, or branching app configuration files just as you do with app code.
-
-The GitHub [documentation](https://docs.github.com/en/actions/learn-github-actions/introduction-to-github-actions) provides in-depth view of GitHub workflows and actions.
-
-## Enable GitHub Actions in your repository
-To start using this GitHub Action, go to your repository and select the **Actions** tab. Select **New workflow**, then **Set up a workflow yourself**. Finally, search the marketplace for "Azure App Configuration Sync."
-> [!div class="mx-imgBorder"]
-> ![Select the Action tab](media/find-github-action.png)
-
-> [!div class="mx-imgBorder"]
-> ![Select the app configuration sync Action](media/app-configuration-sync-action.png)
-
-## Sync configuration files after a push
-This action syncs Azure App Configuration files when a change is pushed to `appsettings.json`. When a developer pushes a change to `appsettings.json`, the App Configuration Sync action updates the App Configuration instance with the new values.
-
-The first section of this workflow specifies that the action triggers *on* a *push* containing `appsettings.json` to the *main* branch. The second section lists the jobs run once the action is triggered. The action checks out the relevant files and updates the App Configuration instance using the connection string stored as a secret in the repository. For more information about using secrets in GitHub, see [GitHub's article](https://docs.github.com/en/actions/reference/encrypted-secrets) about creating and using encrypted secrets.
-
-```json
-on:
- push:
- branches:
- - 'main'
- paths:
- - 'appsettings.json'
-
-jobs:
- syncconfig:
- runs-on: ubuntu-latest
- steps:
- # checkout done so that files in the repo can be read by the sync
- - uses: actions/checkout@v1
- - uses: azure/appconfiguration-sync@v1
- with:
- configurationFile: 'appsettings.json'
- format: 'json'
- # Replace <ConnectionString> with the name of the secret in your
- # repository
- connectionString: ${{ secrets.<ConnectionString> }}
- separator: ':'
-```
-
-## Use strict sync
-By default the GitHub Action does not enable strict mode, meaning that the sync will only add key-values from the configuration file to the App Configuration instance (no key-value pairs will be deleted). Enabling strict mode will mean key-value pairs that aren't in the configuration file are deleted from the App Configuration instance, so that it matches the configuration file. If you are syncing from multiple sources or using Azure Key Vault with App Configuration, you'll want to use different prefixes or labels with strict sync to avoid wiping out configuration settings from other files (see samples below).
-
-```json
-on:
- push:
- branches:
- - 'main'
- paths:
- - 'appsettings.json'
-
-jobs:
- syncconfig:
- runs-on: ubuntu-latest
- steps:
- # checkout done so that files in the repo can be read by the sync
- - uses: actions/checkout@v1
- - uses: azure/appconfiguration-sync@v1
- with:
- configurationFile: 'appsettings.json'
- format: 'json'
- # Replace <ConnectionString> with the name of the secret in your
- # repository
- connectionString: ${{ secrets.<ConnectionString> }}
- separator: ':'
- label: 'Label'
- prefix: 'Prefix:'
- strict: true
-```
-## Sync multiple files in one action
-
-If your configuration is in multiple files, you can use the pattern below to trigger a sync when either file is modified. This pattern uses the glob library https://www.npmjs.com/package/glob . Note that if your config file name contains a comma, you can use a backslash to escape the comma.
-
-```yaml
-on:
- push:
- branches:
- - 'main'
- paths:
- - 'appsettings.json'
- - 'appsettings2.json'
-
-jobs:
- syncconfig:
- runs-on: ubuntu-latest
- steps:
- # checkout done so that files in the repo can be read by the sync
- - uses: actions/checkout@v1
- - uses: azure/appconfiguration-sync@v1
- with:
- configurationFile: '{appsettings.json,appsettings2.json}'
- format: 'json'
- # Replace <ConnectionString> with the name of the secret in your repository
- connectionString: ${{ secrets.<ConnectionString> }}
- separator: ':'
-```
-
-## Sync by prefix or label
-Specifying prefixes or labels in your sync action will sync only that particular set. This is important for using strict sync with multiple files. Depending on how the configuration is set up, either a prefix or a label can be associated with each file and then each prefix or label can be synced separately so that nothing is overwritten. Typically prefixes are used for different applications or services and labels are used for different environments.
-
-Sync by prefix:
-
-```yaml
-on:
- push:
- branches:
- - 'main'
- paths:
- - 'appsettings.json'
-
-jobs:
- syncconfig:
- runs-on: ubuntu-latest
- steps:
- # checkout done so that files in the repo can be read by the sync
- - uses: actions/checkout@v1
- - uses: azure/appconfiguration-sync@v1
- with:
- configurationFile: 'appsettings.json'
- format: 'json'
- # Replace <ConnectionString> with the name of the secret in your repository
- connectionString: ${{ secrets.<ConnectionString> }}
- separator: ':'
- prefix: 'Prefix::'
-```
-
-Sync by label:
-
-```yaml
-on:
- push:
- branches:
- - 'main'
- paths:
- - 'appsettings.json'
-
-jobs:
- syncconfig:
- runs-on: ubuntu-latest
- steps:
- # checkout done so that files in the repo can be read by the sync
- - uses: actions/checkout@v1
- - uses: azure/appconfiguration-sync@v1
- with:
- configurationFile: 'appsettings.json'
- format: 'json'
- # Replace <ConnectionString> with the name of the secret in your repository
- connectionString: ${{ secrets.<ConnectionString> }}
- separator: ':'
- label: 'Label'
-
-```
-
-## Use a dynamic label on sync
-The following action inserts a dynamic label on each sync, ensuring that each sync can be uniquely identified and allowing code changes to be mapped to config changes.
-
-The first section of this workflow specifies that the action triggers *on* a *push* containing `appsettings.json` to the *main* branch. The second section runs a job that creates a unique label for the config update based on the commit hash. The job then updates the App Configuration instance with the new values and the unique label for this update.
-
-```yaml
-on:
- push:
- branches:
- - 'main'
- paths:
- - 'appsettings.json'
-
-jobs:
- syncconfig:
- runs-on: ubuntu-latest
- steps:
- # Creates a label based on the branch name and the first 8 characters
- # of the commit hash
- - id: determine_label
- run: echo ::set-output name=LABEL::"${GITHUB_REF#refs/*/}/${GITHUB_SHA:0:8}"
- # checkout done so that files in the repo can be read by the sync
- - uses: actions/checkout@v1
- - uses: azure/appconfiguration-sync@v1
- with:
- configurationFile: 'appsettings.json'
- format: 'json'
- # Replace <ConnectionString> with the name of the secret in your
- # repository
- connectionString: ${{ secrets.<ConnectionString> }}
- separator: ':'
- label: ${{ steps.determine_label.outputs.LABEL }}
-```
-
-## Use Azure Key Vault with GitHub Action
-Developers using Azure Key Vault with App Configuration should use two separate files, typically an appsettings.json and a secretreferences.json. The secretreferences.json file contains the URL of the Key Vault secret:
-
-```json
-{
-    "mySecret": "{\"uri\":\"https://myKeyVault.vault.azure.net/secrets/mySecret\"}"
-}
-```
-
-The GitHub Action can then be configured to do a strict sync on the appsettings.json, followed by a non-strict sync on secretreferences.json. The following sample will trigger a sync when either file is updated:
-
-```yaml
-on:
- push:
- branches:
- - 'main'
- paths:
- - 'appsettings.json'
- - 'secretreferences.json'
-
-jobs:
- syncconfig:
- runs-on: ubuntu-latest
- steps:
- # checkout done so that files in the repo can be read by the sync
- - uses: actions/checkout@v1
- - uses: azure/appconfiguration-sync@v1
- with:
- configurationFile: 'appsettings.json'
- format: 'json'
- # Replace <ConnectionString> with the name of the secret in your repository
- connectionString: ${{ secrets.<ConnectionString> }}
- separator: ':'
- strict: true
- - uses: azure/appconfiguration-sync@v1
- with:
- configurationFile: 'secretreferences.json'
- format: 'json'
- # Replace <ConnectionString> with the name of the secret in your repository
- connectionString: ${{ secrets.<ConnectionString> }}
- separator: ':'
- contentType: 'application/vnd.microsoft.appconfig.keyvaultref+json;charset=utf-8'
-
-```
-
-## Use max depth to limit GitHub Action
-The default behavior for nested JSON attributes is to flatten the entire object. The JSON below defines this key-value pair:
-
-| Key | Value |
-|--|--|
-| Object:Inner:InnerKey | InnerValue |
-
-```json
-{ "Object":
- { "Inner":
- {
- "InnerKey": "InnerValue"
- }
- }
-}
-```
-
-If the nested object is intended to be the value pushed to the App Configuration instance, you can use the *depth* value to stop the flattening at the appropriate depth.
-
-```yaml
-on:
- push:
- branches:
- - 'main'
- paths:
- - 'appsettings.json'
-
-jobs:
- syncconfig:
- runs-on: ubuntu-latest
- steps:
- # checkout done so that files in the repo can be read by the sync
- - uses: actions/checkout@v1
- - uses: azure/appconfiguration-sync@v1
- with:
- configurationFile: 'appsettings.json'
- format: 'json'
- # Replace <ConnectionString> with the name of the secret in your
- # repository
- connectionString: ${{ secrets.<ConnectionString> }}
- separator: ':'
- depth: 2
-```
-
-Given a depth of 2, the example above now returns the following key-value pair:
-
-| Key | Value |
-|--|--|
-| Object:Inner | {"InnerKey":"InnerValue"} |
-
-## Understand action inputs
-Input parameters specify data used by the action during runtime. The following table contains input parameters accepted by App Configuration Sync and the expected values for each. For more information about action inputs for GitHub Actions, see GitHub's [documentation](https://docs.github.com/en/actions/creating-actions/metadata-syntax-for-github-actions#inputs).
-
-> [!Note]
-> Input IDs are case insensitive.
--
-| Input name | Required? | Value |
-|-|-|-|
-| configurationFile | Yes | Relative path to the configuration file in the repository. Glob patterns are supported and can include multiple files. |
-| format | Yes | File format of the configuration file. Valid formats are: JSON, YAML, properties. |
-| connectionString | Yes | Read-write connection string for the App Configuration instance. The connection string should be stored as a secret in the GitHub repository, and only the secret name should be used in the workflow. |
-| separator | Yes | Separator used when flattening the configuration file to key-value pairs. Valid values are: . , ; : - _ __ / |
-| prefix | No | Prefix to be added to the start of keys. |
-| label | No | Label used when setting key-value pairs. If unspecified, a null label is used. |
-| strict | No | A boolean value that determines whether strict mode is enabled. The default value is false. |
-| depth | No | Max depth for flattening the configuration file. Depth must be a positive number. The default will have no max depth. |
-| tags | No | Specifies the tag set on key-value pairs. The expected format is a stringified form of a JSON object of the following shape: { [propertyName: string]: string; } Each property name-value becomes a tag. |
-
-## Next steps
-
-In this article, you learned about the App Configuration Sync GitHub Action and how it can be used to automate updates to your App Configuration instance. To learn how Azure App Configuration reacts to changes in key-value pairs, continue to the next [article](./concept-app-configuration-event.md).
azure-app-configuration Howto Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-best-practices.md
Excessive requests to App Configuration can result in throttling or overage char
## Importing configuration data into App Configuration
-App Configuration offers the option to bulk [import](./howto-import-export-data.md) your configuration settings from your current configuration files using either the Azure portal or CLI. You can also use the same options to export key-values from App Configuration, for example between related stores. If you'd like to set up an ongoing sync with your repo in GitHub or Azure DevOps, you can use our [GitHub Action](./concept-github-action.md) or [Azure Pipeline Push Task](./push-kv-devops-pipeline.md) so that you can continue using your existing source control practices while getting the benefits of App Configuration.
+App Configuration offers the option to bulk [import](./howto-import-export-data.md) your configuration settings from your current configuration files using either the Azure portal or CLI. You can also use the same options to export key-values from App Configuration, for example between related stores. If you have adopted Configuration as Code and manage your configurations in GitHub or Azure DevOps, you can set up ongoing configuration file import using [GitHub Actions](./push-kv-github-action.md) or [Azure Pipeline Push Task](./push-kv-devops-pipeline.md).
## Multi-region deployment in App Configuration
A multitenant application is built on an architecture where a shared instance of
Configuration as code is a practice of managing configuration files under your source control system, for example, a git repository. It gives you benefits like traceability and approval process for any configuration changes. If you adopt configuration as code, App Configuration has tools to assist you in [managing your configuration data in files](./concept-config-file.md) and deploying them as part of your build, release, or CI/CD process. This way, your applications can access the latest data from your App Configuration store(s). -- For GitHub, you can enable the [App Configuration Sync GitHub Action](concept-github-action.md) for your repository. Changes to configuration files are synchronized to App Configuration automatically whenever a pull request is merged.
+- For GitHub, you can import configuration files from your GitHub repository into your App Configuration store using [GitHub Actions](./push-kv-github-action.md).
- For Azure DevOps, you can include the [Azure App Configuration Push](push-kv-devops-pipeline.md), an Azure pipeline task, in your build or release pipelines for data synchronization. - You can also import configuration files to App Configuration using Azure CLI as part of your CI/CD system. For more information, see [az appconfig kv import](scripts/cli-import.md).
azure-app-configuration Howto Disable Access Key Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-disable-access-key-authentication.md
Title: Disable access key authentication for an Azure App Configuration instance
+ Title: Manage access key authentication for an Azure App Configuration instance
-description: Learn how to disable access key authentication for an Azure App Configuration instance.
+description: Learn how to manage access key authentication for an Azure App Configuration instance.
Last updated 04/05/2024
-# Disable access key authentication for an Azure App Configuration instance
+# Manage access key authentication for an Azure App Configuration instance
-Every request to an Azure App Configuration resource must be authenticated. By default, requests can be authenticated with either Microsoft Entra credentials, or by using an access key. Of these two types of authentication schemes, Microsoft Entra ID provides superior security and ease of use over access keys, and is recommended by Microsoft. To require clients to use Microsoft Entra ID to authenticate requests, you can disable the usage of access keys for an Azure App Configuration resource.
+Every request to an Azure App Configuration resource must be authenticated. By default, requests can be authenticated with either Microsoft Entra credentials, or by using an access key. Of these two types of authentication schemes, Microsoft Entra ID provides superior security and ease of use over access keys, and is recommended by Microsoft. To require clients to use Microsoft Entra ID to authenticate requests, you can disable the usage of access keys for an Azure App Configuration resource. If you want to use access keys to authenticate requests, it's recommended that you rotate them every 90 days to enhance security.
-When you disable access key authentication for an Azure App Configuration resource, any existing access keys for that resource are deleted. Any subsequent requests to the resource using the previously existing access keys will be rejected. Only requests that are authenticated using Microsoft Entra ID will succeed. For more information about using Microsoft Entra ID, see [Authorize access to Azure App Configuration using Microsoft Entra ID](./concept-enable-rbac.md).
+## Enable access key authentication
-## Disable access key authentication
-
-Disabling access key authentication will delete all access keys. If any running applications are using access keys for authentication, they will begin to fail once access key authentication is disabled. Enabling access key authentication again will generate a new set of access keys and any applications attempting to use the old access keys will still fail.
+Access key authentication is enabled by default. You can use access keys in your code to authenticate requests.
> [!WARNING] > If any clients are currently accessing data in your Azure App Configuration resource with access keys, then Microsoft recommends that you migrate those clients to [Microsoft Entra ID](./concept-enable-rbac.md) before disabling access key authentication. # [Azure portal](#tab/portal)
+To allow/disallow access key authentication for an Azure App Configuration resource in the Azure portal, follow these steps:
+
+1. Navigate to your Azure App Configuration resource in the Azure portal.
+1. Locate the **Access settings** setting under **Settings**.
+
+ :::image type="content" border="true" source="./media/access-settings-blade.png" alt-text="Screenshot showing how to access an Azure App Configuration resources access key blade.":::
+
+1. Set the **Enable access keys** toggle to **Enabled**.
+
+ :::image type="content" border="true" source="./media/enable-access-keys.png" alt-text="Screenshot showing how to enable access key authentication for Azure App Configuration.":::
+
+# [Azure CLI](#tab/azure-cli)
+
+To enable access keys for an Azure App Configuration resource, use the following command. Set the `--disable-local-auth` option to `false` to enable local authentication.
+
+```azurecli-interactive
+az appconfig update \
+ --name <app-configuration-name> \
+ --resource-group <resource-group> \
+ --disable-local-auth false
+```
+++
+### Verify that access key authentication is enabled
+
+To verify whether access key authentication is enabled, check whether you can retrieve a list of read-only and read-write access keys. This list exists only if access key authentication is enabled.
+
+# [Azure portal](#tab/portal)
+
+To check if access key authentication is enabled for an Azure App Configuration resource in the Azure portal, follow these steps:
+
+1. Navigate to your Azure App Configuration resource in the Azure portal.
+1. Locate the **Access settings** setting under **Settings**.
+
+ :::image type="content" border="true" source="./media/access-settings-blade.png" alt-text="Screenshot showing how to access an Azure App Configuration resources access key blade.":::
+
+1. Check that access keys are displayed and that the **Enable access keys** toggle is set to **Enabled**.
+
+ :::image type="content" border="true" source="./media/get-access-keys-list.png" alt-text="Screenshot showing access keys for an Azure App Configuration resource.":::
+
+# [Azure CLI](#tab/azure-cli)
+
+To check whether access key authentication is enabled for an Azure App Configuration resource, use the following command. The command lists the access keys for the resource.
+If access key authentication is enabled, read-only and read-write access keys are returned.
+
+```azurecli-interactive
+az appconfig credential list \
+ --name <app-configuration-name> \
+ --resource-group <resource-group>
+```
+++
+## Disable access key authentication
+
+Disabling access key authentication will delete all access keys. If any running applications are using access keys for authentication, they will begin to fail once access key authentication is disabled. Only requests that are authenticated using Microsoft Entra ID will succeed. For more information about using Microsoft Entra ID, see [Authorize access to Azure App Configuration using Microsoft Entra ID](./concept-enable-rbac.md). Enabling access key authentication again will generate a new set of access keys and any applications attempting to use the old access keys will still fail.
+
+# [Azure portal](#tab/portal)
+ To disallow access key authentication for an Azure App Configuration resource in the Azure portal, follow these steps: 1. Navigate to your Azure App Configuration resource in the Azure portal.
-2. Locate the **Access settings** setting under **Settings**.
+1. Locate the **Access settings** setting under **Settings**.
:::image type="content" border="true" source="./media/access-settings-blade.png" alt-text="Screenshot showing how to access an Azure App Configuration resources access key blade.":::
-3. Set the **Enable access keys** toggle to **Disabled**.
+1. Set the **Enable access keys** toggle to **Disabled**.
:::image type="content" border="true" source="./media/disable-access-keys.png" alt-text="Screenshot showing how to disable access key authentication for Azure App Configuration"::: # [Azure CLI](#tab/azure-cli)
-The capability to disable access key authentication using the Azure CLI is in development.
+To disable access keys for an Azure App Configuration resource, use the following command. Set the `--disable-local-auth` option to `true` to disable local authentication.
+
+```azurecli-interactive
+az appconfig update \
+ --name <app-configuration-name> \
+ --resource-group <resource-group> \
+ --disable-local-auth true
+```
To verify that access key authentication is no longer permitted, a request can b
To verify access key authentication is disabled for an Azure App Configuration resource in the Azure portal, follow these steps: 1. Navigate to your Azure App Configuration resource in the Azure portal.
-2. Locate the **Access settings** setting under **Settings**.
+1. Locate the **Access settings** setting under **Settings**.
:::image type="content" border="true" source="./media/access-settings-blade.png" alt-text="Screenshot showing how to access an Azure App Configuration resources access key blade.":::
-3. Verify there are no access keys displayed and **Enable access keys** is toggled to **Disabled**.
+1. Check that no access keys are displayed and that the **Enable access keys** toggle is set to **Disabled**.
:::image type="content" border="true" source="./media/disable-access-keys.png" alt-text="Screenshot showing access keys being disabled for an Azure App Configuration resource"::: # [Azure CLI](#tab/azure-cli)
-To verify access key authentication is disabled for an Azure App Configuration resource in the Azure portal, use the following command. The command will list the access keys for an Azure App Configuration resource and if access key authentication is disabled the list will be empty.
+To verify that access key authentication is disabled for an Azure App Configuration resource, use the following command. The command lists the access keys for the resource; if access key authentication is disabled, the list is empty.
```azurecli-interactive az appconfig credential list \
az appconfig credential list \
--resource-group <resource-group> ```
-If access key authentication is disabled, then an empty list will be returned.
-
-```
-C:\Users\User>az appconfig credential list -g <resource-group> -n <app-configuration-name>
-[]
-```
- ## Permissions for allowing or disallowing access key authentication
Be careful to restrict assignment of these roles only to those users who require
> [!NOTE] > When access key authentication is disabled and [ARM authentication mode](./quickstart-deployment-overview.md#azure-resource-manager-authentication-mode) of App Configuration store is local, the capability to read/write key-values in an [ARM template](./quickstart-resource-manager.md) will be disabled as well. This is because access to the Microsoft.AppConfiguration/configurationStores/keyValues resource used in ARM templates requires access key authentication with local ARM authentication mode. It's recommended to use pass-through ARM authentication mode. For more information, see [Deployment overview](./quickstart-deployment-overview.md).
+## Rotate access key
+Microsoft recommends that you rotate your access keys periodically to help keep your resource secure. If possible, use Azure Key Vault to manage your access keys. If you are not using Key Vault, you will need to rotate your keys manually.
+
+Each Azure App Configuration resource has two access keys to enable secret rotation. This is a security precaution that lets you regularly change the keys that can access your service, protecting the privacy of your resource if a key gets leaked. The recommended rotation cycle is 90 days.
+
+You can rotate keys using the following procedure:
+
+1. If you're using both keys in production, change your code so that only one access key is in use. In this example, let's say you decide to keep using your store's primary key.
+You must have only one key in your code, because when you regenerate your secondary key, the older version of that key will stop working immediately, causing clients using the older key to get 401 access denied errors.
+
+1. Once the primary key is the only key in use, you can regenerate the secondary key. Go to your resource's page on the Azure portal, open the **Settings** > **Access settings** menu, and select **Regenerate** under **Secondary key**.
+
+1. Next, update your code to use the newly generated secondary key.
+It helps to have logs available to confirm that users of the key have successfully switched from the primary key to the secondary key before you proceed.
+
+1. Now you can regenerate the primary key using the same process.
+
+1. Finally, update your code to use the new primary key.
+ ## Next steps - [Use customer-managed keys to encrypt your App Configuration data](concept-customer-managed-keys.md)
azure-app-configuration Howto Import Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-import-export-data.md
Azure App Configuration supports data import and export operations. Use these operations to work with configuration data in bulk and exchange data between your App Configuration store and code project. For example, you can set up one App Configuration store for testing and another one for production. You can copy application settings between them so that you don't have to enter data twice.
-This article provides a guide for importing and exporting data with App Configuration. If you'd like to set up an ongoing sync with your GitHub repo, take a look at [GitHub Actions](./concept-github-action.md) and [Azure Pipelines tasks](./pull-key-value-devops-pipeline.md).
-
-You can import or export data using either the [Azure portal](https://portal.azure.com) or the [Azure CLI](./scripts/cli-import.md).
+This article provides a guide for importing and exporting data using either the [Azure portal](https://portal.azure.com) or the [Azure CLI](./scripts/cli-import.md). If you have adopted [Configuration as Code](./howto-best-practices.md#configuration-as-code) and manage your configurations in GitHub or Azure DevOps, you can set up ongoing configuration file import using [GitHub Actions](./push-kv-github-action.md) or use the [Azure Pipeline Push Task](./push-kv-devops-pipeline.md).
## Import data
azure-app-configuration Push Kv Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/push-kv-github-action.md
+
+ Title: Import configuration files from your GitHub repository to App Configuration store
+description: Use GitHub Actions to automatically update your App Configuration store when you update your configuration file in your GitHub repository
++ Last updated : 06/05/2024++++
+# Import configuration files from your GitHub repository to App Configuration store
+
+If you have adopted [Configuration as Code](./howto-best-practices.md#configuration-as-code) and manage your configurations in GitHub, you can use GitHub Actions to automatically import configuration files from your GitHub repository into your App Configuration store. This allows you to make changes to your configuration files as you normally would, while getting App Configuration store benefits like:
+* Centralized configuration outside of your code.
+* Updating configuration without redeploying your entire app.
+* Integration with services like Azure App Service and Functions.
+
+A [GitHub Actions workflow](https://docs.github.com/en/actions/learn-github-actions/introduction-to-github-actions#the-components-of-github-actions) defines an automated process in a GitHub repository. To import a configuration file from your GitHub repository into your Azure App Configuration store, use the [Azure CLI](https://github.com/Azure/cli) GitHub action, which provides full capabilities for importing files into your App Configuration store.
+
+## Authentication
+To import configurations to your Azure App Configuration store, you can authenticate using one of the following methods:
+
+### Use Microsoft Entra ID
+The recommended way to authenticate is by using Microsoft Entra ID, which allows you to securely connect to your Azure resources. You can automate the authentication process using the [Azure Login](/azure/developer/github/connect-from-azure) GitHub action.
+
+Azure Login allows you to authenticate using service principals with secrets or OpenID Connect with a Federated Identity Credential. In this example, you'll use OpenID Connect to log in to your App Configuration store.
+
+#### Use Azure login with OpenID Connect
+To use Azure Login with OpenID Connect, you will need to:
+1. Set up a [Microsoft Entra application with a service principal](/entra/identity-platform/howto-create-service-principal-portal).
+2. Assign your Microsoft Entra application the **App Configuration Data Owner** role to allow your GitHub action to read and write to your App Configuration store.
+3. Provide your Microsoft Entra application's Client ID, Tenant ID, and Subscription ID to the login action. These values can be provided directly in the workflow or stored as GitHub secrets for better security. In the example below, these values are set as secrets. For more information about using secrets in GitHub, see [Using secrets in GitHub Actions](https://docs.github.com/en/actions/reference/encrypted-secrets).
+
+To start using this GitHub Action, go to your repository and select the **Actions** tab. Select **New workflow**, then **Set up a workflow yourself**. Finally, search the marketplace for "Azure Login". Once you find it, click on the action and copy the provided snippet into your workflow file.
+> [!div class="mx-imgBorder"]
+> ![Select the Action tab](media/find-github-action.png)
+> [!div class="mx-imgBorder"]
+> ![Select the Azure Login Action](media/azure-login-github-action.png)
+
+#### Example using Microsoft Entra ID
+
+```yaml
+# Set permissions for the workflow. Specify 'id-token: write' to allow OIDC token generation at the workflow level.
+permissions:
+ id-token: write
+ contents: read
+
+jobs:
+ syncconfig:
+ runs-on: ubuntu-latest
+ steps:
+ - name: Azure login
+ uses: azure/login@v2
+ with:
+ client-id: ${{ secrets.AZURE_CLIENT_ID }}
+ tenant-id: ${{ secrets.AZURE_TENANT_ID }}
+ subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
+```
+
+### Use a connection string
+Alternatively, you can authenticate by passing the connection string directly to the Azure CLI command. This method involves retrieving the connection string from the Azure portal and using it in your commands or scripts.
+
+To get started, you can find the connection string under **Access settings** of your App Configuration store in the Azure portal.
+
+Next, set this connection string as a secret variable in your GitHub repository. For more information about using secrets in GitHub, see [Using secrets in GitHub Actions](https://docs.github.com/en/actions/reference/encrypted-secrets).
+
+#### Example using a connection string
+
+```yaml
+on:
+ push:
+ branches:
+ - 'main'
+ paths:
+ - 'appsettings.json'
+
+jobs:
+ syncconfig:
+ runs-on: ubuntu-latest
+
+ # pass the secret variable as an environment variable to access it in your CLI action.
+ env:
+ CONNECTION_STRING: ${{ secrets.<ConnectionString> }}
+```
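+The example above stops at exporting the connection string as an environment variable and doesn't show the import step that uses it. The following is a minimal sketch, not taken from the original article, of how a later step might consume that variable: it reuses the `CONNECTION_STRING` environment variable defined above and passes it through the `--connection-string` parameter of `az appconfig kv import` instead of using `--endpoint` with Microsoft Entra authentication.
+
+```yaml
+    steps:
+      # Check out the repository so the configuration file can be read.
+      - uses: actions/checkout@v1
+      # Minimal sketch: import using the connection string rather than Microsoft Entra ID.
+      - uses: azure/cli@v2
+        with:
+          azcliversion: latest
+          inlineScript: |
+            az appconfig kv import --connection-string "${{ env.CONNECTION_STRING }}" -s file --path appsettings.json --format json --yes
+```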
+## Configuration file import
+
+You use the [Azure CLI](https://github.com/Azure/cli) GitHub Action to import a configuration file to your App Configuration store. To start using this GitHub Action, go to your repository and select the **Actions** tab. Select **New workflow**, then **Set up a workflow yourself**. Finally, search the marketplace for "Azure CLI Action." Once you find it, click on the action and copy the provided snippet into your workflow file.
+> [!div class="mx-imgBorder"]
+> ![Select the Azure CLI Action](media/azure-cli-github-action.png)
+
+In the following example, you use the Azure CLI action to import configuration files into an Azure App Configuration store when a change is pushed to `appsettings.json`. When a developer pushes a change to `appsettings.json`, the script passed to the Azure CLI action updates the App Configuration store with the new values.
+
+The *on* section of this workflow specifies that the action triggers *on* a *push* containing `appsettings.json` to the *main* branch. The *jobs* section lists the jobs run once the action is triggered. The action checks out the relevant files and updates the App Configuration store.
+
+```yaml
+on:
+ push:
+ branches:
+ - 'main'
+ paths:
+ - 'appsettings.json'
+
+# Set permissions for the workflow. Specify 'id-token: write' to allow OIDC token generation at the workflow level.
+permissions:
+ id-token: write
+ contents: read
+
+jobs:
+ syncconfig:
+ runs-on: ubuntu-latest
+ steps:
+ - name: Azure login
+ uses: azure/login@v2
+ with:
+ client-id: ${{ secrets.AZURE_CLIENT_ID }}
+ tenant-id: ${{ secrets.AZURE_TENANT_ID }}
+ subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
+
+ # checkout done so that files in the repo can be read by the sync
+ - uses: actions/checkout@v1
+ - uses: azure/cli@v2
+ with:
+ azcliversion: latest
+ inlineScript: |
+ az appconfig kv import --endpoint <your-app-configuration-store-endpoint> --auth-mode login -s file --path appsettings.json --format json --yes
+```
+
+For more information about Azure App Configuration CLI import commands, see the [Azure App Configuration CLI documentation](/cli/azure/appconfig/kv#az-appconfig-kv-import).
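+The App Configuration Sync action shown earlier in this document supported `prefix`, `label`, `separator`, and `strict` inputs. The `az appconfig kv import` command exposes comparable parameters, so a minimal, hedged sketch of an equivalent import step might look like the following. The endpoint, prefix, and label values are placeholders, and the `--prefix`, `--label`, and `--strict` parameters are assumptions about the CLI version installed on the runner; check `az appconfig kv import --help` before relying on them.
+
+```yaml
+      - uses: azure/cli@v2
+        with:
+          azcliversion: latest
+          inlineScript: |
+            # Sketch: import appsettings.json with a key prefix and label, and
+            # (assumption) remove key-values under that prefix and label that are
+            # no longer present in the file (--strict).
+            az appconfig kv import --endpoint <your-app-configuration-store-endpoint> --auth-mode login -s file --path appsettings.json --format json --separator ':' --prefix 'Prefix:' --label 'Label' --strict --yes
+```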
+
+### Use a dynamic label on import
+
+Using a dynamic label on each import is a good way to maintain clear and precise version control of your configurations. It allows each import to your App Configuration store to be uniquely identified, making it easier to map code changes to configuration updates.
+
+#### Example using a dynamic label on import
+
+In the following example, all key-values imported will have a unique label based on the commit hash.
+
+```yaml
+ jobs:
+ syncconfig:
+ runs-on: ubuntu-latest
+ steps:
+ # Creates a label based on the branch name and the first 8 characters
+ # of the commit hash
+ - id: determine_label
+ run: echo ::set-output name=LABEL::"${GITHUB_REF#refs/*/}/${GITHUB_SHA:0:8}"
+ # checkout done so that files in the repo can be read by the sync
+ - uses: actions/checkout@v1
+ - uses: azure/cli@v2
+ with:
+ azcliversion: latest
+ inlineScript: |
+ az appconfig kv import --endpoint <your-app-configuration-store-endpoint> --auth-mode login -s file --path appsettings.json --format json --label ${{ steps.determine_label.outputs.LABEL }} --yes
+```
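+### Import Key Vault references
+
+The App Configuration Sync workflow shown earlier in this document paired a strict sync of `appsettings.json` with a non-strict import of a `secretreferences.json` file containing Key Vault references. A comparable, hedged sketch using the Azure CLI action is shown below; the file name and content type are carried over from that example, and the `--content-type` parameter is an assumption about the CLI version installed on the runner.
+
+```yaml
+      - uses: azure/cli@v2
+        with:
+          azcliversion: latest
+          inlineScript: |
+            # Sketch: import Key Vault references using the App Configuration Key Vault reference content type.
+            az appconfig kv import --endpoint <your-app-configuration-store-endpoint> --auth-mode login -s file --path secretreferences.json --format json --separator ':' --content-type 'application/vnd.microsoft.appconfig.keyvaultref+json;charset=utf-8' --yes
+```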
+## Next steps
+
+To learn how to use CLI import commands, check out our comprehensive guide [Azure CLI import commands](/cli/azure/appconfig/kv#az-appconfig-kv-import).
+
+To learn more about different file content profiles, see [Azure App Configuration support for configuration files](./concept-config-file.md).
azure-arc Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-overview.md
Title: Overview of the Azure Connected Machine agent description: This article provides a detailed overview of the Azure Connected Machine agent, which supports monitoring virtual machines hosted in hybrid environments. Last updated 06/03/2024-+ # Overview of Azure Connected Machine agent
azure-arc Deploy Ama Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/deploy-ama-policy.md
Title: How to deploy and configure Azure Monitor Agent using Azure Policy description: Learn how to deploy and configure Azure Monitor Agent using Azure Policy. Last updated 05/17/2023-+ # Deploy and configure Azure Monitor Agent using Azure Policy
azure-arc Manage Howto Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-howto-migrate.md
Title: How to migrate Azure Arc-enabled servers across regions description: Learn how to migrate an Azure Arc-enabled server from one region to another. Last updated 3/29/2022-+ # How to migrate Azure Arc-enabled servers across regions
azure-arc Manage Vm Extensions Ansible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-vm-extensions-ansible.md
Title: Enable VM extension using Red Hat Ansible description: This article describes how to deploy virtual machine extensions to Azure Arc-enabled servers running in hybrid cloud environments using Red Hat Ansible Automation. Last updated 05/15/2023-+
azure-arc Manage Vm Extensions Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-vm-extensions-cli.md
Title: Enable VM extension using Azure CLI description: This article describes how to deploy virtual machine extensions to Azure Arc-enabled servers running in hybrid cloud environments using the Azure CLI. Last updated 03/30/2022-+
azure-arc Manage Vm Extensions Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-vm-extensions-portal.md
Title: Enable VM extension from the Azure portal description: This article describes how to deploy virtual machine extensions to Azure Arc-enabled servers running in hybrid cloud environments from the Azure portal. Last updated 10/15/2021-+ # Enable Azure VM extensions from the Azure portal
azure-arc Manage Vm Extensions Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-vm-extensions-powershell.md
Title: Enable VM extension using Azure PowerShell description: This article describes how to deploy virtual machine extensions to Azure Arc-enabled servers running in hybrid cloud environments using Azure PowerShell. Last updated 03/30/2022-+
azure-arc Manage Vm Extensions Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-vm-extensions-template.md
Title: Enable VM extension using Azure Resource Manager template description: This article describes how to deploy virtual machine extensions to Azure Arc-enabled servers running in hybrid cloud environments using an Azure Resource Manager template. Last updated 06/02/2022-+
azure-arc Migrate Azure Monitor Agent Ansible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/migrate-azure-monitor-agent-ansible.md
Title: How to migrate to Azure Monitor Agent using Red Hat Ansible Automation Platform description: Learn how to migrate to Azure Monitor Agent using Red Hat Ansible Automation Platform. Last updated 10/17/2022-+
azure-arc Onboard Ansible Playbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-ansible-playbooks.md
Title: Connect machines at scale using Ansible Playbooks description: In this article, you learn how to connect machines to Azure using Azure Arc-enabled servers using Ansible playbooks. Last updated 05/09/2022-+
azure-arc Onboard Group Policy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-group-policy-powershell.md
Title: Connect machines at scale using Group Policy with a PowerShell script description: In this article, you learn how to create a Group Policy Object to onboard Active Directory-joined Windows machines to Azure Arc-enabled servers. Last updated 05/04/2023-+
azure-arc Onboard Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-portal.md
Title: Connect hybrid machines to Azure using a deployment script description: In this article, you learn how to install the agent and connect machines to Azure by using Azure Arc-enabled servers using the deployment script you create in the Azure portal. Last updated 10/23/2023-+
azure-arc Onboard Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-powershell.md
Title: Connect hybrid machines to Azure by using PowerShell description: In this article, you learn how to install the agent and connect a machine to Azure by using Azure Arc-enabled servers. You can do this with PowerShell. Last updated 07/16/2021-+
azure-arc Organize Inventory Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/organize-inventory-servers.md
Title: How to organize and inventory servers using hierarchies, tagging, and reporting description: Learn how to organize and inventory servers using hierarchies, tagging, and reporting. Last updated 03/03/2023-+ # Organize and inventory servers with hierarchies, tagging, and reporting
azure-arc Plan Evaluate On Azure Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/plan-evaluate-on-azure-virtual-machine.md
Title: How to evaluate Azure Arc-enabled servers with an Azure virtual machine description: Learn how to evaluate Azure Arc-enabled servers using an Azure virtual machine. Last updated 10/01/2021-+ # Evaluate Azure Arc-enabled servers on an Azure virtual machine
azure-arc Prepare Extended Security Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prepare-extended-security-updates.md
Title: How to prepare to deliver Extended Security Updates for Windows Server 2012 through Azure Arc description: Learn how to prepare to deliver Extended Security Updates for Windows Server 2012 through Azure Arc. Last updated 07/03/2024-+ # Prepare to deliver Extended Security Updates for Windows Server 2012
azure-arc Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/security-overview.md
Title: Security overview description: Basic security information about Azure Arc-enabled servers.-+ Last updated 06/06/2024
azure-arc Ssh Arc Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/ssh-arc-troubleshoot.md
Title: Troubleshoot SSH access to Azure Arc-enabled servers description: Learn how to troubleshoot and resolve issues with SSH access to Arc-enabled servers. Last updated 07/01/2023-+ # Troubleshoot SSH access to Azure Arc-enabled servers
azure-arc Troubleshoot Agent Onboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/troubleshoot-agent-onboard.md
Title: Troubleshoot Azure Connected Machine agent connection issues description: This article tells how to troubleshoot and resolve issues with the Connected Machine agent that arise with Azure Arc-enabled servers when trying to connect to the service. Last updated 10/13/2022-+ # Troubleshoot Azure Connected Machine agent connection issues
azure-arc Troubleshoot Extended Security Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/troubleshoot-extended-security-updates.md
Title: How to troubleshoot delivery of Extended Security Updates for Windows Server 2012 through Azure Arc description: Learn how to troubleshoot delivery of Extended Security Updates for Windows Server 2012 through Azure Arc. Last updated 07/03/2024-+ # Troubleshoot delivery of Extended Security Updates for Windows Server 2012
azure-arc Troubleshoot Vm Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/troubleshoot-vm-extensions.md
Title: Troubleshoot Azure Arc-enabled servers VM extension issues description: This article tells how to troubleshoot and resolve issues with Azure VM extensions that arise with Azure Arc-enabled servers. Last updated 07/16/2021-+ # Troubleshoot Azure Arc-enabled servers VM extension issues
azure-cache-for-redis Cache Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-whats-new.md
You are able to manually trigger an upgrade to the latest version of Redis softw
| **Tier** | Basic, Standard, Premium | Enterprise, Enterprise Flash | |:--|:-:|:-:|
-| **Lastest Redis Version** | Redis 6.0 (GA) | Redis 6.2 (GA) / Redis 7.2 (Preview)|
+| **Latest Redis Version** | Redis 6.0 (GA) | Redis 6.0 (GA) / Redis 7.2 (Preview)|
| **Upgrade Policy** | Manual upgrade to newer version | Automatic upgrade to latest GA version | ### Enterprise tier E1 (preview) SKU
azure-maps Creator Facility Ontology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-facility-ontology.md
Title: Facility Ontology in Microsoft Azure Maps Creator description: Facility Ontology that describes the feature class definitions for Azure Maps Creator--++ Last updated 02/17/2023
azure-maps Creator Geographic Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-geographic-scope.md
Title: Azure Maps Creator service geographic scope description: Learn about Azure Maps Creator service's geographic mappings in Azure Maps--++ Last updated 05/18/2021
azure-maps Creator Indoor Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-indoor-maps.md
Title: Work with indoor maps in Azure Maps Creator description: This article introduces concepts that apply to Azure Maps Creator services--++ Last updated 04/01/2022
azure-maps Creator Long Running Operation V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-long-running-operation-v2.md
Title: Azure Maps long-running operation API V2 description: Learn about long-running asynchronous V2 background processing in Azure Maps--++ Last updated 05/18/2021
azure-maps Creator Long Running Operation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-long-running-operation.md
Title: Azure Maps Long-Running Operation API description: Learn about long-running asynchronous background processing in Azure Maps--++ Last updated 12/07/2020
azure-maps Creator Onboarding Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-onboarding-tool.md
Title: Create indoor map with onboarding tool description: This article describes how to create an indoor map using the onboarding tool--++ Last updated 08/15/2023
azure-maps Creator Qgis Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-qgis-plugin.md
Title: Work with datasets using the QGIS plugin description: How to view and edit indoor map data using the Azure Maps QGIS plugin--++ Last updated 06/14/2023
azure-maps Drawing Conversion Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-conversion-error-codes.md
Title: Azure Maps Drawing Conversion errors and warnings description: Learn about the Conversion errors and warnings you may meet while you're using the Azure Maps Conversion service. Read the recommendations on how to resolve the errors and the warnings, with some examples.--++ Last updated 05/21/2021
azure-maps Drawing Error Visualizer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-error-visualizer.md
Title: Use Azure Maps Drawing Error Visualizer description: This article demonstrates how to visualize warnings and errors returned by the Creator Conversion API.--++ Last updated 02/17/2023
azure-maps Drawing Package Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-package-guide.md
Title: Drawing package guide for Microsoft Azure Maps Creator description: Learn how to prepare a drawing package for the Azure Maps Conversion service--++ Last updated 03/21/2023
azure-maps Drawing Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-requirements.md
Title: Drawing package requirements in Microsoft Azure Maps Creator description: Learn about the drawing package requirements to convert your facility design files to map data--++ Last updated 03/21/2023
azure-maps How To Create Custom Styles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-create-custom-styles.md
Title: Create custom styles for indoor maps description: Learn how to use Maputnik with Azure Maps Creator to create custom styles for your indoor maps.--++ Last updated 9/23/2022
azure-maps How To Creator Wayfinding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-creator-wayfinding.md
Title: Indoor Maps wayfinding service description: How to use the wayfinding service to plot and display routes for indoor maps in Microsoft Azure Maps Creator--++ Last updated 10/25/2022
azure-maps How To Creator Wfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-creator-wfs.md
Title: Query datasets using the Web Feature Service description: How to Query datasets with Web Feature Service (WFS) --++ Last updated 03/03/2023
azure-maps How To Dataset Geojson https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dataset-geojson.md
Title: How to create a dataset using a GeoJson package description: Learn how to create a dataset using a GeoJson package.--++ Last updated 11/01/2021
azure-maps How To Manage Creator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-manage-creator.md
Title: Manage Microsoft Azure Maps Creator description: This article demonstrates how to manage Microsoft Azure Maps Creator.--++ Last updated 01/20/2022
azure-maps How To Request Weather Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-request-weather-data.md
Title: Request real-time and forecasted weather data using Azure Maps Weather services description: Learn how to request real-time (current) and forecasted (minute, hourly, daily) weather data using Microsoft Azure Maps Weather services --++ Last updated 10/28/2021
azure-maps How To Search For Address https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-search-for-address.md
Title: Search for a location using Azure Maps Search services description: Learn about the Azure Maps Search service. See how to use this set of APIs for geocoding, reverse geocoding, fuzzy searches, and reverse cross street searches.--++ Last updated 10/28/2021
azure-maps How To Use Best Practices For Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-best-practices-for-routing.md
Title: Best practices for Azure Maps Route service in Microsoft Azure Maps description: Learn how to route vehicles by using Route service from Microsoft Azure Maps.--++ Last updated 10/28/2021
azure-maps How To Use Best Practices For Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-best-practices-for-search.md
Title: Best practices for Azure Maps Search service description: Learn how to apply the best practices when using the Search service from Microsoft Azure Maps.--++ Last updated 10/28/2021
azure-maps How To Use Indoor Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-indoor-module.md
Title: Use the Azure Maps Indoor Maps module with Microsoft Creator services with custom styles (preview) description: Learn how to use the Microsoft Azure Maps Indoor Maps module to render maps by embedding the module's JavaScript libraries.--++ Last updated 06/28/2023
azure-maps Render Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/render-coverage.md
Title: Render coverage description: Render coverage tables list the countries/regions that support Azure Maps road tiles.--++ Last updated 09/21/2023
azure-maps Rest Api Creator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/rest-api-creator.md
Title: Links to the Azure Maps Creator Rest API description: Links to the Azure Maps Creator Rest API--++ Last updated 02/05/2024
azure-maps Routing Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/routing-coverage.md
Title: Routing coverage description: Learn what level of coverage Azure Maps provides in various regions for routing, routing with traffic, and truck routing. --++ Last updated 10/21/2022
azure-maps Spatial Io Supported Data Format Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-supported-data-format-details.md
 Title: Supported data format details | Microsoft Azure Maps description: Learn how delimited spatial data is parsed in the spatial IO module.--++ Last updated 10/28/2021
azure-maps Traffic Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/traffic-coverage.md
Title: Traffic coverage description: Learn about traffic coverage in Azure Maps. See whether information on traffic flow and incidents is available in various regions throughout the world.--++ Last updated 03/24/2022
azure-maps Tutorial Ev Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-ev-routing.md
Title: 'Tutorial: Route electric vehicles by using Azure Notebooks (Python) with Microsoft Azure Maps' description: Tutorial on how to route electric vehicles by using Microsoft Azure Maps routing APIs and Azure Notebooks--++ Last updated 04/26/2021
azure-maps Weather Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/weather-coverage.md
Title: Microsoft Azure Maps Weather services coverage description: Learn about Microsoft Azure Maps Weather services coverage--++ Last updated 11/08/2022
azure-maps Weather Service Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/weather-service-tutorial.md
Title: 'Tutorial: Join sensor data with weather forecast data by using Azure Notebooks(Python)' description: Tutorial on how to join sensor data with weather forecast data from Microsoft Azure Maps Weather services using Azure Notebooks(Python).--++ Last updated 10/28/2021
azure-maps Weather Services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/weather-services-concepts.md
Title: Weather services concepts in Microsoft Azure Maps description: Learn about the concepts that apply to Microsoft Azure Maps Weather services.--++ Last updated 09/10/2020
azure-monitor Azure Monitor Agent Network Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-network-configuration.md
The following table provides the endpoints that firewalls need to provide access
|`<log-analytics-workspace-id>`.ods.opinsights.azure.com |Ingest logs data | 1234a123-aa1a-123a-aaa1-a1a345aa6789.ods.opinsights.azure.com | management.azure.com | Only needed if sending time series data (metrics) to Azure Monitor [Custom metrics](../essentials/metrics-custom-overview.md) database | - | | `<virtual-machine-region-name>`.monitoring.azure.com | Only needed if sending time series data (metrics) to Azure Monitor [Custom metrics](../essentials/metrics-custom-overview.md) database | westus2.monitoring.azure.com |-
+| `<data-collection-endpoint>.<virtual-machine-region-name>`.ingest.monitor.azure.com | Only needed if sending data to Log Analytics [custom logs](./data-collection-text-log.md) table | 275test-01li.eastus2euap-1.canary.ingest.monitor.azure.com |
Replace the suffix in the endpoints with the suffix in the following table for different clouds.
azure-monitor Data Sources Firewall Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-firewall-logs.md
+
+ Title: Collect Firewall logs with Azure Monitor Agent
+description: Configure collection of Windows Firewall logs on virtual machines with Azure Monitor Agent.
+ Last updated : 6/1/2023++++++
+# Collect firewall logs with Azure Monitor Agent (Preview)
+Windows Firewall is a Microsoft Windows application that filters information coming to your system from the Internet and blocks potentially harmful programs. Windows Firewall logs are generated on both client and server operating systems. These logs provide valuable information about network traffic, including dropped packets and successful connections. Parsing Windows Firewall log files can be done using methods like Windows Event Forwarding (WEF) or forwarding logs to a SIEM product like Azure Sentinel. You can turn Windows Firewall on or off by following these steps on any Windows system:
+1. Select Start, then open Settings.
+1. Under Update & Security, select Windows Security, Firewall & network protection.
+1. Select a network profile: domain, private, or public.
+1. Under Microsoft Defender Firewall, switch the setting to On or Off.
+
+## Prerequisites
+To complete this procedure, you need:
+- A Log Analytics workspace where you have at least [contributor rights](../logs/manage-access.md#azure-rbac).
+- A [data collection endpoint](../essentials/data-collection-endpoint-overview.md#create-a-data-collection-endpoint).
+- [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-overview.md) in the workspace.
+- A Virtual Machine, Virtual Machine Scale Set, or Arc-enabled on-premises machine that is running Windows Firewall.
+
+## Add Firewall table to Log Analytics Workspace
+Unlike other tables that are created by default in a Log Analytics workspace, the Windows Firewall table must be created manually. Search for the Security and Audit solution and create it, as shown in the following screenshot. If the table isn't present, you'll get a DCR deployment error stating that the table isn't present in the Log Analytics workspace. The schema for the firewall table that gets created is described in [Windows Firewall Schema](/azure/azure-monitor/reference/tables/windowsfirewall).
+
+[ ![Screenshot that shows how to add the security and audit solution.](media/data-collection-firewall-log/security-and-audit-solution.png) ](./media/data-collection-firewall-log/security-and-audit-solution.png#lightbox)
+
+## Create a data collection rule to collect firewall logs
+The [data collection rule](../essentials/data-collection-rule-overview.md) defines:
+- Which source log files Azure Monitor Agent scans for new events.
+- How Azure Monitor transforms events during ingestion.
+- The destination Log Analytics workspace and table to which Azure Monitor sends the data.
+
+You can define a data collection rule to send data from multiple machines to multiple Log Analytics workspaces, including workspaces in a different region or tenant. Create the data collection rule in the *same region* as your Log Analytics workspace.
+
+> [!NOTE]
+> To send data across tenants, you must first enable [Azure Lighthouse](../../lighthouse/overview.md).
+
+To create the data collection rule in the Azure portal:
+1. On the **Monitor** menu, select **Data Collection Rules**.
+1. Select **Create** to create a new data collection rule and associations.
+
+ [ ![Screenshot that shows the Create button on the Data Collection Rules screen.](media/data-collection-firewall-log/data-collection-rules-updated.png) ](media/data-collection-firewall-log/data-collection-rules-updated.png#lightbox)
+
+1. Enter a **Rule name** and specify a **Subscription**, **Resource Group**, **Region**, and **Platform Type**:
+ - **Region** specifies where the DCR will be created. The virtual machines and their associations can be in any subscription or resource group in the tenant.
+ - **Platform Type** specifies the type of resources this rule can apply to. The **Custom** option allows for both Windows and Linux types.
+ - **Data collection endpoint**: Select a previously created [data collection endpoint](../essentials/data-collection-endpoint-overview.md).
+
+ [ ![Screenshot that shows the Basics tab of the Data Collection Rule screen.](media/data-collection-firewall-log/data-collection-rule-basics-updated.png) ](media/data-collection-firewall-log/data-collection-rule-basics-updated.png#lightbox)
+1. On the **Resources** tab: Select **+ Add resources** and associate resources with the data collection rule. Resources can be Virtual Machines, Virtual Machine Scale Sets, and Azure Arc for servers. The Azure portal installs Azure Monitor Agent on resources that don't already have it installed.
+
+> [!IMPORTANT]
+> The portal enables system-assigned managed identity on the target resources, along with existing user-assigned
+> identities, if there are any. For existing applications, unless you specify the user-assigned identity in the
+> request, the machine defaults to using system-assigned identity instead. If you need network isolation using private
+> links, select existing endpoints from the same region for the respective resources or [create a new endpoint](../essentials/data-collection-endpoint-overview.md).
+
+1. On the **Collect and deliver** tab, select **Add data source** to add a data source and set a destination.
+1. Select **Firewall Logs**.
+
+ [ ![Screenshot that shows the Azure portal form to select firewall logs in a data collection rule.](media/data-collection-firewall-log/firewall-data-collection-rule.png)](media/data-collection-firewall-log/firewall-data-collection-rule.png#lightbox)
+
+1. On the **Destination** tab, add one or more destinations for the data source. You can select multiple destinations of the same or different types. For instance, you can select multiple Log Analytics workspaces, which is also known as multihoming.
+
+ [ ![Screenshot that shows the Azure portal form to add a data source in a data collection rule.](media/data-collection-firewall-log/data-collection-rule-destination.png) ](media/data-collection-firewall-log/data-collection-rule-destination.png#lightbox)
+
+1. Select **Review + create** to review the details of the data collection rule and association with the set of virtual machines.
+1. Select **Create** to create the data collection rule.
+
+> [!NOTE]
+> It can take up to 5 minutes for data to be sent to the destinations after you create the data collection rule.
++
+### Sample log queries
+
+Return a sample of 10 entries from the collected firewall logs.
+
+```kusto
+WindowsFirewall
+| take 10
+```
+
+[ ![Screenshot that shows the results of a Firewall log query.](media/data-collection-firewall-log/law-query-results.png) ](media/data-collection-firewall-log/law-query-results.png#lightbox)
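+
+You can extend this query to summarize the collected entries. The following sketch counts entries per computer and action over the last day, assuming the `Computer` and `FirewallAction` columns from the [Windows Firewall Schema](/azure/azure-monitor/reference/tables/windowsfirewall).
+
+```kusto
+// Count firewall log entries per computer and action over the last day (sketch)
+WindowsFirewall
+| where TimeGenerated > ago(1d)
+| summarize EntryCount = count() by Computer, FirewallAction
+| order by EntryCount desc
+```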
+
+## Troubleshoot
+Use the following steps to troubleshoot the collection of firewall logs.
+
+### Run Azure Monitor Agent troubleshooter
+To test your configuration and share logs with Microsoft, [use the Azure Monitor Agent troubleshooter](use-azure-monitor-agent-troubleshooter.md).
+
+### Check if any firewall logs have been received
+Start by checking whether any records have been collected for your firewall logs by running the following query in Log Analytics. If the query doesn't return records, check the other sections for possible causes. This query looks for entries in the last two days, but you can modify it for another time range.
+
+``` kusto
+WindowsFirewall
+| where TimeGenerated > ago(48h)
+| order by TimeGenerated desc
+```
+
+### Verify that firewall logs are being created
+Check the timestamps of the log files and open the most recent one to confirm that it contains current entries. The default location for firewall log files is `C:\windows\system32\logfiles\firewall\pfirewall.log`.
+
+[ ![Screenshot that shows firewall logs on a local disk.](media/data-collection-firewall-log/firewall-files-on-disk.png) ](media/data-collection-firewall-log/firewall-files-on-disk.png#lightbox)
+
+To turn on logging, follow these steps:
+1. Open the Local Group Policy Editor (gpedit) and enable firewall logging, as shown in the following screenshot.
+2. `netsh advfirewall set allprofiles logging allowedconnections enable`
+3. `netsh advfirewall set allprofiles logging droppedconnections enable`
+
+[ ![Screenshot that show all the steps to turn on logging.](media/data-collection-firewall-log/turn-on-firewall-logging.png) ](media/data-collection-firewall-log/turn-on-firewall-logging.png#lightbox)
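+
+To confirm that logging is enabled for each profile, you can run the following command from an elevated command prompt (a quick check; the Logging section of the output should show the allowed and dropped connection settings as enabled):
+
+```cmd
+:: Sketch: display the logging settings for all firewall profiles
+netsh advfirewall show allprofiles
+```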
+
+## Next steps
+Learn more about:
+- [Azure Monitor Agent](azure-monitor-agent-overview.md).
+- [Data collection rules](../essentials/data-collection-rule-overview.md).
+- [Data collection endpoints](../essentials/data-collection-endpoint-overview.md).
azure-monitor Prometheus Metrics Scrape Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-scrape-configuration.md
relabelings:
targetLabel: instance ```
+> [!NOTE]
+> If you use relabeling configurations, ensure that the relabeling doesn't filter out the targets and that the configured labels correctly match the targets.
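+
+For example, the following sketch keeps only pods that carry a specific label. The label name and value are illustrative; a `regex` that matches nothing would silently drop every target in the job.
+
+```yaml
+relabelings:
+  # Keep only targets whose pod label "app" equals "my-app" (illustrative values)
+  - sourceLabels: [__meta_kubernetes_pod_label_app]
+    action: keep
+    regex: my-app
+```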
+ ### Metric Relabelings Metric relabelings are applied after scraping and before ingestion. Use the `metricRelabelings` section to filter metrics after scraping. The following examples show how to do so.
azure-monitor Prometheus Metrics Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-troubleshoot.md
Replica pod scrapes metrics from `kube-state-metrics`, custom scrape targets in
If you encounter an error while you attempt to enable monitoring for your AKS cluster, follow the instructions [here](https://github.com/Azure/prometheus-collector/tree/main/internal/scripts/troubleshoot) to run the troubleshooting script. This script performs a basic diagnosis of any configuration issues on your cluster, and you can attach the generated files when creating a support request for faster resolution of your support case.
+## Missing metrics
+ ## Metrics Throttling In the Azure portal, navigate to your Azure Monitor Workspace. Go to `Metrics` and verify that the metrics `Active Time Series % Utilization` and `Events Per Minute Ingested % Utilization` are below 100%.
kubectl describe pod <ama-metrics pod name> -n kube-system
If the pods are running as expected, the next place to check is the container logs.
+## Check for relabeling configs
+
+If metrics are missing, also check whether you have relabeling configs. If you do, ensure that the relabeling doesn't filter out the targets and that the configured labels correctly match the targets. For more information, see the [Prometheus relabel config documentation](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config).
+ ## Container logs View the container logs with the following command:
Go to `127.0.0.1:9091/metrics` in a browser to see if the metrics were scraped b
## Metric names, label names & label values
-Agent based scraping currently has the limitations in the following table:
+Metrics scraping currently has the limitations shown in the following table:
| Property | Limit | |:|:|
If you see metrics missed, you can first check if the ingestion limits are being
- Events Per Minute Ingested Limit - The maximum number of events per minute that can be ingested before getting throttled - Events Per Minute Ingested % Utilization - The percentage of current metric ingestion rate limit being util
-To avoid metrics ingestion throttling, you can monitor and set up an alert on the ingestion limits. See [Monitor ingestion limits](../essentials/prometheus-metrics-overview.md#how-can-i-monitor-the-service-limits-and-quota).
+To avoid metrics ingestion throttling, you can **monitor and set up an alert on the ingestion limits**. See [Monitor ingestion limits](../essentials/prometheus-metrics-overview.md#how-can-i-monitor-the-service-limits-and-quota).
Refer to [service quotas and limits](../service-limits.md#prometheus-metrics) for default quotas and also to understand what can be increased based on your usage. You can request a quota increase for Azure Monitor workspaces by using the `Support Request` menu for the Azure Monitor workspace. Ensure you include the ID, internal ID, and location/region for the Azure Monitor workspace in the support request, which you can find in the `Properties` menu for the Azure Monitor workspace in the Azure portal.
azure-netapp-files Application Volume Group Add Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/application-volume-group-add-hosts.md
Previously updated : 11/19/2021 Last updated : 06/18/2024 # Add hosts to a multiple-host SAP HANA system using application volume group for SAP HANA
azure-netapp-files Application Volume Group Add Volume Secondary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/application-volume-group-add-volume-secondary.md
Previously updated : 11/19/2021 Last updated : 06/18/2024 # Add volumes for an SAP HANA system as a secondary database in HSR
azure-netapp-files Application Volume Group Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/application-volume-group-concept.md
Previously updated : 04/16/2024 Last updated : 06/18/2024
azure-netapp-files Application Volume Group Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/application-volume-group-considerations.md
Previously updated : 11/08/2023 Last updated : 06/18/2024 # Requirements and considerations for application volume group for SAP HANA
azure-netapp-files Application Volume Group Deploy First Host https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/application-volume-group-deploy-first-host.md
Previously updated : 10/13/2022 Last updated : 06/18/2024 # Deploy the first SAP HANA host using application volume group for SAP HANA
azure-netapp-files Application Volume Group Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/application-volume-group-disaster-recovery.md
Previously updated : 08/22/2022 Last updated : 06/18/2024 # Add volumes for an SAP HANA system as a DR system using cross-region replication
azure-netapp-files Application Volume Group Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/application-volume-group-introduction.md
Previously updated : 02/24/2023 Last updated : 06/18/2024 # Understand Azure NetApp Files application volume group for SAP HANA
azure-netapp-files Application Volume Group Manage Volumes Oracle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/application-volume-group-manage-volumes-oracle.md
na Previously updated : 10/20/2023 Last updated : 04/19/2024 # Manage volumes in an application volume group for Oracle
azure-netapp-files Application Volume Group Oracle Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/application-volume-group-oracle-considerations.md
na Previously updated : 10/20/2023 Last updated : 04/19/2024 # Requirements and considerations for application volume group for Oracle
azure-netapp-files Application Volume Group Oracle Deploy Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/application-volume-group-oracle-deploy-volumes.md
na Previously updated : 10/20/2022 Last updated : 04/19/2024 # Deploy application volume group for Oracle
azure-netapp-files Application Volume Group Oracle Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/application-volume-group-oracle-introduction.md
na Previously updated : 10/20/2023 Last updated : 04/19/2024 # Understand Azure NetApp Files application volume group for Oracle
azure-netapp-files Azure Government https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-government.md
Previously updated : 11/02/2023 Last updated : 07/17/2024
azure-netapp-files Azure Netapp Files Resize Capacity Pools Or Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resize-capacity-pools-or-volumes.md
Previously updated : 02/21/2023 Last updated : 05/20/2024 # Resize a capacity pool or a volume
azure-netapp-files Azure Netapp Files Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resource-limits.md
Previously updated : 09/29/2023 Last updated : 07/18/2024 # Resource limits for Azure NetApp Files
azure-netapp-files Azure Netapp Files Set Up Capacity Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-set-up-capacity-pool.md
Previously updated : 10/23/2023 Last updated : 05/20/2024 # Create a capacity pool for Azure NetApp Files
azure-netapp-files Azure Netapp Files Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-videos.md
Previously updated : 12/07/2023 Last updated : 02/01/2024 # Azure NetApp Files videos
azure-netapp-files Backup Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-introduction.md
Previously updated : 09/29/2023 Last updated : 06/06/2024
azure-netapp-files Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-customer-managed-keys.md
Previously updated : 10/02/2023 Last updated : 06/26/2024
azure-netapp-files Cool Access Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cool-access-introduction.md
Previously updated : 11/01/2023 Last updated : 06/06/2024
azure-netapp-files Create Cross Zone Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-cross-zone-replication.md
Previously updated : 01/04/2023 Last updated : 06/06/2024 # Create cross-zone replication relationships for Azure NetApp Files
azure-netapp-files Cross Region Replication Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-introduction.md
Previously updated : 05/08/2023 Last updated : 06/06/2024
azure-netapp-files Double Encryption At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/double-encryption-at-rest.md
Previously updated : 08/28/2023 Last updated : 05/29/2024
azure-netapp-files Faq Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-networking.md
Previously updated : 11/08/2021 Last updated : 05/22/2024 # Networking FAQs for Azure NetApp Files
azure-netapp-files Faq Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-security.md
Previously updated : 02/21/2023 Last updated : 06/15/2024 # Security FAQs for Azure NetApp Files
azure-netapp-files Manage Availability Zone Volume Placement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/manage-availability-zone-volume-placement.md
Previously updated : 01/13/2023 Last updated : 05/22/2024 # Manage availability zone volume placement for Azure NetApp Files
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
Previously updated : 11/27/2023 Last updated : 07/19/2024
azure-resource-manager Bicep Config Linter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config-linter.md
Title: Linter settings for Bicep config
description: Describes how to customize configuration values for the Bicep linter Previously updated : 05/06/2024 Last updated : 07/19/2024 # Add linter settings in the Bicep config file
The following example shows the rules that are available for configuration.
"use-resource-symbol-reference": { "level": "warning" },
+ "use-safe-access": {
+ "level": "warning"
+ },
"use-secure-value-for-secure-inputs": { "level": "error" },
azure-resource-manager Linter Rule Use Safe Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-use-safe-access.md
+
+ Title: Linter rule - Use the safe access (.?) operator
+description: Use the safe access (.?) operator instead of checking object contents with the 'contains' function.
++ Last updated : 07/19/2024++
+# Linter rule - use the safe access operator
+
+This rule looks for the use of the [`contains()`](./bicep-functions-object.md#contains) function for checking property existence before access and provides a simpler automatic replacement. It serves to recommend and introduce users to a simplified equivalent syntax without introducing any functional code changes. For more information, see [Safe dereference operator](./operator-safe-dereference.md).
+
+The specific patterns it's looking for are:
+
+- Ternary operator to check for property access:
+
+ ```bicep
+ contains(<object>, '<property>') ? <object>.<property> : <default-value>
+ ```
+
+ The following replacement is suggested:
+
+ ```bicep
+ <object>.?<property> ?? <default-value>
+ ```
+
+- Ternary operator to check for variable-named property access:
+
+ ```bicep
+ contains(<object>, <property-name>) ? foo[<property-name>] : <default-value>
+ ```
+
+ The following replacement is suggested:
+
+ ```bicep
+ <object>[?<property-name>] ?? <default-value>
+ ```
+
+## Linter rule code
+
+To customize rule settings, use the following value in the [Bicep configuration file](./bicep-config-linter.md):
+
+`use-safe-access`
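+
+For example, a minimal `bicepconfig.json` sketch that raises the rule to an error (the level value is illustrative):
+
+```json
+{
+  "analyzers": {
+    "core": {
+      "rules": {
+        "use-safe-access": {
+          "level": "error"
+        }
+      }
+    }
+  }
+}
+```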
+
+## Solution
+
+Accept the editor code action to automatically perform the refactor.
+
+## Examples
+
+### Named Property Access
+
+The following example triggers the rule:
+
+```bicep
+param foo object
+var test = contains(foo, 'bar') ? foo.bar : 'baz'
+```
+
+Accepting the code action results in the following Bicep:
+
+```bicep
+param foo object
+var test = foo.?bar ?? 'baz'
+```
+
+### Variable Property Access
+
+The following example triggers the rule:
+
+```bicep
+param foo object
+param target string
+var test = contains(foo, target) ? foo[target] : 'baz'
+```
+
+Accepting the code action results in the following Bicep:
+
+```bicep
+param foo object
+param target string
+var test = foo[?target] ?? 'baz'
+```
+
+### Non-issues
+
+The following examples don't trigger the rule:
+
+Difference between the property being checked and accessed:
+
+```bicep
+param foo object
+var test = contains(foo, 'bar') ? foo.baz : 'baz'
+```
+
+Difference between the variable property being checked and accessed:
+
+```bicep
+param foo object
+param target string
+param notTarget string
+var test = contains(foo, target) ? foo[notTarget] : 'baz'
+```
+
+## Next steps
+
+For more information about the linter, see [Use Bicep linter](./linter.md).
azure-resource-manager Linter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter.md
Title: Use Bicep linter
description: Learn how to use Bicep linter. Previously updated : 05/06/2024 Last updated : 07/19/2024 # Use Bicep linter
The default set of linter rules is minimal and taken from [arm-ttk test cases](.
- [use-recent-api-versions](./linter-rule-use-recent-api-versions.md) - [use-resource-id-functions](./linter-rule-use-resource-id-functions.md) - [use-resource-symbol-reference](./linter-rule-use-resource-symbol-reference.md)
+- [use-safe-access](./linter-rule-use-safe-access.md)
- [use-secure-value-for-secure-inputs](./linter-rule-use-secure-value-for-secure-inputs.md) - [use-stable-resource-identifiers](./linter-rule-use-stable-resource-identifier.md) - [use-stable-vm-image](./linter-rule-use-stable-vm-image.md)
backup Backup Azure Private Endpoints Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-private-endpoints-concept.md
Title: Private endpoints for Azure Backup - Overview
description: This article explains about the concept of private endpoints for Azure Backup that helps to perform backups while maintaining the security of your resources. Previously updated : 06/14/2024 Last updated : 07/19/2024
This article describes how the [enhanced capabilities of private endpoints](#key
- You need to re-register the Recovery Services resource provider with the subscription, if you've registered it before *May 1, 2020*. To re-register the provider, go to *your subscription* in the Azure portal > **Resource provider**, and then select **Microsoft.RecoveryServices** > **Re-register**. -- [Cross-region restore](backup-create-rs-vault.md#set-cross-region-restore) for SQL and SAP HANA database backups aren't supported, if the vault has private endpoints enabled.- - You can create DNS across subscriptions. - You can create a secondary private endpoint before or after having protected items in the vault. Learn [how to do Cross Region Restore to a private endpoint enabled vault](backup-azure-private-endpoints-configure-manage.md#cross-region-restore-to-a-private-endpoint-enabled-vault).
The following diagram shows how the name resolution works for storage accounts u
:::image type="content" source="./media/private-endpoints-overview/name-resolution-works-for-storage-accounts-using-private-dns-zone-inline.png" alt-text="Diagram showing how the name resolution works for storage accounts using a private DNS zone." lightbox="./media/private-endpoints-overview/name-resolution-works-for-storage-accounts-using-private-dns-zone-expanded.png":::
+The following diagram shows how you can perform Cross Region Restore over a private endpoint by replicating the private endpoint in a secondary region. Learn [how to do Cross Region Restore to a private endpoint enabled vault](backup-azure-private-endpoints-configure-manage.md#cross-region-restore-to-a-private-endpoint-enabled-vault).
++ ## Next steps - Learn [how to configure and manage private endpoints for Azure Backup](backup-azure-private-endpoints-configure-manage.md).
backup Backup Azure Private Endpoints Configure Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-private-endpoints-configure-manage.md
Title: How to create and manage private endpoints (with v2 experience) for Azure
description: This article explains how to configure and manage private endpoints for Azure Backup. Previously updated : 06/14/2024 Last updated : 07/19/2024
Follow these steps:
:::image type="content" source="./media/backup-azure-private-endpoints/deny-public-network.png" alt-text="Screenshot showing how to select the Deny option."::: >[!Note]
- >- Once you deny access, you can still access the vault, but you can't move data to/from networks that don't contain private endpoints. For more information, see [Create private endpoints for Azure Backup](#create-private-endpoints-for-azure-backup).
- >- Denial of public access is currently not supported for vaults that have *Cross Region Restore* enabled.
+ >Once you deny access, you can still access the vault, but you can't move data to/from networks that don't contain private endpoints. For more information, see [Create private endpoints for Azure Backup](#create-private-endpoints-for-azure-backup).
+
3. Select **Apply** to save the changes.
baremetal-infrastructure About Nc2 On Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/about-nc2-on-azure.md
description: Learn about Nutanix Cloud Clusters on Azure and the benefits it off
Previously updated : 05/21/2024 Last updated : 7/19/2024 # About Nutanix Cloud Clusters on Azure
-The articles in this section are intended for professionals interested in using Nutanix Cloud Clusters (NC2) on Azure.
+In this article, we'll give an overview of the features BareMetal Infrastructure offers for Nutanix workloads.
-Email [NC2-on-Azure Docs](mailto:AzNutanixPM@microsoft.com) to provide input.
+Nutanix Cloud Clusters (NC2) on Microsoft Azure provides a hybrid cloud solution that operates as a single cloud, allowing you to manage applications and infrastructure in your private cloud and Azure. With NC2 running on Azure, you can seamlessly move your applications between on-premises and Azure using a single management console. With NC2 on Azure, you can use your existing Azure accounts and networking setup (VPN, VNets, and Subnets), eliminating the need to manage any complex network overlays. With this hybrid offering, you use the same Nutanix software and licenses across your on-premises cluster and Azure to optimize your IT investment efficiently.
+
+You use the NC2 console to create a cluster, update the cluster capacity (the number of nodes), and delete a Nutanix cluster. After you create a Nutanix cluster in Azure using NC2, you can operate the cluster in the same manner as you operate your on-premises Nutanix cluster with minor changes in the Nutanix command-line interface (nCLI), Prism Element and Prism Central web consoles, and APIs.
:::image type="content" source="media/nc2-on-azure.png" alt-text="Illustration of NC2 on Azure features." border="false" lightbox="media/nc2-on-azure.png":::
-In particular, this article highlights NC2 features.
+## Operating system and hypervisor
+
+NC2 runs Nutanix Acropolis Operating System (AOS) and Nutanix Acropolis Hypervisor (AHV).
+
+- AHV hypervisor is based upon open source Kernel-based Virtual Machine (KVM).
+- AHV will determine the lowest processor generation in the cluster and constrain all Quick Emulator (QEMU) domains to that level.
+
+This functionality allows mixing of processor generations within an AHV cluster and ensures the ability to live-migrate between hosts.
+
+AOS abstracts kvm, virsh, qemu, libvirt, and iSCSI from the end user and handles all backend configuration. As a result, users can manage everything they need through Prism without being concerned with low-level management.
+
+## SKUs
+
+We offer two SKUs: AN36 and AN36P. The following table presents component options for each available SKU.
+
+| Component |Ready Node for Nutanix AN36|Ready Node for Nutanix AN36P|
+| :- | -: |::|
+|Core|Intel 6140, 36 Core, 2.3 GHz|Intel 6240, 36 Core, 2.6 GHz|
+|vCPUs|72|72|
+|RAM|576 GB|768 GB|
+|Storage|18.56 TB (8 x 1.92 TB SATA SSD, 2 x 1.6 TB NVMe)|20.7 TB (2 x 750 GB Optane, 6 x 3.2 TB NVMe)|
+|Network (available bandwidth between nodes)|25 Gbps|25 Gbps|
+
+## Licensing
+
+You can bring your own on-premises capacity-based Nutanix licenses (CBLs).
+Alternatively, you can purchase licenses from Nutanix or from Azure Marketplace.
+
+## Supported protocols
+
+The following protocols are used for different mount points within BareMetal servers for Nutanix workload.
+
+- OS mount – Internet Small Computer Systems Interface (iSCSI)
+- Data/log – [Network File System version 3 (NFSv3)](/windows-server/storage/nfs/nfs-overview#nfs-version-3-continuous-availability)
+- Backup/archive – [Network File System version 4 (NFSv4)](/windows-server/storage/nfs/nfs-overview#nfs-version-41)
## Unlock the benefits of Azure
In particular, this article highlights NC2 features.
* Modernize through the power of Azure * Adapt quicker with unified data governance and gain immediate insights with transformative analytics to drive innovation.
-### SKUs
-
-We offer two SKUs: AN36 and AN36P. For specifications, see [SKUs](skus.md).
- ### More benefits * Microsoft Azure Consumption Contract (MACC) credits
For any questions on Azure Hybrid Benefits, contact your Microsoft Account Execu
## Responsibility matrix
-On-premises Nutanix environments require the Nutanix customer to support all the hardware and software for running the platform.
-For NC2 on Azure, Microsoft maintains the hardware for the customer.
-For more information, see [NC2 on Azure responsibility matrix](nc2-on-azure-responsibility-matrix.md).
+NC2 on Azure implements a shared responsibility model that defines distinct roles and responsibilities for the three parties involved in the offering: the customer, Microsoft, and Nutanix.
+
+On-premises Nutanix environments require the Nutanix customer to support all the hardware and software for running the platform. For NC2 on Azure, Microsoft maintains the hardware for the customer.
++
+Microsoft manages the Azure BareMetal specialized compute hardware and its data and control plane platform for the underlay network. Microsoft provides support if customers bring their existing Azure subscription, VNet, vWAN, and so on.
+
+Nutanix covers the life-cycle management of Nutanix software (MCM, Prism Central/Element, etc.) and their licenses.
+
+**Monitoring and remediation**
+
+Microsoft continuously monitors the health of the underlay and BareMetal infrastructure. If Microsoft detects a failure, it takes action to repair the failed services.
## Support
Nutanix (for software-related issues) and Microsoft (for infrastructure-related
Learn more: > [!div class="nextstepaction"]
-> [Use cases and supported scenarios](use-cases-and-supported-scenarios.md)
+> [Architecture](architecture.md)
baremetal-infrastructure Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/architecture.md
description: Learn about the architecture of several configurations of BareMetal
Previously updated : 7/17/2024 Last updated : 7/19/2024 # Nutanix Cloud Clusters (NC2) on Azure architectural concepts
A private cloud includes clusters with:
Private clouds are installed and managed within an Azure subscription. The number of private clouds within a subscription is scalable.
-The following diagram describes the architectural components of the Azure VMware Solution.
+The following diagram describes the architectural components of NC2 on Azure.
:::image type="content" source="media/nc2-on-azure-architecture-overview.png" alt-text="Diagram illustrating the NC2 on Azure architectural overview." border="false" lightbox="media/nc2-on-azure-architecture-overview.png":::
Each NC2 on Azure architectural component has the following function:
- Azure ExpressRoute: Provides high-speed private connections between Azure data centers and on-premises or colocation infrastructure. - Azure Virtual WAN (vWAN): Aggregates networking, security, and routing functions together into a single unified Wide Area Network (WAN).
+## Use cases and supported scenarios
+
+Learn about use cases and supported scenarios for NC2 on Azure, including cluster management, disaster recovery, on-demand elasticity, and lift-and-shift.
+
+### Unified management experience - cluster management
+
+It's critical to customers that operations and cluster management be nearly identical to on-premises.
+Customers can update capacity, monitor alerts, replace hosts, monitor usage, and more by combining the respective strengths of Microsoft and Nutanix.
+
+### Disaster recovery
+
+Disaster recovery is critical to cloud functionality.
+A disaster can be any of the following:
+
+- Cyber attack
+- Data breach
+- Equipment failure
+- Natural disaster
+- Data loss
+- Human error
+- Malware and viruses
+- Network and internet blips
+- Hardware and/or software failure
+- Weather catastrophes
+- Flooding
+- Office vandalism
+
+When a disaster strikes, the goal of any DR plan is to ensure operations run as normally as possible.
+While the business will be aware of the crisis, ideally, its customers and end-users shouldn't be affected.
+
+### On-demand elasticity
+
+Scale up and scale out as you like.
+We provide the flexibility so that you don't have to procure hardware yourself: with just a click of a button, you can get additional nodes in the cloud nearly instantly.
+
+### Lift and shift
+
+Move applications to the cloud and modernize your infrastructure.
+Applications move with no changes, allowing for flexible operations and minimum downtime.
+
+## Supported SKUs and instances
+
+The following table presents component options for each available SKU.
+
+| Component |Ready Node for Nutanix AN36|Ready Node for Nutanix AN36P|
+| :- | -: |::|
+|Core|Intel 6140, 36 Core, 2.3 GHz|Intel 6240, 36 Core, 2.6 GHz|
+|vCPUs|72|72|
+|RAM|576 GB|768 GB|
+|Storage|18.56 TB (8 x 1.92 TB SATA SSD, 2 x 1.6 TB NVMe)|20.7 TB (2 x 750 GB Optane, 6 x 3.2 TB NVMe)|
+|Network (available bandwidth between nodes)|25 Gbps|25 Gbps|
+
+Nutanix Clusters on Azure supports:
+
+* Minimum of three bare metal nodes per cluster.
+* Maximum of 28 bare metal nodes per cluster.
+* Only the Nutanix AHV hypervisor on Nutanix clusters running in Azure.
+* Prism Central instance deployed on Nutanix Clusters on Azure to manage the Nutanix clusters in Azure.
+
+## Supported regions
+
+When planning your NC2 on Azure design, use the following table to understand what SKUs are available in each Azure region.
+
+| Azure region | SKU |
+| : | :: |
+| Australia East | AN36P |
+| East US | AN36 |
+| East US 2 | AN36P |
+| Germany West Central | AN36P |
+| Japan East | AN36P |
+| North Central US | AN36P |
+| Southeast Asia | AN36P |
+| UK South | AN36P |
+| West Europe | AN36P |
+| West US 2 | AN36 |
+ ## Deployment example The image in this section shows one example of an NC2 on Azure deployment.
The following table describes whatΓÇÖs supported for each network features confi
Learn more: > [!div class="nextstepaction"]
-> [Requirements](requirements.md)
+> [Getting Started](get-started.md)
baremetal-infrastructure Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/get-started.md
description: Learn how to sign up, set up, and use Nutanix Cloud Clusters on Azu
Previously updated : 05/21/2024 Last updated : 7/19/2024 # Getting started with Nutanix Cloud Clusters on Azure Learn how to sign up for, set up, and use Nutanix Cloud Clusters (NC2) on Azure.
+## Azure account requirements
+
+* An Azure account with a new subscription
+* A Microsoft Entra directory
+
+## My Nutanix account requirements
+
+For more information, see "NC2 on Azure Subscription and Billing" in the [Nutanix Cloud Clusters on Azure Deployment and User Guide](https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Cloud-Clusters-Azure:Nutanix-Cloud-Clusters-Azure).
+
+## Networking requirements
+
+* Connectivity between your on-premises datacenter and Azure. Both ExpressRoute and VPN are supported.
+* After a cluster is created, you'll need Virtual IP addresses for both the on-premises cluster and the cluster running in Azure.
+* Outbound internet access on your Azure portal.
+* Azure Directory Service resolves the FQDN `gateway-external-api.cloud.nutanix.com`.
+
+## Other requirements
+
+* Minimum of three (or more) Azure Nutanix Ready nodes per cluster
+* Only the Nutanix AHV hypervisor on Nutanix clusters running in Azure
+* Prism Central instance deployed on NC2 on Azure to manage the Nutanix clusters in Azure
+ ## Sign up for NC2
-Once you've satisfied the [requirements](requirements.md), go to
-[Nutanix Cloud Clusters
-on Azure Deployment and User Guide](https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Cloud-Clusters-Azure:Nutanix-Cloud-Clusters-Azure) to sign up.
+Go to [Nutanix Cloud Clusters on Azure Deployment and User Guide](https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Cloud-Clusters-Azure:Nutanix-Cloud-Clusters-Azure) to sign up.
To learn about Microsoft BareMetal hardware pricing, and to purchase Nutanix software, go to [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/nutanixinc.nc2_azure?tab=Overview).
on Azure Deployment and User Guide](https://portal.nutanix.com/page/documents/de
Learn more: > [!div class="nextstepaction"]
-> [About NC2 on Azure](about-nc2-on-azure.md)
+> [FAQ](faq.md)
baremetal-infrastructure Nc2 Baremetal Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/nc2-baremetal-overview.md
- Title: What is BareMetal Infrastructure for Nutanix Cloud Clusters on Azure?--
-description: Learn about the features BareMetal Infrastructure offers for NC2 workloads.
--- Previously updated : 05/21/2024--
-# What is BareMetal Infrastructure for Nutanix Cloud Clusters on Azure?
-
-In this article, we'll give an overview of the features BareMetal Infrastructure offers for Nutanix workloads.
-
-Nutanix Cloud Clusters (NC2) on Microsoft Azure provides a hybrid cloud solution that operates as a single cloud, allowing you to manage applications and infrastructure in your private cloud and Azure. With NC2 running on Azure, you can seamlessly move your applications between on-premises and Azure using a single management console. With NC2 on Azure, you can use your existing Azure accounts and networking setup (VPN, VNets, and Subnets), eliminating the need to manage any complex network overlays. With this hybrid offering, you use the same Nutanix software and licenses across your on-premises cluster and Azure to optimize your IT investment efficiently.
-
-You use the NC2 console to create a cluster, update the cluster capacity (the number of nodes), and delete a Nutanix cluster. After you create a Nutanix cluster in Azure using NC2, you can operate the cluster in the same manner as you operate your on-premises Nutanix cluster with minor changes in the Nutanix command-line interface (nCLI), Prism Element and Prism Central web consoles, and APIs.
-
-## Supported protocols
-
-The following protocols are used for different mount points within BareMetal servers for Nutanix workload.
--- OS mount ΓÇô internet small computer systems interface (iSCSI)-- Data/log ΓÇô [Network File System version 3 (NFSv3)](/windows-server/storage/nfs/nfs-overview#nfs-version-3-continuous-availability)-- Backup/archive ΓÇô [Network File System version 4 (NFSv4)](/windows-server/storage/nfs/nfs-overview#nfs-version-41)-
-## Licensing
-
-You can bring your own on-premises capacity-based Nutanix licenses (CBLs).
-Alternatively, you can purchase licenses from Nutanix or from Azure Marketplace.
-
-## Operating system and hypervisor
-
-NC2 runs Nutanix Acropolis Operating System (AOS) and Nutanix Acropolis Hypervisor (AHV).
--- AHV hypervisor is based on open source Kernel-based Virtual Machine (KVM).-- AHV will determine the lowest processor generation in the cluster and constrain all Quick Emulator (QEMU) domains to that level.-
-This functionality allows mixing of processor generations within an AHV cluster and ensures the ability to live-migrate between hosts.
-
-AOS abstracts kvm, virsh, qemu, libvirt, and iSCSI from the end-user and handles all backend configuration.
-Thus users can use Prism to manage everything they would want to manage, while not needing to be concerned with low-level management.
-
-## Next steps
-
-Learn more:
-
-> [!div class="nextstepaction"]
-> [Getting started with NC2 on Azure](get-started.md)
baremetal-infrastructure Nc2 On Azure Responsibility Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/nc2-on-azure-responsibility-matrix.md
- Title: NC2 on Azure responsibility matrix--
-description: Defines who's responsible for what for NC2 on Azure.
--- Previously updated : 7/18/2024--
-# NC2 on Azure responsibility matrix
-
-NC2 on Azure implements a shared responsibility model that defines distinct roles and responsibilities of the three parties involved in the offering: the Customer, Microsoft and Nutanix.
-
-On-premises Nutanix environments require the Nutanix customer to support all the hardware and software for running the platform. For NC2 on Azure, Microsoft maintains the hardware for the customer.
--
-Microsoft manages the Azure BareMetal specialized compute hardware and its data and control plane platform for underlay network. Microsoft supports if the customers plan to bring their existing Azure Subscription, VNet, vWAN, etc.
-
-Nutanix covers the life-cycle management of Nutanix software (MCM, Prism Central/Element, etc.) and their licenses.
-
-**Monitoring and remediation**
-
-Microsoft continuously monitors the health of the underlay and BareMetal infrastructure. If Microsoft detects a failure, it takes action to repair the failed services.
baremetal-infrastructure Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/requirements.md
- Title: Requirements--
-description: Learn what you need to run NC2 on Azure, including Azure, Nutanix, networking, and other requirements.
--- Previously updated : 05/21/2024--
-# Requirements
-
-This article assumes prior knowledge of the Nutanix stack and Azure services to operate significant deployments on Azure.
-The following sections identify the requirements to use Nutanix Clusters on Azure:
-
-## Azure account requirements
-
-* An Azure account with a new subscription
-* A Microsoft Entra directory
-
-## My Nutanix account requirements
-
-For more information, see "NC2 on Azure Subscription and Billing" in [Nutanix Cloud Clusters on Azure Deployment and User Guide]
-(https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Cloud-Clusters-Azure:Nutanix-Cloud-Clusters-Azure).
-
-## Networking requirements
-
-* Connectivity between your on-premises datacenter and Azure. Both ExpressRoute and VPN are supported.
-* After a cluster is created, you'll need Virtual IP addresses for both the on-premises cluster and the cluster running in Azure.
-* Outbound internet access on your Azure portal.
-* Azure Directory Service resolves the FQDN:
-gateway-external-api.cloud.nutanix.com.
-
-## Other requirements
-
-* Minimum of three (or more) Azure Nutanix Ready nodes per cluster
-* Only the Nutanix AHV hypervisor on Nutanix clusters running in Azure
-* Prism Central instance deployed on NC2 on Azure to manage the Nutanix clusters in Azure
-
-## Next steps
-
-Learn more:
-
-> [!div class="nextstepaction"]
-> [Supported instances and regions](supported-instances-and-regions.md)
baremetal-infrastructure Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/skus.md
- Title: SKUs--
-description: Learn about SKU options for NC2 on Azure, including core, RAM, storage, and network.
--- Previously updated : 05/21/2024--
-# SKUs
-
-This article identifies options associated with SKUs available for NC2 on Azure, including core, RAM, storage, and network.
-
-## Options
-
-The following table presents component options for each available SKU.
-
-| Component |Ready Node for Nutanix AN36|Ready Node for Nutanix AN36P|
-| :- | -: |::|
-|Core|Intel 6140, 36 Core, 2.3 GHz|Intel 6240, 36 Core, 2.6 GHz|
-|vCPUs|72|72|
-|RAM|576 GB|768 GB|
-|Storage|18.56 TB (8 x 1.92 TB SATA SSD, 2x1.6TB NVMe)|20.7 TB (2x750 GB Optane, 6x3.2-TB NVMe)|
-|Network (available bandwidth between nodes)|25 Gbps|25 Gbps|
-
-## Next steps
-
-Learn more:
-
-> [!div class="nextstepaction"]
-> [FAQ](faq.md)
baremetal-infrastructure Supported Instances And Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/supported-instances-and-regions.md
- Title: Supported instances and regions--
-description: Learn about instances and regions supported for NC2 on Azure.
--- Previously updated : 05/21/2024--
-# Supported instances and regions
-
-Learn about instances and regions supported for NC2 on Azure.
-
-## Supported instances
-
-Nutanix Clusters on Azure supports:
-
-* Minimum of three bare metal nodes per cluster.
-* Maximum of 28 bare metal nodes per cluster.
-* Only the Nutanix AHV hypervisor on Nutanix clusters running in Azure.
-* Prism Central instance deployed on Nutanix Clusters on Azure to manage the Nutanix clusters in Azure.
-
-## Supported regions
-
-NC2 on Azure supports the following regions using AN36:
-
-* East US
-* West US 2
-
-NC2 on Azure supports the following regions using AN36P:
-
-* North Central US
-* East US 2
-* Southeast Asia
-* Australia East
-* UK South
-* West Europe
-* Germany West Central
-* Japan East
-
-## Next steps
-
-Learn more:
-
-> [!div class="nextstepaction"]
-> [SKUs](skus.md)
baremetal-infrastructure Use Cases And Supported Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/use-cases-and-supported-scenarios.md
- Title: Use cases and supported scenarios--
-description: Learn about use cases and supported scenarios for NC2 on Azure, including cluster management, disaster recovery, on-demand elasticity, and lift-and-shift.
--- Previously updated : 7/17/2024--
-# Use cases and supported scenarios
-
- Learn about use cases and supported scenarios for NC2 on Azure, including cluster management, disaster recovery, on-demand elasticity, and lift-and-shift.
-
-## Unified management experience - cluster management
-
-That operations and cluster management be nearly identical to on-premises is critical to customers.
-Customers can update capacity, monitor alerts, replace hosts, monitor usage, and more by combining the respective strengths of Microsoft and Nutanix.
-
-## Disaster recovery
-
-Disaster recovery is critical to cloud functionality.
-A disaster can be any of the following:
--- Cyber attack-- Data breach-- Equipment failure-- Natural disaster-- Data loss-- Human error-- Malware and viruses-- Network and internet blips-- Hardware and/or software failure-- Weather catastrophes-- Flooding-- Office vandalism-
- ...or anything else that puts your operations at risk.
-
-When a disaster strikes, the goal of any DR plan is to ensure operations run as normally as possible.
-While the business will be aware of the crisis, ideally, its customers and end-users shouldn't be affected.
-
-## On-demand elasticity
-
-Scale up and scale out as you like.
-We provide the flexibility that means you don't have to procure hardware yourself - with just a click of a button you can get additional nodes in the cloud nearly instantly.
-
-## Lift and shift
-
-Move applications to the cloud and modernize your infrastructure.
-Applications move with no changes, allowing for flexible operations and minimum downtime.
-
-> [!div class="nextstepaction"]
-> [Architecture](architecture.md)
chaos-studio Chaos Studio Fault Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-fault-library.md
Agent-based faults are injected into **Azure Virtual Machines** or **Virtual Mac
| Windows<sup>1</sup>, Linux<sup>2</sup> | [Network Disconnect](#network-disconnect) | Network disruption | | Windows<sup>1</sup>, Linux<sup>2</sup> | [Network Latency](#network-latency) | Network performance degradation | | Windows<sup>1</sup>, Linux<sup>2</sup> | [Network Packet Loss](#network-packet-loss) | Network reliability issues |
-| Windows<sup>1</sup>, Linux<sup>2</sup> | [Network Isolation](#network-isolation) | Network disruption |
+| Windows, Linux<sup>2</sup> | [Network Isolation](#network-isolation) | Network disruption |
| Windows | [DNS Failure](#dns-failure) | DNS resolution issues | | Windows | [Network Disconnect (Via Firewall)](#network-disconnect-via-firewall) | Network disruption | | Windows, Linux | [Physical Memory Pressure](#physical-memory-pressure) | Memory capacity loss, resource pressure |
The parameters **destinationFilters** and **inboundDestinationFilters** use the
|-|-| | Capability name | NetworkIsolation-1.0 | | Target type | Microsoft-Agent |
-| Supported OS types | Windows, Linux (outbound traffic only) |
+| Supported OS types | Windows, Linux (outbound only) |
| Description | Fully isolate the virtual machine from network connections by dropping all IP-based inbound (on Windows) and outbound (on Windows and Linux) packets for the specified duration. At the end of the duration, network connections will be re-enabled. Because the agent depends on network traffic, this action cannot be cancelled and will run to the specified duration. | | Prerequisites | **Windows:** The agent must run as administrator, which happens by default if installed as a VM extension. | | | **Linux:** The `tc` (Traffic Control) package is used for network faults. If it isn't already installed, the agent automatically attempts to install it from the default package manager. |
The parameters **destinationFilters** and **inboundDestinationFilters** use the
#### Limitations * Because the agent depends on network traffic, **this action cannot be cancelled** and will run to the specified duration. Use with caution.
-* The agent-based network faults currently only support IPv4 addresses.
-* When running on Windows, the network packet loss fault currently only works with TCP or UDP packets.
-* When running on Linux, this fault only affects **outbound** traffic, not inbound traffic. The fault affects **both inbound and outbound** traffic on Windows environments.
* This fault currently only affects new connections. Existing active connections are unaffected. You can restart the service or process to force connections to break.
+* When running on Linux, this fault can only affect **outbound** traffic, not inbound traffic. The fault can affect **both inbound and outbound** traffic on Windows environments.
### DNS Failure
chaos-studio Chaos Studio Samples Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-samples-rest-api.md
az rest --method get --url "https://management.azure.com/{experimentId}?api-vers
az rest --method put --url "https://management.azure.com/{experimentId}?api-version={apiVersion}" --body @{experimentName.json} ```
+> [!NOTE]
+> If you receive an `UnsupportedMediaType` error, make sure your referenced JSON file is valid, and try other ways to reference the `.json` file. Different command-line interpreters may require different methods of file referencing. Another common syntax is `--body "@experimentName.json"`.
+ ### Delete an experiment ```azurecli
communication-services Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/advanced-messaging/logs.md
+
+ Title: Azure Communication Services Advanced Messaging logs
+
+description: Learn about logging for Azure Communication Services Advanced Messaging.
+++ Last updated : 07/18/2024++++
+# Advanced Messaging logs
+
+Azure Communication Services offers logging capabilities that you can use to monitor and debug your Communication Services solution. These capabilities can be configured for Advanced Messaging through the Azure portal by enabling the diagnostic setting for `Advanced Messaging Logs`.
+
+> [!IMPORTANT]
+> The following refers to logs enabled through [Azure Monitor](../../../azure-monitor/overview.md) (see also [FAQ](../../../azure-monitor/overview.md#frequently-asked-questions)). To enable these logs for your Communications Services, see: [Enable logging in Diagnostic Settings](../analytics/enable-logging.md)
+
+## ACSAdvancedMessagingOperations
+
+To view the table definition, see [Log Analytics table ACSAdvancedMessagingOperations](/azure/azure-monitor/reference/tables/ACSAdvancedMessagingOperations).
communication-services Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/advanced-messaging/metrics.md
+
+ Title: Advanced Messaging metrics definitions for Azure Communication Service
+
+description: This document covers definitions of Advanced Messaging metrics available in the Azure portal.
+++ Last updated : 07/18/2024++++
+# Advanced Messaging metrics overview
+
+Azure Communication Services currently provides metrics for all Communication Services primitives. You can use [Azure Monitor metrics explorer](../../../azure-monitor/essentials/analyze-metrics.md) to:
+
+- Plot your own charts.
+- Investigate abnormalities in your metric values.
+- Understand your API traffic by using the metrics data that Advanced Messaging requests emit.
+
+## Where to find metrics
+
+Primitives in Communication Services emit metrics for API requests. To find these metrics, see the **Metrics** tab under your Communication Services resource. You can also create permanent dashboards by using the workbooks tab under your Communication Services resource.
+
+## Metric definitions
+
+All API request metrics contain three dimensions that you can use to filter your metrics data. These dimensions can be aggregated together by using the `Count` aggregation type. They support all standard Azure Aggregation time series, including `Sum`, `Average`, `Min`, and `Max`.
+
+For more information on supported aggregation types and time series aggregations, see [Azure Monitor Metrics aggregation and display explained](./../../../azure-monitor/essentials/metrics-aggregation-explained.md).
+
+- **Operation**: All operations or routes that can be called on the Azure Communication Services Advanced Messaging gateway.
+- **Status Code**: The status code response sent after the request.
+- **StatusSubClass**: The status code series sent after the response.
+
+For the complete list of all metrics emitted by Azure Communication Services, see [Metrics overview](./../metrics.md) or the reference documentation [Supported metrics for Microsoft.Communication/CommunicationServices](/azure/azure-monitor/reference/supported-metrics/microsoft-communication-communicationservices-metrics).
+
+### Advanced Messaging API requests
+
+The following operations are available on Advanced Messaging API request metrics:
+
+| Operation / Route | Description | Scenario |
+|||-|
+| DownloadMedia | Download media payload request. | Business requested to download media payload. |
+| ListTemplates | List templates request. | Business requested to list templates for a given channel. |
+| ReceiveMessage | Message received. | User sent a message to the business. |
+| SendMessage | Send message notification request. | Business requesting to send a message to the user. |
+| SendMessageDeliveryStatus | Delivery status received. | Business received response about a message that the business requested to send to a user. |
+
communication-services Call Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/call-automation.md
Azure Communication Services Call Automation provides developers the ability to build server-based, intelligent call workflows, and call recording for voice and Public Switched Telephone Network(PSTN) channels. The SDKs, available in C#, Java, JavaScript and Python, use an action-event model to help you build personalized customer interactions. Your communication applications can listen to real-time call events and perform control plane actions (like answer, transfer, play audio, start recording, etc.) to steer and control calls based on your business logic.
-> [!NOTE]
-> Call Automation currently doesn't support [Rooms](../rooms/room-concept.md) calls.
- ## Common use cases Some of the common use cases that can be built using Call Automation include:
The following list presents the set of features that are currently available in
| | Place new outbound call to one or more endpoints | ✔️ | ✔️ | ✔️ | ✔️ | | | Redirect* (forward) a call to one or more endpoints | ✔️ | ✔️ | ✔️ | ✔️ | | | Reject an incoming call | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Connect to an ongoing call or Room | ✔️ | ✔️ | ✔️ | ✔️ |
| Mid-call scenarios | Add one or more endpoints to an existing call | ✔️ | ✔️ | ✔️ | ✔️ | | | Cancel adding an endpoint to an existing call | ✔️ | ✔️ | ✔️ | ✔️ | | | Play Audio from an audio file | ✔️ | ✔️ | ✔️ | ✔️ |
Using the IncomingCall event from Event Grid, a call can be redirected to one or
**Create Call** Create Call action can be used to place outbound calls to phone numbers and to other communication users. Use cases include your application placing outbound calls to proactively inform users about an outage or notify about an order update.
+**Connect Call**
+Connect Call action can be used to connect to an ongoing call and take call actions on it. You can also use this action to connect to and manage a Rooms call programmatically, such as performing PSTN dial-outs for a Room using your service.
+ ### Mid-call actions These actions can be performed on the calls that are answered or placed using Call Automation SDKs. Each mid-call action has a corresponding success or failure web hook callback event.
The Call Automation events are sent to the web hook callback URI specified when
| Event | Description | | -- | |
-| CallConnected | Your applicationΓÇÖs call leg is connected (inbound or outbound) |
-| CallDisconnected | Your applicationΓÇÖs call leg is disconnected |
-| CallTransferAccepted | Your applicationΓÇÖs call leg has been transferred to another endpoint |
-| CallTransferFailed | The transfer of your applicationΓÇÖs call leg failed |
-| AddParticipantSucceeded| Your application added a participant |
-| AddParticipantFailed | Your application was unable to add a participant |
-| CancelAddParticipantSucceeded| Your application canceled adding a participant |
-| CancelAddParticipantFailed | Your application was unable to cancel adding a participant |
-| RemoveParticipantSucceeded| Your application has successfully removed a participant from the call. |
-| RemoveParticipantFailed | Your application was unable to remove a participant from the call. |
-| ParticipantsUpdated | The status of a participant changed while your applicationΓÇÖs call leg was connected to a call |
+| CallConnected | The call has successfully started (when using Answer or Create action) or your application has successfully connected to an ongoing call (when using Connect action)|
+| CallDisconnected | Your application has been disconnected from the call |
+| ConnectFailed | Your application failed to connect to a call (for connect call action only)|
+| CallTransferAccepted | Transfer action has successfully completed and the transferee is connected to the target participant |
+| CallTransferFailed | The transfer action has failed |
+| AddParticipantSucceeded| Your application has successfully added a participant to the call |
+| AddParticipantFailed | Your application was unable to add a participant to the call (due to an error, or because the participant didn't accept the invite) |
+| CancelAddParticipantSucceeded| Your application canceled an AddParticipant request successfully (i.e. the participant was not added to the call) |
+| CancelAddParticipantFailed | Your application was unable to cancel an AddParticipant request (this could be because the request has already been processed) |
+| RemoveParticipantSucceeded| Your application has successfully removed a participant from the call |
+| RemoveParticipantFailed | Your application was unable to remove a participant from the call |
+| ParticipantsUpdated | The status of a participant changed while your application was connected to a call |
| PlayCompleted | Your application successfully played the audio file provided | | PlayFailed | Your application failed to play audio | | PlayCanceled | The requested play action has been canceled |
communication-services Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/metrics.md
Previously updated : 06/30/2023 Last updated : 07/18/2024 + # Metrics overview Azure Communication Services currently provides metrics for all Azure communication services' primitives. [Azure Metrics Explorer](../../azure-monitor/essentials/metrics-getting-started.md) can be used to plot your own charts, investigate abnormalities in your metric values, and understand your API traffic by using the metrics data that email requests emit.
More information on supported aggregation types and time series aggregations can
- **Status Code** - The status code response sent after the request. - **StatusSubClass** - The status code series sent after the response.
-### Chat API request metric operations
-
-The following operations are available on Chat API request metrics:
-
-| Operation / Route | Description |
-| -- | - |
-| GetChatMessage | Gets a message by message ID. |
-| ListChatMessages | Gets a list of chat messages from a thread. |
-| SendChatMessage | Sends a chat message to a thread. |
-| UpdateChatMessage | Updates a chat message. |
-| DeleteChatMessage | Deletes a chat message. |
-| GetChatThread | Gets a chat thread. |
-| ListChatThreads | Gets the list of chat threads of a user. |
-| UpdateChatThread | Updates a chat thread's properties. |
-| CreateChatThread | Creates a chat thread. |
-| DeleteChatThread | Deletes a thread. |
-| GetReadReceipts | Gets read receipts for a thread. |
-| SendReadReceipt | Sends a read receipt event to a thread, on behalf of a user. |
-| SendTypingIndicator | Posts a typing event to a thread, on behalf of a user. |
-| ListChatThreadParticipants | Gets the members of a thread. |
-| AddChatThreadParticipants | Adds thread members to a thread. If members already exist, no change occurs. |
-| RemoveChatThreadParticipant | Remove a member from a thread. |
-
+### Advanced Messaging API requests
-If a request is made to an operation that isn't recognized, you receive a "Bad Route" value response.
+The following operations are available on Advanced Messaging API request metrics:
-### SMS API requests
+| Operation / Route | Description | Scenario |
+|||-|
+| DownloadMedia | Download media payload request. | Business requested to download media payload. |
+| ListTemplates | List templates request. | Business requested to list templates for a given channel. |
+| ReceiveMessage | Message received. | User sent a message to the business. |
+| SendMessage | Send message notification request. | Business requesting to send a message to the user. |
+| SendMessageDeliveryStatus | Delivery status received. | Business received response about a message that the business requested to send to a user. |
-The following operations are available on SMS API request metrics:
-
-| Operation / Route | Description |
-| -- | - |
-| SMSMessageSent | Sends an SMS message. |
-| SMSDeliveryReportsReceived | Gets SMS Delivery Reports |
-| SMSMessagesReceived | Gets SMS messages. |
- ### Authentication API requests
The following operations are available on Call Automation API request metrics:
| Delete Call | Delete a call. | | Cancel All Media Operations | Cancel all ongoing or queued media operations in a call. |
+### Chat API request metric operations
+
+The following operations are available on Chat API request metrics:
+
+| Operation / Route | Description |
+| -- | - |
+| GetChatMessage | Gets a message by message ID. |
+| ListChatMessages | Gets a list of chat messages from a thread. |
+| SendChatMessage | Sends a chat message to a thread. |
+| UpdateChatMessage | Updates a chat message. |
+| DeleteChatMessage | Deletes a chat message. |
+| GetChatThread | Gets a chat thread. |
+| ListChatThreads | Gets the list of chat threads of a user. |
+| UpdateChatThread | Updates a chat thread's properties. |
+| CreateChatThread | Creates a chat thread. |
+| DeleteChatThread | Deletes a thread. |
+| GetReadReceipts | Gets read receipts for a thread. |
+| SendReadReceipt | Sends a read receipt event to a thread, on behalf of a user. |
+| SendTypingIndicator | Posts a typing event to a thread, on behalf of a user. |
+| ListChatThreadParticipants | Gets the members of a thread. |
+| AddChatThreadParticipants | Adds thread members to a thread. If members already exist, no change occurs. |
+| RemoveChatThreadParticipant | Remove a member from a thread. |
++
+If a request is made to an operation that isn't recognized, you receive a "Bad Route" value response.
+ ### Job Router API requests The following operations are available on Job Router API request metrics:
The following operations are available on Rooms API request metrics:
:::image type="content" source="./media/rooms/rooms-metrics.png" alt-text="Screenshot of Rooms Request Metric." lightbox="./media/rooms/rooms-metrics.png":::
+### SMS API requests
+
+The following operations are available on SMS API request metrics:
+
+| Operation / Route | Description |
+| -- | - |
+| SMSMessageSent | Sends an SMS message. |
+| SMSDeliveryReportsReceived | Gets SMS Delivery Reports |
+| SMSMessagesReceived | Gets SMS messages. |
++ ## Next steps - Learn more about [Data Platform Metrics](../../azure-monitor/essentials/data-platform-metrics.md)
communication-services Actions For Call Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/actions-for-call-control.md
Call Automation uses a REST API interface to receive requests for actions and pr
Call Automation supports various other actions to manage call media and recording that have separate guides.
-> [!NOTE]
-> Call Automation currently doesn't support [Rooms](../../concepts/rooms/room-concept.md) calls.
- As a prerequisite, we recommend you to read these articles to make the most of this guide: 1. Call Automation [concepts guide](../../concepts/call-automation/call-automation.md#call-actions) that describes the action-event programming model and event callbacks.
The response provides you with CallConnection object that you can use to take fu
2. `ParticipantsUpdated` event that contains the latest list of participants in the call. ![Sequence diagram for placing an outbound call.](media/make-call-flow.png)
+## Connect to a call
+The Connect action enables your service to establish a connection with an ongoing call and take actions on it. This is useful for managing a Rooms call, or when client applications started a 1:1 or group call that Call Automation isn't part of. The connection is established using the CallLocator property, which can be one of these types: ServerCallLocator, GroupCallLocator, or RoomCallLocator. These IDs are available when the call is originally established or a Room is created, and are also published as part of the [CallStarted](./../../../event-grid/communication-services-voice-video-events.md#microsoftcommunicationcallstarted) event.
+
+To connect to any 1:1 or group call, use the ServerCallLocator. If you started a call using GroupCallId, you can also use the GroupCallLocator.
+### [csharp](#tab/csharp)
+
+```csharp
+Uri callbackUri = new Uri("https://<myendpoint>/Events"); //the callback endpoint where you want to receive subsequent events
+CallLocator serverCallLocator = new ServerCallLocator("<ServerCallId>");
+ConnectCallResult response = await client.ConnectAsync(serverCallLocator, callbackUri);
+```
+
+### [Java](#tab/java)
+
+```java
+String callbackUri = "https://<myendpoint>/Events"; //the callback endpoint where you want to receive subsequent events
+CallLocator serverCallLocator = new ServerCallLocator("<ServerCallId>");
+ConnectCallResult response = client.connectCall(serverCallLocator, callbackUri).block();
+```
+
+### [JavaScript](#tab/javascript)
+
+```javascript
+const callbackUri = "https://<myendpoint>/Events"; // the callback endpoint where you want to receive subsequent events
+const serverCallLocator = { kind: "serverCallLocator", id: "<serverCallId>" };
+const response = await client.connectCall(serverCallLocator, callbackUri);
+```
+
+### [Python](#tab/python)
+
+```python
+callback_uri = "https://<myendpoint>/Events" # the callback endpoint where you want to receive subsequent events
+server_call_locator = ServerCallLocator("<server_call_id>")
+call_connection_properties = client.connect_call(call_locator=server_call_locator, callback_url=callback_uri)
+```
+
+---
+
+To connect to a Rooms call, use RoomCallLocator which takes RoomId.
+### [csharp](#tab/csharp)
+
+```csharp
+Uri callbackUri = new Uri("https://<myendpoint>/Events"); //the callback endpoint where you want to receive subsequent events
+CallLocator roomCallLocator = new RoomCallLocator("<RoomId>");
+ConnectCallResult response = await client.ConnectAsync(roomCallLocator, callbackUri);
+```
+
+### [Java](#tab/java)
+
+```java
+String callbackUri = "https://<myendpoint>/Events"; //the callback endpoint where you want to receive subsequent events
+CallLocator roomCallLocator = new RoomCallLocator("<RoomId>");
+ConnectCallResult response = client.connectCall(roomCallLocator, callbackUri).block();
+```
+
+### [JavaScript](#tab/javascript)
+
+```javascript
+const roomCallLocator = { kind: "roomCallLocator", id: "<RoomId>" };
+const callbackUri = "https://<myendpoint>/Events"; // the callback endpoint where you want to receive subsequent events
+const response = await client.connectCall(roomCallLocator, callbackUri);
+```
+
+### [Python](#tab/python)
+
+```python
+callback_uri = "https://<myendpoint>/Events" # the callback endpoint where you want to receive subsequent events
+room_call_locator = RoomCallLocator("<room_id>")
+call_connection_properties = client.connect_call(call_locator=room_call_locator, callback_url=callback_uri)
+```
+
+---
+
+A successful response provides you with a CallConnection object that you can use to take further actions on this call. Two events are published to the callback endpoint you provided earlier:
+1. `CallConnected` event notifying that you successfully connected to the call.
+2. `ParticipantsUpdated` event that contains the latest list of participants in the call.
+
+At any point after a successful connection, if your service is disconnected from this call, you're notified via a `CallDisconnected` event. Failure to connect to the call in the first place results in a `ConnectFailed` event.
+
+![Sequence diagram for connecting to call.](media/connect-call-flow.png)
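
As an illustrative sketch only (not part of this article), a minimal webhook receiver for the callback events described above might dispatch on the event type string. Flask and the `/Events` route are arbitrary choices here, and the envelope fields used (`type`, `data`, `callConnectionId`) reflect the typical CloudEvent payload shape rather than a guaranteed contract.

```python
# Minimal Flask endpoint that receives Call Automation callback events posted to
# the callback URI and dispatches on the event type. Illustrative only.
from flask import Flask, request

app = Flask(__name__)

@app.route("/Events", methods=["POST"])
def handle_call_automation_events():
    # Events arrive as a JSON array of CloudEvent-style objects.
    for event in request.get_json():
        event_type = event.get("type", "")        # for example "Microsoft.Communication.CallConnected"
        data = event.get("data", {})
        call_connection_id = data.get("callConnectionId")

        if event_type.endswith("CallConnected"):
            print(f"Connected to call {call_connection_id}")
        elif event_type.endswith("ParticipantsUpdated"):
            print(f"Participants updated: {data.get('participants')}")
        elif event_type.endswith("ConnectFailed"):
            print(f"Failed to connect to call {call_connection_id}")
        elif event_type.endswith("CallDisconnected"):
            print(f"Disconnected from call {call_connection_id}")

    return ("", 200)
```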
+ ## Answer an incoming call Once you've subscribed to receive [incoming call notifications](../../concepts/call-automation/incoming-call-notification.md) to your resource, you will answer an incoming call. When answering a call, it's necessary to provide a callback url. Communication Services post all subsequent events about this call to that url.
connectors Connectors Native Reqres https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-reqres.md
This how-to guide shows how to create a logic app workflow that can receive and handle
> [!NOTE] >
-> The Response action works only when you use the Request trigger.
+> The Response action works only when you use the **Request** trigger.
-For example, this list describes some tasks that your workflow can perform when you use the Request trigger and Response action:
+For example, this list describes some tasks that your workflow can perform when you use the **Request** trigger and Response action:
* Receive and respond to an HTTPS request for data in an on-premises database.
To run your workflow by sending an outgoing or outbound request instead, use the
* An Azure account and subscription. If you don't have a subscription, you can [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* The logic app workflow where you want to receive the inbound HTTPS request. To start your workflow with a Request trigger, you have to start with a blank workflow. To use the Response action, your workflow must start with the Request trigger.
+* The logic app workflow where you want to receive the inbound HTTPS request. To start your workflow with a **Request** trigger, you have to start with a blank workflow. To use the Response action, your workflow must start with the **Request** trigger.
+ <a name="add-request-trigger"></a> ## Add a Request trigger
-The Request trigger creates a manually callable endpoint that handles *only* inbound requests over HTTPS. When the caller sends a request to this endpoint, the Request trigger fires and runs the workflow. For information about how to call this trigger, review [Call, trigger, or nest workflows with HTTPS endpoints in Azure Logic Apps](../logic-apps/logic-apps-http-endpoint.md).
+The **Request** trigger creates a manually callable endpoint that handles *only* inbound requests over HTTPS. When the caller sends a request to this endpoint, the **Request** trigger fires and runs the workflow. For information about how to call this trigger, review [Call, trigger, or nest workflows with HTTPS endpoints in Azure Logic Apps](../logic-apps/logic-apps-http-endpoint.md).
## [Consumption](#tab/consumption)
The Request trigger creates a manually callable endpoint that handles *only* inb
| Property name | JSON property name | Required | Description | ||--|-|-| | **HTTP POST URL** | {none} | Yes | The endpoint URL that's generated after you save your workflow and is used for sending a request that triggers your workflow. |
- | **Request Body JSON Schema** | `schema` | No | The JSON schema that describes the properties and values in the incoming request body. The designer uses this schema to generate tokens for the properties in the request. That way, your workflow can parse, consume, and pass along outputs from the Request trigger into your workflow. <br><br>If you don't have a JSON schema, you can generate the schema from a sample payload by using the **Use sample payload to generate schema** capability. |
+ | **Request Body JSON Schema** | `schema` | No | The JSON schema that describes the properties and values in the incoming request body. The designer uses this schema to generate tokens for the properties in the request. That way, your workflow can parse, consume, and pass along outputs from the **Request** trigger into your workflow. <br><br>If you don't have a JSON schema, you can generate the schema from a sample payload by using the **Use sample payload to generate schema** capability. |
The following example shows a sample JSON schema:
The Request trigger creates a manually callable endpoint that handles *only* inb
To generate a JSON schema that's based on the expected payload (data), you can use a tool such as [JSONSchema.net](https://jsonschema.net), or you can follow these steps:
- 1. In the Request trigger, select **Use sample payload to generate schema**.
+ 1. In the **Request** trigger, select **Use sample payload to generate schema**.
![Screenshot showing Consumption workflow, Request trigger, and "Use sample payload to generate schema" selected.](./media/connectors-native-reqres/generate-from-sample-payload-consumption.png)
The Request trigger creates a manually callable endpoint that handles *only* inb
} ```
- 1. In the Request trigger's title bar, select the ellipses button (**...**).
+ 1. In the **Request** trigger's title bar, select the ellipses button (**...**).
1. In the trigger's settings, turn on **Schema Validation**, and select **Done**.
The Request trigger creates a manually callable endpoint that handles *only* inb
> [!NOTE] > > If you want to include the hash or pound symbol (**#**) in the URI
- > when making a call to the Request trigger, use this encoded version instead: `%25%23`
+ > when making a call to the **Request** trigger, use this encoded version instead: `%25%23`
## [Standard](#tab/standard)
The Request trigger creates a manually callable endpoint that handles *only* inb
| Property name | JSON property name | Required | Description | ||--|-|-| | **HTTP POST URL** | {none} | Yes | The endpoint URL that's generated after you save your workflow and is used for sending a request that triggers your workflow. |
- | **Request Body JSON Schema** | `schema` | No | The JSON schema that describes the properties and values in the incoming request body. The designer uses this schema to generate tokens for the properties in the request. That way, your workflow can parse, consume, and pass along outputs from the Request trigger into your workflow. <br><br>If you don't have a JSON schema, you can generate the schema from a sample payload by using the **Use sample payload to generate schema** capability. |
+ | **Request Body JSON Schema** | `schema` | No | The JSON schema that describes the properties and values in the incoming request body. The designer uses this schema to generate tokens for the properties in the request. That way, your workflow can parse, consume, and pass along outputs from the **Request** trigger into your workflow. <br><br>If you don't have a JSON schema, you can generate the schema from a sample payload by using the **Use sample payload to generate schema** capability. |
The following example shows a sample JSON schema:
The Request trigger creates a manually callable endpoint that handles *only* inb
To generate a JSON schema that's based on the expected payload (data), you can use a tool such as [JSONSchema.net](https://jsonschema.net), or you can follow these steps:
- 1. In the Request trigger, select **Use sample payload to generate schema**.
+ 1. In the **Request** trigger, select **Use sample payload to generate schema**.
![Screenshot showing Standard workflow, Request trigger, and "Use sample payload to generate schema" selected.](./media/connectors-native-reqres/generate-from-sample-payload-standard.png)
The Request trigger creates a manually callable endpoint that handles *only* inb
} ```
- 1. On the designer, select the Request trigger. On the information pane that opens, select the **Settings** tab.
+ 1. On the designer, select the **Request** trigger. On the information pane that opens, select the **Settings** tab.
1. Expand **Data Handling**, and set **Schema Validation** to **On**.
The Request trigger creates a manually callable endpoint that handles *only* inb
> [!NOTE] > > If you want to include the hash or pound symbol (**#**) in the URI
- > when making a call to the Request trigger, use this encoded version instead: `%25%23`
+ > when making a call to the **Request** trigger, use this encoded version instead: `%25%23`
>
- > The URL for the Request trigger is associated with your workflow's storage account. This URL
+ > The URL for the **Request** trigger is associated with your workflow's storage account. This URL
> changes if the storage account changes. For example, with Standard logic apps, if you manually > change your storage account and copy your workflow to the new storage account, the URL for
- > the Request trigger also changes to reflect the new storage account. The same workflow has a different URL.
+ > the **Request** trigger also changes to reflect the new storage account. The same workflow has a different URL.
For information about security, authorization, and encryption for inbound calls
## Trigger outputs
-The following table lists the outputs from the Request trigger:
+The following table lists the outputs from the **Request** trigger:
| JSON property name | Data type | Description | |--|--|-|
The following table lists the outputs from the Request trigger:
## Add a Response action
-When you use the Request trigger to receive inbound requests, you can model the response and send the payload results back to the caller by using the Response built-in action, which works *only* with the Request trigger. This combination with the Request trigger and Response action creates the [request-response pattern](https://en.wikipedia.org/wiki/Request%E2%80%93response). Except for inside Foreach loops and Until loops, and parallel branches, you can add the Response action anywhere in your workflow.
+When you use the **Request** trigger to receive inbound requests, you can model the response and send the payload results back to the caller by using the Response built-in action, which works *only* with the **Request** trigger. This combination of the **Request** trigger and Response action creates the [request-response pattern](https://en.wikipedia.org/wiki/Request%E2%80%93response). Except for inside Foreach loops and Until loops, and parallel branches, you can add the Response action anywhere in your workflow.
> [!IMPORTANT] >
When you use the Request trigger to receive inbound requests, you can model the
1. On the workflow designer, [follow these general steps to find and add the Response built-in action named **Response**](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
- For simplicity, the following examples show a collapsed Request trigger.
+ For simplicity, the following examples show a collapsed **Request** trigger.
1. In the action information box, add the required values for the response message.
When you use the Request trigger to receive inbound requests, you can model the
## Test your workflow
-To test your workflow, send an HTTP request to the generated URL. For example, you can use local tools or apps such as [Insomnia](https://insomnia.rest/) or [Bruno](https://www.usebruno.com/) to send the HTTP request.
+To trigger your workflow, use your HTTP request tool to send an HTTP request to the URL generated for the **Request** trigger, using the method that the trigger expects.
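
For example, the following Python sketch (not part of this article) sends a POST request with the `requests` library; the URL and sample body are placeholders to replace with your own trigger URL and a payload that matches your schema.

```python
# Illustrative sketch: trigger the workflow by posting a JSON body to the
# HTTP POST URL generated for the Request trigger.
import requests

callback_url = "<HTTP POST URL copied from the Request trigger>"  # placeholder

# Arbitrary sample body -- shape it to match the Request Body JSON Schema you defined.
payload = {"address": {"streetNumber": "00000", "streetName": "Contoso Road"}}

response = requests.post(callback_url, json=payload, timeout=30)

print(response.status_code)   # 202 Accepted by default, or the status set by your Response action
print(response.text)          # body returned by the Response action, if your workflow includes one
```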
-For more information about the trigger's underlying JSON definition and how to call this trigger, see these topics, [Request trigger type](../logic-apps/logic-apps-workflow-actions-triggers.md#request-trigger) and [Call, trigger, or nest workflows with HTTP endpoints in Azure Logic Apps](../logic-apps/logic-apps-http-endpoint.md).
+For more information about the trigger's underlying JSON definition and how to call this trigger, see [**Request** trigger type](../logic-apps/logic-apps-workflow-actions-triggers.md#request-trigger) and [Call, trigger, or nest workflows with HTTP endpoints in Azure Logic Apps](../logic-apps/logic-apps-http-endpoint.md).
## Security and authentication
-In a Standard logic app workflow that starts with the Request trigger (but not a webhook trigger), you can use the Azure Functions provision for authenticating inbound calls sent to the endpoint created by that trigger by using a managed identity. This provision is also known as "**Easy Auth**". For more information, review [Trigger workflows in Standard logic apps with Easy Auth](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/trigger-workflows-in-standard-logic-apps-with-easy-auth/ba-p/3207378).
+In a Standard logic app workflow that starts with the **Request** trigger (but not a webhook trigger), you can use the Azure Functions provision for authenticating inbound calls sent to the endpoint created by that trigger by using a managed identity. This provision is also known as "**Easy Auth**". For more information, review [Trigger workflows in Standard logic apps with Easy Auth](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/trigger-workflows-in-standard-logic-apps-with-easy-auth/ba-p/3207378).
For more information about security, authorization, and encryption for inbound calls to your logic app workflow, such as [Transport Layer Security (TLS)](https://en.wikipedia.org/wiki/Transport_Layer_Security), previously known as Secure Sockets Layer (SSL), [Microsoft Entra ID Open Authentication (Microsoft Entra ID OAuth)](../active-directory/develop/index.yml), exposing your logic app with Azure API Management, or restricting the IP addresses that originate inbound calls, see [Secure access and data - Access for inbound calls to request-based triggers](../logic-apps/logic-apps-securing-a-logic-app.md#secure-inbound-requests).
container-apps Azure Arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-arc-overview.md
Previously updated : 04/22/2024 Last updated : 07/18/2024
Running in an Azure Arc-enabled Kubernetes cluster allows:
Learn to set up your Kubernetes cluster for Container Apps, via [Set up an Azure Arc-enabled Kubernetes cluster to run Azure Container Apps](azure-arc-enable-cluster.md)
-As you configure your cluster, you'll carry out these actions:
+As you configure your cluster, you carry out these actions:
- **The connected cluster**, which is an Azure projection of your Kubernetes infrastructure. For more information, see [What is Azure Arc-enabled Kubernetes?](../azure-arc/kubernetes/overview.md).
The following public preview limitations apply to Azure Container Apps on Azure
| Cluster networking requirement | Must support [LoadBalancer](https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer) service type | | Feature: Managed identities | [Not available](#are-managed-identities-supported) | | Feature: Pull images from ACR with managed identity | Not available (depends on managed identities) |
-| Logs | Log Analytics must be configured with cluster extension; not per-site |
+| Logs | Log Analytics must be configured with cluster extension; not per-application |
## Resources created by the Container Apps extension
The following table describes the role of each revision created for you:
| `<extensionName>-k8se-activator` | Used as part of the scaling pipeline | 2 | 100 millicpu | 500 MB | ReplicaSet | | `<extensionName>-k8se-billing` | Billing record generation - Azure Container Apps on Azure Arc enabled Kubernetes is Free of Charge during preview | 3 | 100 millicpu | 100 MB | ReplicaSet | | `<extensionName>-k8se-containerapp-controller` | The core operator pod that creates resources on the cluster and maintains the state of components. | 2 | 100 millicpu | 1 GB | ReplicaSet |
-| `<extensionName>-k8se-envoy` | A front-end proxy layer for all data-plane http requests. It routes the inbound traffic to the correct apps. | 3 | 1 Core | 1536 MB | ReplicaSet |
+| `<extensionName>-k8se-envoy` | A front-end proxy layer for all data-plane http requests. It routes the inbound traffic to the correct apps. | 3 | 1 Core | 1,536 MB | ReplicaSet |
| `<extensionName>-k8se-envoy-controller` | Operator, which generates Envoy configuration | 2 | 200 millicpu | 500 MB | ReplicaSet | | `<extensionName>-k8se-event-processor` | An alternative routing destination to help with apps that have scaled to zero while the system gets the first instance available. | 2 | 100 millicpu | 500 MB | ReplicaSet | | `<extensionName>-k8se-http-scaler` | Monitors inbound request volume in order to provide scaling information to [KEDA](https://keda.sh). | 1 | 100 millicpu | 500 MB | ReplicaSet | | `<extensionName>-k8se-keda-cosmosdb-scaler` | KEDA Cosmos DB Scaler | 1 | 10 m | 128 MB | ReplicaSet |
-| `<extensionName>-k8se-keda-metrics-apiserver` | KEDA Metrics Server | 1 | 1 Core | 1000 MB | ReplicaSet |
+| `<extensionName>-k8se-keda-metrics-apiserver` | KEDA Metrics Server | 1 | 1 Core | 1,000 MB | ReplicaSet |
| `<extensionName>-k8se-keda-operator` | Scales workloads in and out from 0/1 to N instances | 1 | 100 millicpu | 500 MB | ReplicaSet | | `<extensionName>-k8se-log-processor` | Gathers logs from apps and other components and sends them to Log Analytics. | 2 | 200 millicpu | 500 MB | DaemonSet | | `<extensionName>-k8se-mdm` | Metrics and Logs Agent | 2 | 500 millicpu | 500 MB | ReplicaSet |
The following table describes the role of each revision created for you:
- [Are there any scaling limits?](#are-there-any-scaling-limits) - [What logs are collected?](#what-logs-are-collected) - [What do I do if I see a provider registration error?](#what-do-i-do-if-i-see-a-provider-registration-error)-- [Can I deploy the Container Apps extension on an ARM64 based cluster?](#can-i-deploy-the-container-apps-extension-on-an-arm64-based-cluster)
+- [Can I deploy the Container Apps extension on an Arm64 based cluster?](#can-i-deploy-the-container-apps-extension-on-an-arm64-based-cluster)
### How much does it cost?
Logs for both system components and your applications are written to standard ou
Both log types can be collected for analysis using standard Kubernetes tools. You can also configure the application environment cluster extension with a [Log Analytics workspace](../azure-monitor/logs/log-analytics-overview.md), and it sends all logs to that workspace.
-By default, logs from system components are sent to the Azure team. Application logs aren't sent. You can prevent these logs from being transferred by setting `logProcessor.enabled=false` as an extension configuration setting. This configuration setting will also disable forwarding of application to your Log Analytics workspace. Disabling the log processor might affect the time needed for any support cases, and you'll be asked to collect logs from standard output through some other means.
+By default, logs from system components are sent to the Azure team. Application logs aren't sent. You can prevent these logs from being transferred by setting `logProcessor.enabled=false` as an extension configuration setting. This configuration setting also disables forwarding of application logs to your Log Analytics workspace. Disabling the log processor might affect the time needed for any support cases, and you'll be asked to collect logs from standard output through some other means.
### What do I do if I see a provider registration error? As you create an Azure Container Apps connected environment resource, some subscriptions might see the "No registered resource provider found" error. The error details might include a set of locations and API versions that are considered valid. If this error message is returned, the subscription must be re-registered with the `Microsoft.App` provider. Re-registering the provider has no effect on existing applications or APIs. To re-register, use the Azure CLI to run `az provider register --namespace Microsoft.App --wait`. Then reattempt the connected environment command.
-### Can I deploy the Container Apps extension on an ARM64 based cluster?
+### Can I deploy the Container Apps extension on an Arm64 based cluster?
-ARM64 based clusters aren't supported at this time.
+Arm64 based clusters aren't supported at this time.
## Extension Release Notes
ARM64 based clusters aren't supported at this time.
### Container Apps extension v1.30.6 (January 2024) - Update KEDA to v2.12, Envoy SC image to v1.0.4, and Dapr image to v1.11.6
+ - Added default response timeout for Envoy routes to 1,800 seconds
- Changed Fluent bit default log level to warn - Delay deletion of job pods to ensure log emission - Fixed issue for job pod deletion for failed job executions
+ - Ensure jobs in suspended state have failed pods deleted
- Update to not resolve HTTPOptions for TCP applications - Allow applications to listen on HTTP or HTTPS - Add ability to suspend jobs - Fixed issue where KEDA scaler was failing to create job after stopped job execution
+ - Add startingDeadlineSeconds to Container App Job if there's a cluster reboot
- Removed heavy logging in Envoy access log server - Updated Monitoring Configuration version for Azure Container Apps on Azure Arc enabled Kubernetes
ARM64 based clusters aren't supported at this time.
- Export additional Envoy metrics - Truncate Envoy log to first 1,024 characters when log content failed to parse - Handle SIGTERM gracefully in local proxy
+ - Allow ability to use different namespaces with KEDA
- Validation added for scale rule name - Enabled revision GC by default - Enabled emission of metrics for sidecars - Added volumeMounts to job executions
+ - Added validation to webhook endpoints for jobs
+
+ ### Container Apps extension v1.37.1 (July 2024)
+
+ - Update EasyAuth to support MISE
## Next steps
cosmos-db Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/compatibility.md
Azure Cosmos DB for MongoDB vCore supports the following database commands:
<tr><td><code>netstat</code></td><td><img src="media/compatibility/no-icon.svg" alt="No"></td><td><img src="media/compatibility/no-icon.svg" alt="No"></td><td><img src="media/compatibility/no-icon.svg" alt="No"></td></tr> <tr><td><code>ping</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr> <tr><td><code>profile</code></td><td colspan="3">As a PaaS service, this will be managed by Azure.</td></tr>
-<tr><td><code>serverStatus</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr>
+<tr><td><code>serverStatus</code></td><td><img src="media/compatibility/no-icon.svg" alt="No"></td><td><img src="media/compatibility/no-icon.svg" alt="No"></td><td><img src="media/compatibility/no-icon.svg" alt="No"></td></tr>
<tr><td><code>shardConnPoolStats</code></td><td colspan="3">Deprecated in MongoDB 5.0</td></tr> <tr><td><code>top</code></td><td><img src="media/compatibility/no-icon.svg" alt="No"></td><td><img src="media/compatibility/no-icon.svg" alt="No"></td><td><img src="media/compatibility/no-icon.svg" alt="No"></td></tr> <tr><td><code>validate</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr>
cost-management-billing Migrate Consumption Usage Details Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/migrate-consumption-usage-details-api.md
description: This article has information to help you migrate from the Consumption Usage Details API. Previously updated : 11/17/2023 Last updated : 07/18/2024
# Migrate from Consumption Usage Details API
-This article discusses migration away from the [Consumption Usage Details API](/rest/api/consumption/usage-details/list). The Consumption Usage Details API is deprecated. The date that the API will be turned off is still being determined. We recommend that you migrate away from the API as soon as possible.
+This article discusses migration away from the [Consumption Usage Details API](/rest/api/consumption/usage-details/list), which is planned for deprecation. The exact date of deprecation is still being determined. We recommend that you don't build reporting pipelines on this API, and that you migrate away from it as soon as possible.
+
+Work is underway to retire Enterprise Agreement (EA) reporting APIs. We recommend that EA customers migrate to the Cost Management [Cost Details](/rest/api/cost-management/generate-cost-details-report) API. The older EA reporting APIs are only available to customers with an Enterprise Agreement.
+
+If you use the [Consumption Usage Details API](/rest/api/consumption/usage-details/list), we *recommend*, but don't require, that you migrate to the Cost Management [Cost Details](/rest/api/cost-management/generate-cost-details-report) API.
+
+Consumption and Cost Management APIs are available for both EA and Microsoft Customer Agreement (MCA) customers. So, Azure Government customers that remain under an EA aren't negatively affected.
+
+When you migrate from EA to MCA, we recommend that you move from the EA reporting Usage Details API to Cost Management Cost Details API and use Exports.
## Migration destinations Read the [Choose a cost details solution](usage-details-best-practices.md) article before you choose which solution is right for your workload. Generally, we recommend [Exports](../costs/tutorial-export-acm-data.md) if you have ongoing data ingestion needs and or a large monthly usage details dataset. For more information, see [Ingest usage details data](automation-ingest-usage-details-overview.md).
-If you have a smaller usage details dataset or a scenario that isn't met by Exports, consider using the [Cost Details](/rest/api/cost-management/generate-cost-details-report) report instead. For more information, see [Get small cost datasets on demand](get-small-usage-datasets-on-demand.md).
+If you have a smaller usage details dataset or a scenario that Exports doesn't cover, consider using the [Cost Details](/rest/api/cost-management/generate-cost-details-report) report instead. For more information, see [Get small cost datasets on demand](get-small-usage-datasets-on-demand.md).
> [!NOTE] > The [Cost Details](/rest/api/cost-management/generate-cost-details-report) report is only available for customers with an Enterprise Agreement or Microsoft Customer Agreement. If you have an MSDN, pay-as-you-go, or Visual Studio subscription, you can migrate to Exports or continue using the Consumption Usage Details API.
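
As a rough, hedged sketch (not part of this article), the following Python example shows the general call pattern for the asynchronous Cost Details report: submit a `generateCostDetailsReport` request, then poll the `Location` header until the report is ready. The `api-version`, request body shape, scope, and token handling are assumptions; confirm them in the Cost Details API reference linked above.

```python
# Illustrative sketch: request a cost details report and poll until it's ready,
# using the requests library against the generateCostDetailsReport operation.
# The api-version, body shape, scope, and token handling are assumptions.
import time

import requests

scope = "subscriptions/<subscription-id>"            # placeholder scope
token = "<bearer-token-from-microsoft-entra-id>"     # placeholder access token
headers = {"Authorization": f"Bearer {token}"}

url = (
    f"https://management.azure.com/{scope}/providers/Microsoft.CostManagement"
    "/generateCostDetailsReport?api-version=2022-05-01"   # assumed api-version
)
body = {
    "metric": "ActualCost",
    "timePeriod": {"start": "2024-06-01", "end": "2024-06-30"},
}

resp = requests.post(url, json=body, headers=headers, timeout=30)
resp.raise_for_status()

# The operation is asynchronous: poll the Location header until the report,
# a manifest of downloadable CSV blob links, is ready.
poll_url = resp.headers.get("Location")
while poll_url:
    poll = requests.get(poll_url, headers=headers, timeout=30)
    if poll.status_code == 200:
        print(poll.json())    # manifest with blob download links for the CSV files
        break
    time.sleep(30)
```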
If you have a smaller usage details dataset or a scenario that isn't met by Expo
New solutions provide many benefits over the Consumption Usage Details API. Here's a summary: - **Single dataset for all usage details** - Azure and Azure Marketplace usage details were merged into one dataset. It reduces the number of APIs that you need to call to get see all your charges.-- **Scalability** - The Marketplaces API is deprecated because it promotes a call pattern that isn't able to scale as your Azure usage increases. The usage details dataset can get extremely large as you deploy more resources into the cloud. The Marketplaces API is a paginated synchronous API so it isn't optimized to effectively transfer large volumes of data over a network with high efficiency and reliability. Exports and the [Cost Details](/rest/api/cost-management/generate-cost-details-report) API are asynchronous. They provide you with a CSV file that can be directly downloaded over the network.
+- **Scalability** - The Marketplaces API is deprecated because it promotes a call pattern that isn't able to scale as your Azure usage increases. The usage details dataset can get large as you deploy more resources into the cloud. The Marketplaces API is a paginated synchronous API so it isn't optimized to effectively transfer large volumes of data over a network with high efficiency and reliability. Exports and the [Cost Details](/rest/api/cost-management/generate-cost-details-report) API are asynchronous. They provide you with a CSV file that can be directly downloaded over the network.
- **API improvements** - Exports and the Cost Details API are the solutions that Azure supports moving forward. All new features are being integrated into them. - **Schema consistency** - The [Cost Details](/rest/api/cost-management/generate-cost-details-report) report and [Exports](../costs/tutorial-export-acm-data.md) provide files with matching fields so you can move from one solution to the other, based on your scenario.-- **Cost Allocation integration** - Enterprise Agreement and Microsoft Customer Agreement customers using Exports or the Cost Details API can view charges in relation to the cost allocation rules that they have configured. For more information about cost allocation, see [Allocate costs](../costs/allocate-costs.md).
+- **Cost Allocation integration** - Enterprise Agreement and Microsoft Customer Agreement customers using Exports or the Cost Details API can view charges in relation to the cost allocation rules that they configured. For more information about cost allocation, see [Allocate costs](../costs/allocate-costs.md).
## Field Differences
The following table summarizes the field differences between the Consumption Usa
## Enterprise Agreement field mapping
-Enterprise Agreement customers who are using the Consumption Usage Details API have usage details records of the kind `legacy`. A legacy usage details record is shown below. All Enterprise Agreement customers have records of this kind due to the underlying billing system that's used for them.
+Enterprise Agreement customers who are using the Consumption Usage Details API have usage details records of the kind `legacy`. All Enterprise Agreement customers have records of this kind due to the underlying billing system that's used for them. Here's an example legacy usage details record:
```json {
Bold property names are unchanged.
## Microsoft Customer Agreement field mapping
-Microsoft Customer Agreement customers that use the Consumption Usage Details API have usage details records of the kind `modern`. A modern usage details record is shown below. All Microsoft Customer Agreement customers have records of this kind due to the underlying billing system that is used for them.
+Microsoft Customer Agreement customers that use the Consumption Usage Details API have usage details records of the kind `modern`. All Microsoft Customer Agreement customers have records of this kind due to the underlying billing system that is used for them. Here's an example MCA usage details record:
```json {
Microsoft Customer Agreement customers that use the Consumption Usage Details AP
} ```
-An full example legacy Usage Details record is shown at [Usage Details - List - REST API (Azure Consumption)](/rest/api/consumption/usage-details/list#billingaccountusagedetailslist-modern)
+A full example modern Usage Details record is shown at [Usage Details - List - REST API (Azure Consumption)](/rest/api/consumption/usage-details/list#billingaccountusagedetailslist-modern).
A mapping between the old and new fields are shown in the following table. New properties are available in the CSV files produced by Exports and the Cost Details API. Fields that need a mapping due to differences across the solutions are shown in **bold text**.
defender-for-cloud Plan Multicloud Security Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-multicloud-security-get-started.md
Title: Planning multicloud security get started guidance before you begin cloud solution
+ Title: Start planning multicloud protection in Microsoft Defender for Cloud
description: Learn about designing a solution for securing and protecting your multicloud environment with Microsoft Defender for Cloud.
Last updated 05/30/2024
-# Get started
+# Start planning multicloud protection
-This article introduces guidance to help you design a solution for securing and protecting your multicloud environment with Microsoft Defender for Cloud. The guidance can be used by cloud solution and infrastructure architects, security architects and analysts, and anyone else involved in designing a multicloud security solution.
+This article introduces guidance to help you design a solution for securing and protecting a multicloud environment with Microsoft Defender for Cloud. The guidance can be used by cloud solution and infrastructure architects, security architects and analysts, and anyone else involved in designing a multicloud security solution.
As you capture your functional and technical requirements, the articles provide an overview of multicloud capabilities, planning guidance, and prerequisites.
defender-for-cloud Zero Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/zero-trust.md
Title: Zero trust infrastructure and integrations
-description: Independent software vendors (ISVs) can integrate their solutions with Microsoft Defender for Cloud to help customers adopt a Zero Trust model and keep their organizations secure.
Previously updated : 05/30/2024
+ Title: Zero trust and Microsoft Defender for Cloud
+description: Understand how to implement zero trust principles to secure an enterprise infrastructure that includes Microsoft Defender for Cloud
Last updated : 07/17/2024 - zerotrust-services
-# Zero Trust infrastructure and integrations
+# Zero trust and Defender for Cloud
+This article provides strategy and instructions for integrating zero trust infrastructure solutions with [Microsoft Defender for Cloud](defender-for-cloud-introduction.md). The guidance covers integrations with other solutions, including security information and event management (SIEM), security orchestration automated response (SOAR), endpoint detection and response (EDR), and IT service management (ITSM) solutions.
-Infrastructure comprises the hardware, software, micro-services, networking infrastructure, and facilities required to support IT services for an organization. Zero Trust infrastructure solutions assess, monitor, and prevent security threats to these services.
+Infrastructure comprises the hardware, software, microservices, networking infrastructure, and facilities required to support IT services for an organization. Whether on-premises or multicloud, infrastructure represents a critical threat vector.
-Zero Trust infrastructure solutions support the principles of Zero Trust by ensuring that access to infrastructure resources is verified explicitly, access is granted using principles of least privilege access, and mechanisms are in place that assumes breach and look for and remediate security threats in infrastructure.
+Zero Trust infrastructure solutions assess, monitor, and prevent security threats to your infrastructure. Solutions support the principles of zero trust by ensuring that access to infrastructure resources is verified explicitly, and granted using principles of least privilege access. Mechanisms assume breach, and look for and remediate security threats in infrastructure.
-This guidance is for software providers and technology partners who want to enhance their infrastructure security solutions by integrating with Microsoft products.
+## What is zero trust?
-## Zero Trust integration for Infrastructure guide
-This integration guide includes strategy and instructions for integrating with [Microsoft Defender for Cloud](defender-for-cloud-introduction.md) and its integrated cloud workload protection platform (CWPP), Microsoft Defender for Cloud.
-The guidance includes integrations with the most popular Security Information and Event Management (SIEM), Security Orchestration Automated Response (SOAR), Endpoint Detection and Response (EDR), and IT Service Management (ITSM) solutions.
-### Zero Trust and Defender for Cloud
+## Zero Trust and Defender for Cloud
-Our [Zero Trust infrastructure deployment guidance](/security/zero-trust/deploy/infrastructure) provides key stages of the Zero Trust strategy for infrastructure. Which are:
+[Zero Trust infrastructure deployment guidance](/security/zero-trust/deploy/infrastructure) provides key stages of zero trust infrastructure strategy:
-1. [Assess compliance with chosen standards and policies](update-regulatory-compliance-packages.yml)
-1. [Harden configuration](recommendations-reference.md) wherever gaps are found
-1. Employ other hardening tools such as [just-in-time (JIT)](just-in-time-access-usage.yml) VM access
-1. Set up [threat detection and protections](/azure/azure-sql/database/threat-detection-configure)
-1. Automatically block and flag risky behavior and take protective actions
+1. [Assess compliance](update-regulatory-compliance-packages.yml) with chosen standards and policies.
+1. [Harden configuration](recommendations-reference.md) wherever gaps are found.
+1. Employ other hardening tools such as [just-in-time (JIT)](just-in-time-access-usage.yml) VM access.
+1. Set up [threat protection](/azure/azure-sql/database/threat-detection-configure).
+1. Automatically block and flag risky behavior and take protective actions.
-There's a clear mapping from the goals we've described in the [infrastructure deployment guidance](/security/zero-trust/deploy/infrastructure) to the core aspects of Defender for Cloud.
+Here's how these stages map to Defender for Cloud.
-|Zero Trust goal | Defender for Cloud feature |
+|Goal | Defender for Cloud |
|||
-|Assess compliance | In Defender for Cloud, every subscription automatically has the [Microsoft cloud security benchmark (MCSB) security initiative assigned](security-policy-concept.md).<br>Using the [secure score tools](secure-score-security-controls.md) and the [regulatory compliance dashboard](update-regulatory-compliance-packages.yml) you can get a deep understanding of your customer's security posture. |
-| Harden configuration | [Review your security recommendations](review-security-recommendations.md) and [track your secure score improvement overtime](secure-score-access-and-track.md). You can also prioritize which recommendations to remediate based on potential attack paths, by leveraging the [attack path](how-to-manage-attack-path.md) feature. |
-|Employ hardening mechanisms | Least privilege access is one of the three principles of Zero Trust. Defender for Cloud can assist you to harden VMs and network using this principle by leveraging features such as:<br>[Just-in-time (JIT) virtual machine (VM) access](just-in-time-access-overview.md)<br>[Adaptive network hardening](adaptive-network-hardening.md)<br>[Adaptive application controls](adaptive-application-controls.md). |
-|Set up threat detection | Defender for Cloud offers an integrated cloud workload protection platform (CWPP), Microsoft Defender for Cloud.<br>Microsoft Defender for Cloud provides advanced, intelligent, protection of Azure and hybrid resources and workloads.<br>One of the Microsoft Defender plans, Microsoft Defender for servers, includes a native integration with [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/).<br>Learn more in [Introduction to Microsoft Defender for Cloud](defender-for-cloud-introduction.md). |
-|Automatically block suspicious behavior | Many of the hardening recommendations in Defender for Cloud offer a *deny* option. This feature lets you prevent the creation of resources that don't satisfy defined hardening criteria. Learn more in [Prevent misconfigurations with Enforce/Deny recommendations](./prevent-misconfigurations.md). |
-|Automatically flag suspicious behavior | Microsoft Defenders for Cloud's security alerts are triggered by advanced detections. Defender for Cloud prioritizes and lists the alerts, along with the information needed for you to quickly investigate the problem. Defender for Cloud also provides detailed steps to help you remediate attacks. For a full list of the available alerts, see [Security alerts - a reference guide](alerts-reference.md).|
+|Assess compliance | In Defender for Cloud, every subscription automatically has the [Microsoft cloud security benchmark (MCSB) security initiative assigned](security-policy-concept.md).<br>Using the [secure score tools](secure-score-security-controls.md) and the [regulatory compliance dashboard](update-regulatory-compliance-packages.yml) you can get a deep understanding of security posture. |
+| Harden configuration | Infrastructure and environment settings are assessed against compliance standards, and recommendations are issued based on those assessments. You can [review and remediate security recommendations](review-security-recommendations.md) and [track secure score improvements](secure-score-access-and-track.md) over time. You can prioritize which recommendations to remediate based on potential [attack paths](how-to-manage-attack-path.md). |
+|Employ hardening mechanisms | Least privilege access is a zero trust principle. Defender for Cloud can help you to harden VMs and network settings using this principle with features such as:<br>[Just-in-time (JIT) VM access](just-in-time-access-overview.md), [adaptive network hardening](adaptive-network-hardening.md), and [adaptive application controls](adaptive-application-controls.md). |
+|Set up threat protection | Defender for Cloud is a cloud workload protection platform (CWPP), providing advanced, intelligent protection of Azure and hybrid resources and workloads. [Learn more](defender-for-cloud-introduction.md). |
+|Automatically block risky behavior | Many of the hardening recommendations in Defender for Cloud offer a *deny* option, to prevent the creation of resources that don't satisfy defined hardening criteria. [Learn more](./prevent-misconfigurations.md). |
+|Automatically flag suspicious behavior | Defender for Cloud security alerts are triggered by threat detections. Defender for Cloud prioritizes and lists alerts, with information to help you investigate. It also provides detailed steps to help you remediate attacks. Review a [full list of security alerts](alerts-reference.md).|
++
+### Apply zero trust to hybrid and multicloud scenarios
+
+With cloud workloads commonly spanning multiple cloud platforms, cloud security services must do the same. Defender for Cloud protects workloads wherever they're running: in Azure, on-premises, in AWS, or in GCP.
+
+- **AWS**: To protect AWS machines, you onboard AWS accounts into Defender for Cloud. This integration provides a unified view of Defender for Cloud recommendations and AWS Security Hub findings. Learn more about [connecting AWS accounts to Microsoft Defender for Cloud](quickstart-onboard-aws.md).
+- **GCP**: To protect GCP machines, you onboard GCP accounts into Defender for Cloud. This integration provides a unified view of Defender for Cloud recommendations and GCP Security Command Center findings. Learn more about [connecting GCP accounts to Microsoft Defender for Cloud](quickstart-onboard-gcp.md).
+- **On-premises machines**: You can extend Defender for Cloud protection by connecting on-premises machines to [Azure Arc-enabled servers](../azure-arc/servers/overview.md). Learn more about [connecting on-premises machines to Defender for Cloud](quickstart-onboard-machines.md).
-### Protect your Azure PaaS services with Defender for Cloud
-With Defender for Cloud enabled on your subscription, and Microsoft Defender for Cloud enabled for all available resource types, you'll have a layer of intelligent threat protection - powered by [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684) - protecting resources in Azure Key Vault, Azure Storage, Azure DNS, and other Azure PaaS services. For a full list, see [What resource types can Microsoft Defender for Cloud secure?](defender-for-cloud-introduction.md).
+## Protect Azure PaaS services
-### Azure Logic Apps
+When Defender for Cloud is enabled on an Azure subscription, and Defender for Cloud plans are enabled for all available resource types, a layer of intelligent threat protection, powered by [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684), protects resources in Azure PaaS services, including Azure Key Vault, Azure Storage, Azure DNS, and others. Learn more about the [resource types that Defender for Cloud can secure](defender-for-cloud-introduction.md).
+
+## Automate responses with Azure Logic Apps
Use [Azure Logic Apps](../logic-apps/index.yml) to build automated scalable workflows, business processes, and enterprise orchestrations to integrate your apps and data across cloud services and on-premises systems.
Defender for Cloud's [workflow automation](workflow-automation.yml) feature lets
This is a great way to define and execute an automated, consistent response when threats are discovered. For example, you can notify relevant stakeholders, launch a change management process, and apply specific remediation steps when a threat is detected.
-### Integrate Defender for Cloud with your SIEM, SOAR, and ITSM solutions
-
-Microsoft Defender for Cloud can stream your security alerts into the most popular Security Information and Event Management (SIEM), Security Orchestration Automated Response (SOAR), and IT Service Management (ITSM) solutions.
+## Integrate with SIEM, SOAR, and ITSM solutions
-There are Azure-native tools for ensuring you can view your alert data in all of the most popular solutions in use today, including:
+Defender for Cloud can stream your security alerts into the most popular SIEM, SOAR, and ITSM solutions. Azure-native tools let you view your alert data in all of the most widely used solutions, including:
- Microsoft Sentinel - Splunk Enterprise and Splunk Cloud
There are Azure-native tools for ensuring you can view your alert data in all of
- Power BI - Palo Alto Networks
-#### Microsoft Sentinel
+### Integrate with Microsoft Sentinel
-Defender for Cloud natively integrates with [Microsoft Sentinel](../sentinel/overview.md), Microsoft's cloud-native, security information event management (SIEM) and security orchestration automated response (SOAR) solution.
+Defender for Cloud natively integrates with [Microsoft Sentinel](../sentinel/overview.md), Microsoft's SIEM/SOAR solution.
-There are two approaches to ensuring your Defender for Cloud data is represented in Microsoft Sentinel:
+There are two approaches to ensuring that Defender for Cloud data is represented in Microsoft Sentinel:
- **Sentinel connectors** - Microsoft Sentinel includes built-in connectors for Microsoft Defender for Cloud at the subscription and tenant levels:
There are two approaches to ensuring your Defender for Cloud data is represented
> [!TIP] > Learn more in [Connect security alerts from Microsoft Defender for Cloud](../sentinel/connect-defender-for-cloud.md). -- **Stream your audit logs** - An alternative way to investigate Defender for Cloud alerts in Microsoft Sentinel is to stream your audit logs into Microsoft Sentinel:
+- **Audit logs streaming** - An alternative way to investigate Defender for Cloud alerts in Microsoft Sentinel is to stream your audit logs into Microsoft Sentinel:
- [Connect Windows security events](../sentinel/connect-windows-security-events.md) - [Collect data from Linux-based sources using Syslog](../sentinel/connect-syslog.md) - [Connect data from Azure Activity log](../sentinel/data-connectors/azure-activity.md)
-#### Stream alerts with Microsoft Graph Security API
+### Stream alerts with Microsoft Graph Security API
Defender for Cloud has out-of-the-box integration with Microsoft Graph Security API. No configuration is required and there are no extra costs.
-You can use this API to stream alerts from the **entire tenant** (and data from many other Microsoft Security products) into third-party SIEMs and other popular platforms:
--- **Splunk Enterprise and Splunk Cloud** - [Use the Microsoft Graph Security API Add-On for Splunk](https://splunkbase.splunk.com/app/4564/)-- **Power BI** - [Connect to the Microsoft Graph Security API in Power BI Desktop](/power-bi/connect-data/desktop-connect-graph-security)-- **ServiceNow** - [Follow the instructions to install and configure the Microsoft Graph Security API application from the ServiceNow Store](https://docs.servicenow.com/bundle/orlando-security-management/page/product/secops-integration-sir/secops-integration-ms-graph/task/ms-graph-install.html)-- **QRadar** - [IBM's Device Support Module for Microsoft Defender for Cloud via Microsoft Graph API](https://www.ibm.com/support/knowledgecenter/SS42VS_DSM/com.ibm.dsm.doc/c_dsm_guide_ms_azure_security_center_overview.html)-- **Palo Alto Networks**, **Anomali**, **Lookout**, **InSpark**, and more - [Microsoft Graph Security API](https://www.microsoft.com/security/business/graph-security-api#office-MultiFeatureCarousel-09jr2ji)
+You can use this API to stream alerts from the entire tenant, along with data from many other Microsoft security products, into third-party SIEMs and other popular platforms (a sketch of calling the API follows this list):
-[Learn more about Microsoft Graph Security API](https://www.microsoft.com/security/business/graph-security-api).
+- **Splunk Enterprise and Splunk Cloud** - Use the [Microsoft Graph Security API Add-On for Splunk](https://splunkbase.splunk.com/app/4564/)
+- **Power BI** - Connect to the [Microsoft Graph Security API in Power BI Desktop](/power-bi/connect-data/desktop-connect-graph-security)
+- **ServiceNow** - Follow the instructions to [install and configure the Microsoft Graph Security API application from the ServiceNow Store](https://docs.servicenow.com/bundle/orlando-security-management/page/product/secops-integration-sir/secops-integration-ms-graph/task/ms-graph-install.html)
+- **QRadar** - Use [IBM's Device Support Module for Defender for Cloud via Microsoft Graph API](https://www.ibm.com/support/knowledgecenter/SS42VS_DSM/com.ibm.dsm.doc/c_dsm_guide_ms_azure_security_center_overview.html)
+- **Palo Alto Networks**, **Anomali**, **Lookout**, **InSpark**, and more. Learn more about [Microsoft Graph Security API](https://www.microsoft.com/security/business/graph-security-api).
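As a quick, hedged sketch of what the API exposes, you can read a few tenant-wide alerts with `az rest`; this assumes your signed-in identity has been granted the appropriate Microsoft Graph security permissions (for example, `SecurityEvents.Read.All`):

```azurecli-interactive
# Read the five most recent security alerts for the tenant through the Microsoft Graph Security API.
az rest --method get \
  --url 'https://graph.microsoft.com/v1.0/security/alerts?$top=5' \
  --resource 'https://graph.microsoft.com'
```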
-#### Stream alerts with Azure Monitor
-Use Defender for Cloud's [continuous export](continuous-export.md) feature to connect Defender for Cloud with Azure monitor via Azure Event Hubs and stream alerts into **ArcSight**, **SumoLogic**, Syslog servers, **LogRhythm**, **Logz.io Cloud Observability Platform**, and other monitoring solutions.
-Learn more in [Stream alerts to monitoring solutions](export-to-siem.md).
+### Stream alerts with Azure Monitor
-This can also be done at the Management Group level using Azure Policy, see [Create continuous export automation configurations at scale](continuous-export.md).
+Use Defender for Cloud's [continuous export](continuous-export.md) feature to connect to Azure Monitor via Azure Event Hubs and stream alerts into **ArcSight**, **SumoLogic**, Syslog servers, **LogRhythm**, **Logz.io Cloud Observability Platform**, and other monitoring solutions.
-> [!TIP]
-> To view the event schemas of the exported data types, visit the [Event Hub event schemas](https://aka.ms/ASCAutomationSchemas).
+- You can also configure continuous export at the management group level by using Azure Policy. Learn about [creating continuous export automation configurations at scale](continuous-export.md).
+- To view the event schemas of the exported data types, review the [Event Hubs event schemas](https://aka.ms/ASCAutomationSchemas).
-### Integrate Defender for Cloud with an Endpoint Detection and Response (EDR) solution
-
-#### Microsoft Defender for Endpoint
+Learn more about [streaming alerts to monitoring solutions](export-to-siem.md).
-[Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/) is a holistic, cloud-delivered endpoint security solution.
-Defender for Cloud's integrated CWPP for machines, [Microsoft Defender for Servers](plan-defender-for-servers.md), includes an integrated license for [Microsoft Defender for Endpoint](https://www.microsoft.com/security/business/endpoint-security/microsoft-defender-endpoint). Together, they provide comprehensive endpoint detection and response (EDR) capabilities. For more information, see [Protect your endpoints](integration-defender-for-endpoint.md?tabs=linux).
-When Defender for Endpoint detects a threat, it triggers an alert. The alert is shown in Defender for Cloud. From Defender for Cloud, you can also pivot to the Defender for Endpoint console and perform a detailed investigation to uncover the scope of the attack. Learn more about [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint).
+### Integrate with EDR solutions
-#### Other EDR solutions
-
-Defender for Cloud provides hardening recommendations to ensure you're securing your organization's resources according to the guidance of [Azure Security Benchmark](/security/benchmark/azure/introduction). One of the controls in the benchmark relates to endpoint security: [ES-1: Use Endpoint Detection and Response (EDR)](/security/benchmark/azure/security-controls-v2-endpoint-security).
-
-There are two recommendations in Defender for Cloud to ensure you've enabled endpoint protection and it's running well. These recommendations are checking for the presence and operational health of EDR solutions from:
--- Trend Micro-- Symantec-- McAfee-- Sophos-
-Learn more in [Endpoint protection assessment and recommendations in Microsoft Defender for Cloud](endpoint-protection-recommendations-technical.md).
-
-### Apply your Zero Trust strategy to hybrid and multicloud scenarios
-
-With cloud workloads commonly spanning multiple cloud platforms, cloud security services must do the same.
-
-Microsoft Defender for Cloud protects workloads wherever they're running: in Azure, on-premises, Amazon Web Services (AWS), or Google Cloud Platform (GCP).
+#### Microsoft Defender for Endpoint
-#### Integrate Defender for Cloud with on-premises machines
+[Defender for Endpoint](/microsoft-365/security/defender-endpoint/) is a holistic, cloud-delivered endpoint security solution. The Defender for Cloud servers workload plan, [Defender for Servers](plan-defender-for-servers.md), includes an integrated license for [Defender for Endpoint](https://www.microsoft.com/security/business/endpoint-security/microsoft-defender-endpoint). Together, they provide comprehensive EDR capabilities. Learn more about [protecting endpoints](integration-defender-for-endpoint.md?tabs=linux).
-To secure hybrid cloud workloads, you can extend Defender for Cloud's protections by connecting on-premises machines to [Azure Arc enabled servers](../azure-arc/servers/overview.md).
+When Defender for Endpoint detects a threat, it triggers an alert. The alert is shown in Defender for Cloud. From Defender for Cloud, you can pivot to the Defender for Endpoint console and perform a detailed investigation to uncover the scope of the attack.
-Learn about how to connect machines in [Connect your non-Azure machines to Defender for Cloud](quickstart-onboard-machines.md).
+#### Other EDR solutions
-#### Integrate Defender for Cloud with other cloud environments
+Defender for Cloud provides health assessment of supported versions of EDR solutions.
-To view the security posture of **Amazon Web Services** machines in Defender for Cloud, onboard AWS accounts into Defender for Cloud. This integrates AWS Security Hub and Microsoft Defender for Cloud for a unified view of Defender for Cloud recommendations and AWS Security Hub findings and provides a range of benefits as described in [Connect your AWS accounts to Microsoft Defender for Cloud](quickstart-onboard-aws.md).
+Defender for Cloud provides recommendations based on the [Microsoft security benchmark](/security/benchmark/azure/introduction). One of the controls in the benchmark relates to endpoint security: [ES-1: Use Endpoint Detection and Response (EDR)](/security/benchmark/azure/security-controls-v2-endpoint-security). There are two recommendations to ensure you've enabled endpoint protection and it's running well. Learn more about [assessment for supported EDR solutions](endpoint-protection-recommendations-technical.md) in Defender for Cloud.
-To view the security posture of **Google Cloud Platform** machines in Defender for Cloud, onboard GCP accounts into Defender for Cloud. This integrates GCP Security Command and Microsoft Defender for Cloud for a unified view of Defender for Cloud recommendations and GCP Security Command Center findings and provides a range of benefits as described in [Connect your GCP accounts to Microsoft Defender for Cloud](quickstart-onboard-gcp.md).
## Next steps
-To learn more about Microsoft Defender for Cloud and Microsoft Defender for Cloud, see the complete [Defender for Cloud documentation](index.yml).
+Start planning [multicloud protection](plan-multicloud-security-get-started.md).
dev-box How To Configure Intune Conditional Access Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-intune-conditional-access-policies.md
After creating your device group and validating that your dev box devices are members,
| | | | | Windows 365 | 0af06dc6-e4b5-4f28-818e-e78e62d137a5 | Used when retrieving the list of resources for the user and when users initiate actions on their dev box like Restart. | | Azure Virtual Desktop | 9cdead84-a844-4324-93f2-b2e6bb768d07 | Used to authenticate to the Gateway during the connection and when the client sends diagnostic information to the service. <br>Might also appear as Windows Virtual Desktop. |
- | Microsoft Remote Desktop | a4a365df-50f1-4397-bc59-1a1564b8bb9c | Used to authenticate users to the dev box. <br>Only needed when you configure single sign-on in a provisioning policy. |
+ | Microsoft Remote Desktop | a4a365df-50f1-4397-bc59-1a1564b8bb9c | Used to authenticate users to the dev box. <br>Only needed when you configure single sign-on in a provisioning policy. |
+ | Microsoft Developer Portal | 0140a36d-95e1-4df5-918c-ca7ccd1fafc9 | Used to manage the developer portal. |
1. You should match your conditional access policies across these apps, which ensures that the policy applies to the developer portal, the connection to the Gateway, and the dev box, for a consistent experience. If you want to exclude apps, you must also choose all of these apps.
dms Tutorial Mysql Azure Single To Flex Offline Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mysql-azure-single-to-flex-offline-portal.md
To create a migration project, perform the following steps.
:::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/11-select-create.png" alt-text="Screenshot of selecting a new migration project.":::
-3. On the **New migration project** page, specify a name for the project, in the Source server type selection box, select **Azure Database For MySQL ΓÇô Single Server**, in the Target server type selection box, select **Azure Database For MySQL**, in the **Migration activity type** selection box, select **Online migration**, and then select **Create and run activity**.
+1. On the **New migration project** page, specify a name for the project. In the **Source server type** selection box, select **Azure Database for MySQL – Single Server**. In the **Target server type** selection box, select **Azure Database for MySQL**. In the **Migration activity type** selection box, select **Offline migration**, and then select **Create and run activity**.
- > [!NOTE]
+ > [!NOTE]
> Selecting Create project only as the migration activity type will only create the migration project; you can then run the migration project at a later time.
- :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/12-create-project-offline.png" alt-text="Screenshot of a Create a new migration project.":::
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/12-create-project-offline.png" alt-text="Screenshot of creating a new migration project.":::
### Configure the migration project
event-grid Communication Services Advanced Messaging Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/communication-services-advanced-messaging-events.md
Title: Azure Communication Services - Advanced Messaging events description: This article describes how to use Azure Communication Services as an Event Grid event source for Advanced Messaging Events. Previously updated : 09/30/2022 Last updated : 07/15/2024 # Azure Communication Services - Advanced Messaging events
-This article provides the properties and schema for communication services advanced messaging events. For an introduction to event schemas, see [Azure Event Grid event schema](event-schema.md).
+This article provides the properties and schema for Communication Services Advanced Messaging events. For an introduction to event schemas, see [Azure Event Grid event schema](event-schema.md).
## Event types Azure Communication Services emits the following Advanced Messaging event types:
-| Event type | Description |
-| -- | - |
-| Microsoft.Communication.AdvancedMessageReceived | Published when Communication Service receives a WhatsApp message. |
-| Microsoft.Communication.AdvancedMessageDeliveryStatusUpdated | Published when the WhatsApp sends status of message notification as sent/read/failed. |
+| Event type | Description |
+|--|-|
+| [Microsoft.Communication.AdvancedMessageReceived](#microsoftcommunicationadvancedmessagereceived-event) | Published when Communication Services Advanced Messaging receives a message. |
+| [Microsoft.Communication.AdvancedMessageDeliveryStatusUpdated](#microsoftcommunicationadvancedmessagedeliverystatusupdated-event) | Published when Communication Services Advanced Messaging receives a status update for a previously sent message notification. |
## Event responses
When an event is triggered, the Event Grid service sends data about that event t
This section contains an example of what that data would look like for each event.
-### Microsoft.Communication.AdvancedMessageReceived event
+### Microsoft.Communication.AdvancedMessageReceived event
+
+Published when Communication Services Advanced Messaging receives a message.
+
+Example scenario: A WhatsApp user sends a WhatsApp message to a WhatsApp Business Number that is connected to an active Advanced Messaging channel in a Communication Services resource. As a result, a `Microsoft.Communication.AdvancedMessageReceived` event with the contents of the user's WhatsApp message is published.
+
+#### Attribute list
+
+Details for the attributes specific to `Microsoft.Communication.AdvancedMessageReceived` events.
+
+| Attribute | Type | Nullable | Description |
+|:|:-:|:--:||
+| channelType | `string` | ✔️ | Channel type of the channel that the message was sent on. For example, "whatsapp". |
+| from | `string` | ✔️ | Sender ID that sent the message. |
+| to | `string` | ✔️ | The channel ID that received the message, formatted as a GUID. |
+| receivedTimestamp | `DateTimeOffset` | ✔️ | Timestamp of the message. |
+| content | `string` | ✔️ | The text content in the message. |
+| media | [`MediaContent`](#mediacontent) | ✔️ | Contains details about the received media. |
+| context | [`MessageContext`](#messagecontext) | ✔️ | Contains details about the message that this message is replying to. |
+| button | [`ButtonContent`](#buttoncontent) | ✔️ | Contains details about the received button reply. |
+| interactive | [`InteractiveContent`](#interactivecontent) | ✔️ | Contains details about the received interactive content. |
+
+##### MediaContent
+
+| Attribute | Type | Nullable | Description |
+|:-|:--:|:--:|--|
+| mimeType | `string` | ❌ | MIME type of the media. Used to determine the correct file type for media downloads. |
+| id | `string` | ❌ | Media ID. Used to retrieve media for download, formatted as a GUID. |
+| fileName | `string` | ✔️ | The filename of the underlying media file as specified when uploaded. |
+| caption | `string` | ✔️ | Caption text for the media object, if supported and provided. |
+
+##### MessageContext
+
+| Attribute | Type | Nullable | Description |
+|:-|:--:|:--:||
+| from | `string` | ✔️ | The WhatsApp ID for the customer who replied to an inbound message. |
+| id | `string` | ✔️ | The message ID for the sent message for an inbound reply. |
+
+##### ButtonContent
+
+| Attribute | Type | Nullable | Description |
+|:-|:--:|:--:|-|
+| text | `string` | ✔️ | The text of the button. |
+| payload | `string` | ✔️ | The payload, set up by the business, of the button that the user selected. |
+
+##### InteractiveContent
+
+| Attribute | Type | Nullable | Description |
+|:|:--:|:--:||
+| type | [`InteractiveReplyType`](#interactivereplytype) | ✔️ | Type of the interactive content. |
+| buttonReply | [`InteractiveButtonReplyContent`](#interactivebuttonreplycontent) | ✔️ | Sent when a customer selects a button. |
+| listReply | [`InteractiveListReplyContent`](#interactivelistreplycontent) | ✔️ | Sent when a customer selects an item from a list. |
+
+##### InteractiveReplyType
+
+| Value | Description |
+|:|--|
+| buttonReply | The interactive content is a button. |
+| listReply | The interactive content is a list. |
+| unknown | The interactive content is unknown. |
+
+##### InteractiveButtonReplyContent
+
+| Attribute | Type | Nullable | Description |
+|:-|:--:|:--:|-|
+| id | `string` | ✔️ | ID of the button. |
+| title | `string` | ✔️ | Title of the button. |
+
+##### InteractiveListReplyContent
+
+| Attribute | Type | Nullable | Description |
+|:|:--:|:--:|-|
+| id | `string` | ✔️ | ID of the selected list item. |
+| title | `string` | ✔️ | Title of the selected list item. |
+| description | `string` | ✔️ | Description of the selected row. |
+
+#### Examples
+
+##### Text message received
```json [{
- "id": "fdc64eca-390d-4974-abd6-1a13ccbe3160",
- "topic": "/subscriptions/{subscription-id}/resourcegroups/{resourcegroup-name}/providers/microsoft.communication/communicationservices/acsxplatmsg-test",
- "subject": "advancedMessage/sender/{sender@id}/recipient/00000000-0000-0000-0000-000000000000",
+ "id": "00000000-0000-0000-0000-000000000000",
+ "topic": "/subscriptions/{subscription-id}/resourcegroups/{resourcegroup-name}/providers/microsoft.communication/communicationservices/{communication-services-resource-name}",
+ "subject": "advancedMessage/sender/{sender@id}/recipient/11111111-1111-1111-1111-111111111111",
"data": { "content": "Hello", "channelType": "whatsapp", "from": "{sender@id}",
- "to": "00000000-0000-0000-0000-000000000000",
+ "to": "11111111-1111-1111-1111-111111111111",
"receivedTimestamp": "2023-07-06T18:30:19+00:00" }, "eventType": "Microsoft.Communication.AdvancedMessageReceived",
This section contains an example of what that data would look like for each even
}] ```
-### Microsoft.Communication.AdvancedMessageDeliveryStatusUpdated event
+##### Media message received
```json [{
- "id": "48cd6446-01dd-479f-939c-171c86c46700",
- "topic": "/subscriptions/{subscription-id}/resourcegroups/{resourcegroup-name}/providers/microsoft.communication/communicationservices/acsxplatmsg-test",
- "subject": "advancedMessage/00000000-0000-0000-0000-000000000000/status/Failed",
+ "id": "00000000-0000-0000-0000-000000000000",
+ "topic": "/subscriptions/{subscription-id}/resourcegroups/{resourcegroup-name}/providers/microsoft.communication/communicationservices/{communication-services-resource-name}",
+ "subject": "advancedMessage/sender/{sender@id}/recipient/11111111-1111-1111-1111-111111111111",
"data": {
- "messageId": "00000000-0000-0000-0000-000000000000",
+ "channelType": "whatsapp",
+ "media": {
+ "mimeType": "image/jpeg",
+ "id": "22222222-2222-2222-2222-222222222222",
+ "caption": "This is a media caption"
+ },
+ "from": "{sender@id}",
+ "to": "11111111-1111-1111-1111-111111111111",
+ "receivedTimestamp": "2023-07-06T18:30:19+00:00"
+ },
+ "eventType": "Microsoft.Communication.AdvancedMessageReceived",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2023-07-06T18:30:22.1921716Z"
+}]
+```
+
+### Microsoft.Communication.AdvancedMessageDeliveryStatusUpdated event
+
+Published when Communication Services Advanced Messaging receives a status update for a previously sent message notification.
+
+Example scenario: Contoso uses an active Advanced Messaging channel connected to a WhatsApp Business Account to send a WhatsApp message to a WhatsApp user. WhatsApp then replies to Contoso's Advanced Messaging channel with the status of the previously sent message. As a result, a `Microsoft.Communication.AdvancedMessageDeliveryStatusUpdated` event containing the message status is published.
+
+#### Attribute list
+
+Details for the attributes specific to `Microsoft.Communication.AdvancedMessageDeliveryStatusUpdated` events.
+
+| Attribute | Type | Nullable | Description |
+|:|:--:|:--:|-|
+| channelType | `string` | ✔️ | Channel type of the channel that the message was sent on. |
+| from | `string` | ✔️ | The channel ID that sent the message, formatted as a GUID. |
+| to | `string` | ✔️ | Recipient ID that the message was sent to. |
+| receivedTimestamp | `DateTimeOffset` | ✔️ | Timestamp of the message. |
+| messageId | `string` | ✔️ | The ID of the message, formatted as a GUID. |
+| status | `string` | ✔️ | Status of the message. Possible values include `Sent`, `Delivered`, `Read`, and `Failed`. For more information, see [Status](#status). |
+| error | [`ChannelEventError`](#channeleventerror) | ✔️ | Contains the details of an error. |
+
+##### ChannelEventError
+
+| Attribute | Type | Nullable | Description |
+|:|:--:|:--:||
+| channelCode | `string` | ✔️ | The error code received on this channel. |
+| channelMessage | `string` | ✔️ | The error message received on this channel. |
+
+##### Status
+
+| Value | Description |
+|:-||
+| Sent | The messaging service sent the message to the recipient. |
+| Delivered | The message recipient received the message. |
+| Read | The message recipient read the message. |
+| Failed | The message failed to send correctly. |
+
+#### Examples
+
+##### Update for message delivery
+
+```json
+[{
+ "id": "00000000-0000-0000-0000-000000000000",
+ "topic": "/subscriptions/{subscription-id}/resourcegroups/{resourcegroup-name}/providers/microsoft.communication/communicationservices/{communication-services-resource-name}",
+ "subject": "advancedMessage/22222222-2222-2222-2222-222222222222/status/Sent",
+ "data": {
+ "messageId": "22222222-2222-2222-2222-222222222222",
"status": "Sent", "channelType": "whatsapp", "from": "{sender@id}",
This section contains an example of what that data would look like for each even
}] ```
+##### Update for message delivery with failure
+ ```json [{
- "id": "48cd6446-01dd-479f-939c-171c86c46700",
+ "id": "00000000-0000-0000-0000-000000000000",
"topic": "/subscriptions/{subscription-id}/resourcegroups/{resourcegroup-name}/providers/microsoft.communication/communicationservices/acsxplatmsg-test",
- "subject": "advancedMessage/00000000-0000-0000-0000-000000000000/status/Failed",
+ "subject": "advancedMessage/22222222-2222-2222-2222-222222222222/status/Failed",
"data": {
- "messageId": "00000000-0000-0000-0000-000000000000",
+ "messageId": "22222222-2222-2222-2222-222222222222",
"status": "Failed", "channelType": "whatsapp", "from": "{sender@id}",
This section contains an example of what that data would look like for each even
}] ```
-> [!NOTE]
-> Possible values for `Status` are `Sent`, `Delivered`, `Read` and `Failed`.
- ## Quickstart For a quickstart that shows how to subscribe for Advanced Messaging events using web hooks, see [Quickstart: Handle Advanced Messaging events](../communication-services/quickstarts/advanced-messaging/whatsapp/handle-advanced-messaging-events.md).
event-grid Event Grid Dotnet Get Started Pull Delivery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-grid-dotnet-get-started-pull-delivery.md
In this quickstart, you do the following steps:
> [!NOTE]
-> This quick start provides step-by-step instructions to implement a simple scenario of sending a batch of messages to an Event Grid Namespace Topic and then receiving them. For an overview of the .NET client library, see [Azure Event Grid client library for .NET](https://github.com/Azure/azure-sdk-for-net/blob/Azure.Messaging.EventGrid_4.17.0-beta.1/sdk/eventgrid/Azure.Messaging.EventGridV2/src/Generated/EventGridClient.cs). For more samples, see [Event Grid .NET samples on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/feature/eventgrid/namespaces/sdk/eventgrid/Azure.Messaging.EventGrid/samples).
+> This quick start provides step-by-step instructions to implement a simple scenario of sending a batch of messages to an Event Grid Namespace Topic and then receiving them. For an overview of the .NET client library, see [Azure Event Grid client library for .NET](https://github.com/Azure/azure-sdk-for-net/blob/Azure.Messaging.EventGrid_4.17.0-beta.1/sdk/eventgrid/Azure.Messaging.EventGridV2/src/Generated/EventGridClient.cs). For more samples, see [Event Grid .NET samples on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/eventgrid/Azure.Messaging.EventGrid.Namespaces/samples).
## Prerequisites
expressroute Expressroute Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-faqs.md
Previously updated : 04/09/2024 Last updated : 07/18/2024
Supported bandwidth offers:
### What's the maximum MTU supported?
-ExpressRoute and other hybrid networking services--VPN and vWAN--supports a maximum MTU of 1400 bytes.
+ExpressRoute supports the standard internet MTU of 1500 bytes.
See [TCP/IP performance tuning for Azure VMs](../virtual-network/virtual-network-tcpip-performance-tuning.md) for tuning the MTU of your VMs. ### Which service providers are available?
firewall Tutorial Firewall Dnat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/tutorial-firewall-dnat.md
# Filter inbound Internet traffic with Azure Firewall DNAT using the Azure portal
-You can configure Azure Firewall Destination Network Address Translation (DNAT) to translate and filter inbound Internet traffic to your subnets. When you configure DNAT, the NAT rule collection action is set to **Dnat**. Each rule in the NAT rule collection can then be used to translate your firewall public IP address and port to a private IP address and port. DNAT rules implicitly add a corresponding network rule to allow the translated traffic. For security reasons, the recommended approach is to add a specific Internet source to allow DNAT access to the network and avoid using wildcards. To learn more about Azure Firewall rule processing logic, see [Azure Firewall rule processing logic](rule-processing.md).
+You can configure Azure Firewall Destination Network Address Translation (DNAT) to translate and filter inbound Internet traffic to your subnets. When you configure DNAT, the NAT rule collection action is set to **Dnat**. Each rule in the NAT rule collection can then be used to translate your firewall public IP address and port to a private or public IP address and port. DNAT rules implicitly add a corresponding network rule to allow the translated traffic. For security reasons, the recommended approach is to add a specific Internet source to allow DNAT access to the network and avoid using wildcards. To learn more about Azure Firewall rule processing logic, see [Azure Firewall rule processing logic](rule-processing.md).
> [!NOTE] > This article uses classic Firewall rules to manage the firewall. The preferred method is to use [Firewall Policy](../firewall-manager/policy-overview.md). To complete this procedure using Firewall Policy, see [Tutorial: Filter inbound Internet traffic with Azure Firewall policy DNAT using the Azure portal](tutorial-firewall-dnat-policy.md)
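As an illustration of the classic-rules approach this article uses, the following Azure CLI sketch creates a DNAT rule collection; it assumes the `azure-firewall` CLI extension is installed, the resource names and addresses are placeholders, and flag spellings can vary between extension versions:

```azurecli-interactive
# Translate TCP 3389 on the firewall's public IP to a private workload address (placeholder values).
az network firewall nat-rule create \
  --resource-group "Test-FW-RG" \
  --firewall-name "Test-FW01" \
  --collection-name "NatRuleCollection01" \
  --name "AllowRdpInbound" \
  --priority 200 \
  --action Dnat \
  --protocols TCP \
  --source-addresses "203.0.113.0/24" \
  --destination-addresses "<firewall-public-ip>" \
  --destination-ports 3389 \
  --translated-address "10.0.2.4" \
  --translated-port 3389
```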
frontdoor Front Door Rules Engine Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-rules-engine-actions.md
Previously updated : 05/07/2024 Last updated : 07/16/2024 zone_pivot_groups: front-door-tiers
In this example, we append the value `AdditionalValue` to the `MyRequestHeader`
+> [!NOTE]
+> Certain Azure Front Door reserved headers can't be modified using rules engine actions, including the actions to modify request headers and response headers. The following list of reserved headers can't be modified, along with any headers prefixed with `x-ec` and `x-fd`.
+>
+> * `Accept-Ranges`
+> * `Host`
+> * `Connection`
+> * `Content-Length`
+> * `Transfer-Encoding`
+> * `TE`
+> * `Last-Modified`
+> * `Keep-Alive`
+> * `Expect`
+> * `Upgrade`
+> * `If-Modified-Since`
+> * `If-Unmodified-Since`
+> * `If-None-Match`
+> * `If-Match`
+> * `Range`
+> * `If-Range`
+> * `X-Ms-Via`
+> * `X-Ms-Force-Refresh`
+> * `X-MSEdge-Ref`
+> * `Warning`
+> * `Forwarded`
+> * `Via`
+> * `X-Forwarded-For`
+> * `X-Forwarded-Proto`
+> * `X-Forwarded-Host`
+> * `X-Azure-RequestChain`
+> * `X-Azure-FDID`
+> * `X-Azure-RequestChainv2`
+> * `X-Azure-Ref`
+ ## <a name="ModifyResponseHeader"></a> Modify response header Use the **modify response header** action to modify headers that are present in responses before they're returned to your clients.
In this example, we delete the header with the name `X-Powered-By` from the resp
+> [!NOTE]
+> Certain Azure Front Door reserved headers can't be modified using rules engine actions, including the actions to modify request headers and response headers. The following list of reserved headers can't be modified, along with any headers prefixed with `x-ec` and `x-fd`.
+>
+> * `Accept-Ranges`
+> * `Host`
+> * `Connection`
+> * `Content-Length`
+> * `Transfer-Encoding`
+> * `TE`
+> * `Last-Modified`
+> * `Keep-Alive`
+> * `Expect`
+> * `Upgrade`
+> * `If-Modified-Since`
+> * `If-Unmodified-Since`
+> * `If-None-Match`
+> * `If-Match`
+> * `Range`
+> * `If-Range`
+> * `X-Ms-Via`
+> * `X-Ms-Force-Refresh`
+> * `X-MSEdge-Ref`
+> * `Warning`
+> * `Forwarded`
+> * `Via`
+> * `X-Forwarded-For`
+> * `X-Forwarded-Proto`
+> * `X-Forwarded-Host`
+> * `X-Azure-RequestChain`
+> * `X-Azure-FDID`
+> * `X-Azure-RequestChainv2`
+> * `X-Azure-Ref`
+ ## <a name="UrlRedirect"></a> URL redirect Use the **URL redirect** action to redirect clients to a new URL. Clients are sent a redirection response from Front Door. Azure Front Door supports dynamic capture of URL path with `{url_path:seg#}` server variable, and converts URL path to lowercase or uppercase with `{url_path.tolower}` or `{url_path.toupper}`. For more information, see [Server variables](rule-set-server-variables.md).
governance Create Management Group Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/create-management-group-azure-cli.md
directory. You receive a notification when the process is complete. For more inf
locally. To find the version, run `az --version`. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). -- Any Azure AD user in the tenant can create a management group without the management group write
+- Any Microsoft Entra ID user in the tenant can create a management group without the management group write
permission assigned to that user if [hierarchy protection](./how-to/protect-resource-hierarchy.md#settingrequire-authorization) isn't enabled. This new management group becomes a child of the Root Management Group or the [default management group](./how-to/protect-resource-hierarchy.md#settingdefault-management-group) and the creator is given an "Owner" role assignment. Management group service allows this ability so that role assignments aren't needed at the root level. No users have access to the Root
- Management Group when it's created. To avoid the hurdle of finding the Azure AD Global Admins to
+ Management Group when it's created. To avoid the hurdle of finding the Microsoft Entra ID Global Admins to
start using management groups, we allow the creation of the initial management groups at the root level.
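For reference, a minimal Azure CLI sketch of creating a management group; the group name and display name below are placeholders, and omitting `--parent` creates the group under the root management group:

```azurecli-interactive
# Create a management group under the root management group.
az account management-group create --name "ContosoGroup" --display-name "Contoso Group"
```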
governance Create Management Group Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/create-management-group-dotnet.md
directory. You receive a notification when the process is complete. For more inf
[Azure management libraries for .NET authentication](/dotnet/azure/sdk/authentication#mgmt-auth). Skip the step to install the .NET Core packages as we'll do that in the next steps. -- Any Azure AD user in the tenant can create a management group without the management group write
+- Any Microsoft Entra ID user in the tenant can create a management group without the management group write
permission assigned to that user if [hierarchy protection](./how-to/protect-resource-hierarchy.md#settingrequire-authorization) isn't enabled. This new management group becomes a child of the Root Management Group or the [default management group](./how-to/protect-resource-hierarchy.md#settingdefault-management-group) and the creator is given an "Owner" role assignment. Management group service allows this ability so that role assignments aren't needed at the root level. No users have access to the Root
- Management Group when it's created. To avoid the hurdle of finding the Azure AD Global Admins to
+ Management Group when it's created. To avoid the hurdle of finding the Microsoft Entra ID Global Admins to
start using management groups, we allow the creation of the initial management groups at the root level.
governance Create Management Group Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/create-management-group-go.md
directory. You receive a notification when the process is complete. For more inf
[Azure management libraries for .NET authentication](/dotnet/azure/sdk/authentication#mgmt-auth). Skip the step to install the .NET Core packages as we'll do that in the next steps. -- Any Azure AD user in the tenant can create a management group without the management group write
+- Any Microsoft Entra ID user in the tenant can create a management group without the management group write
permission assigned to that user if [hierarchy protection](./how-to/protect-resource-hierarchy.md#settingrequire-authorization) isn't enabled. This new management group becomes a child of the Root Management Group or the [default management group](./how-to/protect-resource-hierarchy.md#settingdefault-management-group) and the creator is given an "Owner" role assignment. Management group service allows this ability so that role assignments aren't needed at the root level. No users have access to the Root
- Management Group when it's created. To avoid the hurdle of finding the Azure AD Global Admins to
+ Management Group when it's created. To avoid the hurdle of finding the Microsoft Entra ID Global Admins to
start using management groups, we allow the creation of the initial management groups at the root level.
governance Create Management Group Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/create-management-group-javascript.md
directory. You receive a notification when the process is complete. For more inf
- Before you start, make sure that at least version 12 of [Node.js](https://nodejs.org/) is installed. -- Any Azure AD user in the tenant can create a management group without the management group write
+- Any Microsoft Entra ID user in the tenant can create a management group without the management group write
permission assigned to that user if [hierarchy protection](./how-to/protect-resource-hierarchy.md#settingrequire-authorization) isn't enabled. This new management group becomes a child of the Root Management Group or the [default management group](./how-to/protect-resource-hierarchy.md#settingdefault-management-group) and the creator is given an "Owner" role assignment. Management group service allows this ability so that role assignments aren't needed at the root level. No users have access to the Root
- Management Group when it's created. To avoid the hurdle of finding the Azure AD Global Admins to
+ Management Group when it's created. To avoid the hurdle of finding the Microsoft Entra ID Global Admins to
start using management groups, we allow the creation of the initial management groups at the root level.
governance Create Management Group Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/create-management-group-portal.md
directory. You receive a notification when the process is complete. For more inf
- If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin. -- Any Azure AD user in the tenant can create a management group without the management group write
+- Any Microsoft Entra ID user in the tenant can create a management group without the management group write
permission assigned to that user if [hierarchy protection](./how-to/protect-resource-hierarchy.md#settingrequire-authorization) isn't enabled. This new management group becomes a child of the Root Management Group or the [default management group](./how-to/protect-resource-hierarchy.md#settingdefault-management-group) and the creator is given an "Owner" role assignment. Management group service allows this ability so that role assignments aren't needed at the root level. No users have access to the Root
- Management Group when it's created. To avoid the hurdle of finding the Azure AD Global Admins to
+ Management Group when it's created. To avoid the hurdle of finding the Microsoft Entra ID Global Admins to
start using management groups, we allow the creation of the initial management groups at the root level.
governance Create Management Group Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/create-management-group-powershell.md
directory. You receive a notification when the process is complete. For more inf
- Before you start, make sure that the latest version of Azure PowerShell is installed. See [Install Azure PowerShell module](/powershell/azure/install-azure-powershell) for detailed information. -- Any Azure AD user in the tenant can create a management group without the management group write
+- Any Microsoft Entra ID user in the tenant can create a management group without the management group write
permission assigned to that user if [hierarchy protection](./how-to/protect-resource-hierarchy.md#settingrequire-authorization) isn't enabled. This new management group becomes a child of the Root Management Group or the [default management group](./how-to/protect-resource-hierarchy.md#settingdefault-management-group) and the creator is given an "Owner" role assignment. Management group service allows this ability so that role assignments aren't needed at the root level. No users have access to the Root
- Management Group when it's created. To avoid the hurdle of finding the Azure AD Global Admins to
+ Management Group when it's created. To avoid the hurdle of finding the Microsoft Entra ID Global Admins to
start using management groups, we allow the creation of the initial management groups at the root level.
governance Create Management Group Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/create-management-group-python.md
directory. You receive a notification when the process is complete. For more inf
- If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin. -- Any Azure AD user in the tenant can create a management group without the management group write
+- Any Microsoft Entra ID user in the tenant can create a management group without the management group write
permission assigned to that user if [hierarchy protection](./how-to/protect-resource-hierarchy.md#settingrequire-authorization) isn't enabled. This new management group becomes a child of the Root Management Group or the [default management group](./how-to/protect-resource-hierarchy.md#settingdefault-management-group) and the creator is given an "Owner" role assignment. Management group service allows this ability so that role assignments aren't needed at the root level. No users have access to the Root
- Management Group when it's created. To avoid the hurdle of finding the Azure AD Global Admins to
+ Management Group when it's created. To avoid the hurdle of finding the Microsoft Entra ID Global Admins to
start using management groups, we allow the creation of the initial management groups at the root level.
governance Create Management Group Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/create-management-group-rest-api.md
directory. You receive a notification when the process is complete. For more inf
[Invoke-RestMethod](/powershell/module/microsoft.powershell.utility/invoke-restmethod) or [Postman](https://www.postman.com). -- Any Azure AD user in the tenant can create a management group without the management group write
+- Any Microsoft Entra ID user in the tenant can create a management group without the management group write
permission assigned to that user if [hierarchy protection](./how-to/protect-resource-hierarchy.md#settingrequire-authorization) isn't enabled. This new management group becomes a child of the Root Management Group or the [default management group](./how-to/protect-resource-hierarchy.md#settingdefault-management-group) and the creator is given an "Owner" role assignment. Management group service allows this ability so that role assignments aren't needed at the root level. No users have access to the Root
- Management Group when it's created. To avoid the hurdle of finding the Azure AD Global Admins to
+ Management Group when it's created. To avoid the hurdle of finding the Microsoft Entra ID Global Admins to
start using management groups, we allow the creation of the initial management groups at the root level.
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/overview.md
will inherit down the hierarchy like any built-in role. For information about th
[Defining and creating a custom role](../../role-based-access-control/custom-roles.md) doesn't change with the inclusion of management groups. Use the full path to define the management group
-**/providers/Microsoft.Management/managementgroups/{_groupId_}**.
+`/providers/Microsoft.Management/managementgroups/{_groupId_}`.
Use the management group's ID and not the management group's display name. This common error happens since both are custom-defined fields when creating a management group.
since both are custom-defined fields when creating a management group.
"IsCustom": true, "Description": "This role provides members understand custom roles.", "Actions": [
- "Microsoft.Management/managementgroups/delete",
- "Microsoft.Management/managementgroups/read",
- "Microsoft.Management/managementgroup/write",
- "Microsoft.Management/managementgroup/subscriptions/delete",
- "Microsoft.Management/managementgroup/subscriptions/write",
+ "Microsoft.Management/managementGroups/delete",
+ "Microsoft.Management/managementGroups/read",
+ "Microsoft.Management/managementGroups/write",
+ "Microsoft.Management/managementGroups/subscriptions/delete",
+ "Microsoft.Management/managementGroups/subscriptions/write",
"Microsoft.resources/subscriptions/read", "Microsoft.Authorization/policyAssignments/*", "Microsoft.Authorization/policyDefinitions/*",
key-vault Multi Region Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/multi-region-replication.md
# Enable multi-region replication on Azure Managed HSM
-Multi-region replication allows you to extend a managed HSM pool from one Azure region (called a primary) to another Azure region (called a secondary). Once configured, both regions are active, able to serve requests and, with automated replication, share the same key material, roles, and permissions. The closest available region to the application receives and fulfills the request, thereby maximizing read throughput and latency. While regional outages are rare, multi-region replication enhances the availability of mission critical cryptographic keys should one region become unavailable. For more information on SLA, visit [SLA for Azure Key Vault Managed HSM](https://azure.microsoft.com/support/legal/sla/key-vault-managed-hsm/v1_0/).
+Multi-region replication allows you to extend a managed HSM pool from one Azure region (called the primary region) to another Azure region (called an extended region). Once configured, both regions are active, able to serve requests and, with automated replication, share the same key material, roles, and permissions. The closest available region to the application receives and fulfills the request, thereby maximizing read throughput and latency. While regional outages are rare, multi-region replication enhances the availability of mission critical cryptographic keys should one region become unavailable. For more information on SLA, visit [SLA for Azure Key Vault Managed HSM](https://azure.microsoft.com/support/legal/sla/key-vault-managed-hsm/v1_0/).
## Architecture :::image type="content" source="../media/multi-region-replication.png" alt-text="Architecture diagram of managed HSM Multi-Region Replication." lightbox="../media/multi-region-replication.png":::
-When multi-region replication is enabled on a managed HSM, a second managed HSM pool, with three load-balanced HSM partitions, is created in the secondary region. When requests are issued to the Traffic Manager global DNS endpoint `<hsm-name>.managedhsm.azure.net`, the closest available region receives and fulfills the request. While each region individually maintains regional high-availability due to the distribution of HSMs across the region, the traffic manager ensures that even if all partitions of a managed HSM in one region are unavailable due to a catastrophe, requests can still be served by the secondary managed HSM pool.
+When multi-region replication is enabled on a managed HSM, a second managed HSM pool, with three load-balanced HSM partitions, is created in an extended region. When requests are issued to the Traffic Manager global DNS endpoint `<hsm-name>.managedhsm.azure.net`, the closest available region receives and fulfills the request. While each region individually maintains regional high-availability due to the distribution of HSMs across the region, the traffic manager ensures that even if all partitions of a managed HSM in one region are unavailable due to a catastrophe, requests can still be served by the managed HSM pool in the extended region.
## Replication latency
Failover occurs when one of the regions in a multi-region Managed HSM becomes un
| Affected Region | Reads Allowed | Writes Allowed | |--|--|--|
-| Secondary | Yes | Yes |
-| Primary | Yes | Maybe |
+| Extended Region | Yes | Yes |
+| Primary Region | Yes | Maybe |
-If the secondary region becomes unavailable, read operations (get key, list keys, all crypto operations, list role assignments) are available if the primary region is alive. Write operations (create and update keys, create and update role assignments, create and update role definitions) are also available.
+If an extended region becomes unavailable, read operations (get key, list keys, all crypto operations, list role assignments) are available if the primary region is alive. Write operations (create and update keys, create and update role assignments, create and update role definitions) are also available.
If the primary region is unavailable, read operations are available, but write operations may not, depending on the scope of the outage. ## Time to failover
-Under the hood, DNS resolution handles the redirection of requests to either the primary or secondary region.
+Under the hood, DNS resolution handles the redirection of requests to either the primary or the extended regions.
If both regions are active, the Traffic Manager resolves incoming requests to the location that has the closest geographical proximity or lowest network latency to the origin of the request. DNS records are configured with a default TTL of 5 seconds.
The following regions are supported as primary regions (Regions where you can re
- US West Central > [!NOTE]
-> US Central, US East, US South Central, West US 2, Switzerland North, West Europe, Central India, Canada Central, Canada East, Japan West, Qatar Central, Poland Central and US West Central cannot be extended as a secondary region at this time. Other regions may be unavailable for extension due to capacity limitations in the region.
+> US Central, US East, US South Central, West US 2, Switzerland North, West Europe, Central India, Canada Central, Canada East, Japan West, Qatar Central, Poland Central and US West Central cannot be extended regions at this time. Other regions may be unavailable for extension due to capacity limitations in the region.
## Billing
-Multi-region replication into secondary region incurs extra billing (x2), as a new HSM pool is consumed in the secondary region. For more information, see [Azure Managed HSM pricing](https://azure.microsoft.com/pricing/details/key-vault).
+Multi-region replication into an extended region incurs extra billing (x2), as a new HSM pool is consumed in an extended region. For more information, see [Azure Managed HSM pricing](https://azure.microsoft.com/pricing/details/key-vault).
## Soft-delete behavior
-The [Managed HSM soft-delete feature](soft-delete-overview.md) allows recovery of deleted HSMs and keys however in a multi-region replication enabled scenario, there are subtle differences where the secondary HSM must be deleted before soft-delete can be executed on the primary HSM. Additionally, when a secondary is deleted, it's purged immediately and doesn't go into a soft-delete state that stops all billing for the secondary. You can always extend to a new region as the secondary from the primary if needed.
+The [Managed HSM soft-delete feature](soft-delete-overview.md) allows recovery of deleted HSMs and keys. However, in a multi-region replication enabled scenario, there are subtle differences: the HSM in the extended region must be deleted before soft-delete can be executed on the primary HSM. Additionally, when an extended region is removed from the primary HSM, the HSM in the removed region is purged instead of entering a soft-delete state, and billing for the purged HSM ends immediately. You can always extend to a new extended region from the primary if needed.
## Private link behavior with Multi-region replication
-The [Azure Private Link feature](private-link.md) allows you to access the Managed HSM service over a private endpoint in your virtual network. You would configure private endpoint on the Managed HSM in the primary region just as you would when not using the multi-region replication feature. For the Managed HSM in the secondary region, it is recommended to create another private endpoint once the Managed HSM in the primary region is replicated to the Managed HSM in the secondary region. This will redirect client requests to the Managed HSM closest to the client location.
+The [Azure Private Link feature](private-link.md) allows you to access the Managed HSM service over a private endpoint in your virtual network. You configure a private endpoint on the Managed HSM in the primary region just as you would when not using the multi-region replication feature. For the Managed HSM in an extended region, it's recommended to create another private endpoint and private DNS zone once the Managed HSM in the primary region is replicated to the extended region. This redirects client requests to the Managed HSM closest to the client location.
-Some scenarios below with examples: Managed HSM in a primary region (UK South) and another Managed HSM in a secondary region (US West Central).
+Some scenarios below with examples: Managed HSM in a primary region (UK South) and another Managed HSM in an extended region (US West Central).
-- When both Managed HSMs in the primary and secondary regions are up and running with private endpoint enabled, client requests are redirected to the Managed HSM closest to client location. Client requests go to the closest region's private endpoint and then directed to the same region's Managed HSM by the traffic manager.
+- When both Managed HSMs in the primary and extended regions are up and running with private endpoints enabled, client requests are redirected to the Managed HSM closest to the client location. Client requests go to the closest region's private endpoint and are then directed to the same region's Managed HSM by the traffic manager.
:::image type="content" source="../media/managed-hsm-multiregion-scenario-1.png" alt-text="Diagram illustrating the first managed HSM multi-region scenario." lightbox="../media/managed-hsm-multiregion-scenario-1.png":::
Some scenarios below with examples: Managed HSM in a primary region (UK South) a
:::image type="content" source="../media/managed-hsm-multiregion-scenario-2.png" alt-text="Diagram illustrating the second managed HSM multi-region scenario." lightbox="../media/managed-hsm-multiregion-scenario-2.png"::: -- Managed HSMs in primary and secondary regions but only one private endpoint configured in either primary or secondary. For a client from a different VNET (VNET1) to connect to a Managed HSM through a private endpoint in a different VNET (VNET2), it requires VNET peering between the two VNETs. You can add VNET link for the private DNS zone which is created during the private endpoint creation.
+- Managed HSMs exist in the primary and extended regions, but only one private endpoint is configured in either the primary or extended region. For a client in one VNET (VNET1) to connect to a Managed HSM through a private endpoint in a different VNET (VNET2), VNET peering is required between the two VNETs. You can add a VNET link for the private DNS zone that is created during private endpoint creation.
:::image type="content" source="../media/managed-hsm-multiregion-scenario-3.png" alt-text="Diagram illustrating the third managed HSM multi-region scenario." lightbox="../media/managed-hsm-multiregion-scenario-3.png":::
In the diagram below, private endpoint is created only in the UK South region, o
### Azure CLI commands
-If creating a new Managed HSM pool and then extending to a secondary, refer to [these instructions](quick-create-cli.md#create-a-managed-hsm) prior to extending. If extending from an already existing Managed HSM pool, then use the following instructions to create a secondary HSM into another region.
+If creating a new Managed HSM pool and then extending to an extended region, refer to [these instructions](quick-create-cli.md#create-a-managed-hsm) prior to extending. If extending from an already existing Managed HSM pool, then use the following instructions to extend the HSM pool into an extended region.
> [!NOTE]
> These commands require Azure CLI version 2.48.1 or higher. To install the latest version, see [How to install the Azure CLI](/cli/azure/install-azure-cli).
-### Add a secondary HSM in another region
+### Extend a primary HSM into an extended region
-To extend a managed HSM pool to another region, run the following command that will automatically create a second HSM.
+To extend a managed HSM pool to another region, run the following command, which automatically creates a new HSM in the extended region.
```azurecli-interactive
az keyvault region add --hsm-name "ContosoMHSM" --region "australiaeast"
```

> [!NOTE]
-> "ContosoMHSM" in this example is the primary HSM pool name; "australiaeast" is the secondary region into which you are extending it.
+> "ContosoMHSM" in this example is the primary HSM pool name; "australiaeast" is the extended region into which you are extending it.
-### Remove a secondary HSM in another region
+### Remove an extended region from the primary HSM
-Once you remove a secondary HSM, the HSM partitions in the other region will be purged. All secondaries must be deleted before a primary managed HSM can be soft-deleted or purged. Only secondaries can be deleted using this command. The primary can only be deleted using the [soft-delete](soft-delete-overview.md#soft-delete-behavior) and [purge](soft-delete-overview.md#purge-protection) commands
+Once you remove an extended region, the HSM partitions in that region are purged. All extended regions must be removed before a primary managed HSM can be soft-deleted or purged. Only extended regions can be removed by using this command. The primary can only be deleted by using the [soft-delete](soft-delete-overview.md#soft-delete-behavior) and [purge](soft-delete-overview.md#purge-protection) commands.
```azurecli-interactive
az keyvault region remove --hsm-name ContosoMHSM --region australiaeast
```
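To confirm the result, you can list the regions that the HSM pool currently spans. This is a minimal sketch that assumes the `az keyvault region list` command from the same `az keyvault region` command group is available in your CLI version.

```azurecli-interactive
# List the regions (primary and extended) that the ContosoMHSM pool currently spans.
az keyvault region list --hsm-name "ContosoMHSM" -o table
```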
logic-apps Sap Create Example Scenario Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/connectors/sap-create-example-scenario-workflows.md
Both Standard and Consumption logic app workflows offer the SAP *managed* connec
## Prerequisites
-Before you start, make sure to [review and meet the SAP connector requirements](sap.md#prerequisites) for your specific scenario.
+- Before you start, make sure to [review and meet the SAP connector requirements](sap.md#prerequisites) for your specific scenario.
+ <a name="receive-messages-sap"></a>
To create a logic app workflow that sends an IDoc to an SAP server and returns a
### Add the Request trigger
-To have your workflow receive IDocs from SAP over XML HTTP, you can use the [Request built-in trigger](../../connectors/connectors-native-reqres.md). This trigger creates an endpoint with a URL where your SAP server can send HTTP POST requests to your workflow. When your workflow receives these requests, the trigger fires and runs the next step in your workflow.
+To have your workflow receive IDocs from SAP over XML HTTP, you can use the [**Request** built-in trigger](../../connectors/connectors-native-reqres.md). This trigger creates an endpoint with a URL where your SAP server can send HTTP POST requests to your workflow. When your workflow receives these requests, the trigger fires and runs the next step in your workflow.
To receive IDocs over Common Programming Interface Communication (CPIC) as plain XML or as a flat file, review the section, [Receive message from SAP](#receive-messages-sap).
Based on whether you have a Consumption workflow in multitenant Azure Logic Apps
### Add an SAP action to send an IDoc
-Next, create an action to send your IDoc to SAP when the workflow's request trigger fires. Based on whether you have a Consumption workflow in multitenant Azure Logic Apps or a Standard workflow in single-tenant Azure Logic Apps, follow the corresponding steps:
+Next, create an action to send your IDoc to SAP when the workflow's **Request** trigger fires. Based on whether you have a Consumption workflow in multitenant Azure Logic Apps or a Standard workflow in single-tenant Azure Logic Apps, follow the corresponding steps:
### [Consumption](#tab/consumption)
-1. In the workflow designer, under the Request trigger, select **New step**.
+1. In the workflow designer, under the **Request** trigger, select **New step**.
1. In the designer, [follow these general steps to find and add the SAP managed action named **Send message to SAP**](../create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
Next, create an action to send your IDoc to SAP when the workflow's request trig
For more information about IDoc messages, review [Message schemas for IDoc operations](/biztalk/adapters-and-accelerators/adapter-sap/message-schemas-for-idoc-operations).
- 1. In the **Send message to SAP** action, include the body output from the Request trigger.
+ 1. In the **Send message to SAP** action, include the body output from the **Request** trigger.
1. In the **Input Message** parameter, select inside the edit box to open the dynamic content list.
- 1. From the dynamic content list, under **When a HTTP request is received**, select **Body**. The **Body** field contains the body output from the Request trigger.
+ 1. From the dynamic content list, under **When a HTTP request is received**, select **Body**. The **Body** field contains the body output from the **Request** trigger.
> [!NOTE] > If the **Body** field doesn't appear in the list, next to the **When a HTTP request is received** label, select **See more**. ![Screenshot shows selecting the Request trigger's output named Body for Consumption workflow.](./media/sap-create-example-scenario-workflows/sap-send-message-select-body-consumption.png)
- The **Send message to SAP** action now includes the body content from the Request trigger and sends that output to your SAP server, for example:
+ The **Send message to SAP** action now includes the body content from the **Request** trigger and sends that output to your SAP server, for example:
![Screenshot shows completed SAP action for Consumption workflow.](./media/sap-create-example-scenario-workflows/sap-send-message-complete-consumption.png)
Next, create an action to send your IDoc to SAP when the workflow's request trig
### [Standard](#tab/standard)
-1. In the workflow designer, under the Request trigger, select the plus sign (**+**) > **Add an action**.
+1. In the workflow designer, under the **Request** trigger, select the plus sign (**+**) > **Add an action**.
1. In the designer, [follow these general steps to find and add the SAP built-in action named **[IDoc] Send document to SAP**](../create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
Next, create an action to send your IDoc to SAP when the workflow's request trig
1. In the **Plain XML IDoc** parameter, select inside the edit box, and open the dynamic content list (lightning icon).
- 1. From the dynamic content list, under **When a HTTP request is received**, select **Body**. The **Body** field contains the body output from the Request trigger.
+ 1. From the dynamic content list, under **When a HTTP request is received**, select **Body**. The **Body** field contains the body output from the **Request** trigger.
> [!NOTE] > If the **Body** field doesn't appear in the list, next to the **When a HTTP request is received** label, select **See more**. ![Screenshot shows selecting the Request trigger's output named Body for Standard workflow.](./media/sap-create-example-scenario-workflows/sap-send-idoc-select-body-standard.png)
- The **[IDoc] Send document to SAP** action now includes the body content from the Request trigger and sends that output to your SAP server, for example:
+ The **[IDoc] Send document to SAP** action now includes the body content from the **Request** trigger and sends that output to your SAP server, for example:
![Screenshot shows completed SAP action for Standard workflow.](./media/sap-create-example-scenario-workflows/sap-send-idoc-complete-standard.png)
Now, set up your workflow to return the results from your SAP server to the orig
### Create a remote function call (RFC) request-response pattern
-For the Consumption workflows that use the SAP managed connector and ISE-versioned SAP connector, if you have to receive replies by using a remote function call (RFC) to Azure Logic Apps from SAP ABAP, you must implement a request and response pattern. To receive IDocs in your workflow when you use the [Request trigger](../../connectors/connectors-native-reqres.md), make sure that the workflow's first action is a [Response action](../../connectors/connectors-native-reqres.md#add-response) that uses the **200 OK** status code without any content. This recommended step immediately completes the SAP Logical Unit of Work (LUW) asynchronous transfer over tRFC, which leaves the SAP CPIC conversation available again. You can then add more actions to your workflow for processing the received IDoc without blocking later transfers.
+For the Consumption workflows that use the SAP managed connector and ISE-versioned SAP connector, if you have to receive replies by using a remote function call (RFC) to Azure Logic Apps from SAP ABAP, you must implement a request and response pattern. To receive IDocs in your workflow when you use the [**Request** trigger](../../connectors/connectors-native-reqres.md), make sure that the workflow's first action is a [Response action](../../connectors/connectors-native-reqres.md#add-response) that uses the **200 OK** status code without any content. This recommended step immediately completes the SAP Logical Unit of Work (LUW) asynchronous transfer over tRFC, which leaves the SAP CPIC conversation available again. You can then add more actions to your workflow for processing the received IDoc without blocking later transfers.
> [!NOTE] >
In the following example, the `STFC_CONNECTION` RFC module generates a request a
1. If your Consumption logic app resource isn't already enabled, on your logic app menu, select **Overview**. On the toolbar, select **Enable**.
-1. On the designer toolbar, select **Run Trigger** > **Run** to manually start your workflow.
+1. On the designer toolbar, select **Run** > **Run** to manually start your workflow.
-1. To simulate a webhook trigger payload, send an HTTP POST request to the endpoint URL that's specified by your workflow's Request trigger. Make sure to include your message content with your request. To send the request, use a local tool or app tool such as [Insomnia](https://insomnia.rest/) or [Bruno](https://www.usebruno.com/).
+1. To simulate a webhook trigger payload and trigger the workflow, send an HTTP request to the endpoint URL created by your workflow's **Request** trigger, including the method that the **Request** trigger expects, by using your HTTP request tool and its instructions. Make sure to include your message content with your request.
- For this example, the HTTP POST request sends an IDoc file, which must be in XML format and include the namespace for the SAP action that you selected, for example:
+ This example uses the **POST** method and the endpoint URL to send an IDoc file, which must be in XML format and include the namespace for the SAP action that you selected, for example:
```xml <?xml version="1.0" encoding="UTF-8" ?>
You've now created a workflow that can communicate with your SAP server. Now tha
1. Return to the workflow level. On the workflow menu, select **Overview**. On the toolbar, select **Run** > **Run** to manually start your workflow.
-1. To simulate a webhook trigger payload, send an HTTP POST request to the endpoint URL that's specified by your workflow's Request trigger. Make sure to your message content with your request. To send the request, use a local tool or app such as [Insomnia](https://insomnia.rest/) or [Bruno](https://www.usebruno.com/).
+1. To simulate a webhook trigger payload and trigger the workflow, send an HTTP request to the endpoint URL created by your workflow's **Request** trigger, including the method that the **Request** trigger expects, by using your HTTP request tool and its instructions. Make sure to include your message content with your request.
- For this example, the HTTP POST request sends an IDoc file, which must be in XML format and include the namespace for the SAP action that you selected, for example:
+ This example uses the **POST** method and the endpoint URL to send an IDoc file, which must be in XML format and include the namespace for the SAP action that you selected, for example:
```xml <?xml version="1.0" encoding="UTF-8" ?>
When you connect to SAP from Azure Logic Apps, English is the default language u
However, you can set the language for your connection by using the [standard HTTP header `Accept-Language`](https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.4) with your inbound requests. Most web browsers add an `Accept-Language` header based on your locale settings. The web browser applies this header when you create a new SAP connection in the workflow designer. So, you can either update your web browser's settings to use your preferred language, or you can create your SAP connection using Azure Resource Manager instead of the workflow designer.
- For example, you can send a request with the `Accept-Language` header to your logic app workflow by using the Request trigger named **When a HTTP request is received**. All the actions in your workflow receive the header. Then, SAP uses the specified languages in its system messages, such as BAPI error messages. If you don't pass an `Accept-Language` header at run time, by default, English is used.
+ For example, you can send a request with the `Accept-Language` header to your logic app workflow by using the **Request** trigger named **When a HTTP request is received**. All the actions in your workflow receive the header. Then, SAP uses the specified languages in its system messages, such as BAPI error messages. If you don't pass an `Accept-Language` header at run time, by default, English is used.
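   For illustration only, the following sketch shows how such a request might look with curl; the endpoint URL and payload file are placeholders, not values from this article.

   ```bash
   # Hypothetical call to a Request trigger endpoint with a German locale.
   # Replace the URL and body with your workflow's actual values.
   curl -X POST "https://<your-request-trigger-URL>" \
     -H "Content-Type: application/xml" \
     -H "Accept-Language: de-DE" \
     --data-binary "@idoc-message.xml"
   ```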
If you use the `Accept-Language` header, you might get the following error: **Please check your account info and/or permissions and try again.** In this case, check the SAP component's error logs instead. The error actually happens in the SAP component that uses the header, so you might get one of these error messages:
In Standard workflows, the SAP built-in connector also has actions that separate
The following example workflow shows this pattern:
-1. Create and open a Consumption or Standard logic app with a blank workflow in the designer. Add the Request trigger.
+1. Create and open a Consumption or Standard logic app with a blank workflow in the designer. Add the **Request** trigger.
1. To help avoid sending duplicate IDocs to SAP, [follow these alternative steps to create and use an IDoc transaction ID in your SAP actions](#create-transaction-ID-variable).
The following example workflow shows this pattern:
If you experience a problem with your workflow sending duplicate IDocs to SAP, you can create a string variable that serves as an IDoc transaction identifier. You can then use this identifier to help prevent duplicate network transmissions in conditions such as temporary outages, network issues, or lost acknowledgments.
-1. In the designer, after you add the Request trigger, and before you add the SAP action named **[IDOC] Send document to SAP**, add the action named **Initialize variable** to your workflow.
+1. In the designer, after you add the **Request** trigger, and before you add the SAP action named **[IDOC] Send document to SAP**, add the action named **Initialize variable** to your workflow.
1. Rename the action to **Create IDoc transaction ID**.
logic-apps Sap Generate Schemas For Artifacts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/connectors/sap-generate-schemas-for-artifacts.md
Both Standard and Consumption logic app workflows offer the SAP *managed* connec
- If you want to upload your generated schemas to a repository, such as an [integration account](../logic-apps-enterprise-integration-create-integration-account.md), make sure that the repository already exists.
+
## Generate schemas for an SAP artifact
The following example logic app workflow triggers when the workflow's SAP trigger receives a request from an SAP server. The workflow then runs an SAP action that generates schemas for the specified SAP artifact.
Based on whether you have a Consumption workflow in multitenant Azure Logic Apps
1. If your Consumption logic app resource isn't already enabled, on your logic app menu, select **Overview**. On the toolbar, select **Enable**.
-1. On the designer toolbar, select **Run Trigger** > **Run** to manually start your workflow.
+1. On the designer toolbar, select **Run** > **Run** to manually start your workflow.
-1. To simulate a webhook trigger payload, send an HTTP POST request to the endpoint URL that's specified by your workflow's Request trigger. To send the request, use a local tool or app such as [Insomnia](https://insomnia.rest/) or [Bruno](https://www.usebruno.com/).
+1. To simulate a webhook trigger payload and trigger the workflow, send an HTTP request to the endpoint URL created by your workflow's **Request** trigger, including the method that the **Request** trigger expects, by using your HTTP request tool and its instructions. Make sure to include your message content with your request.
- For this example, the HTTP POST request sends an IDoc file, which must be in XML format and include the namespace for the SAP action that you selected, for example:
+ This example uses the **POST** method and the endpoint URL to send an IDoc file, which must be in XML format and include the namespace for the SAP action that you selected, for example:
```xml <?xml version="1.0" encoding="UTF-8" ?>
For more information about reviewing workflow run history, see [Monitor logic ap
1. Return to the workflow level. On the workflow menu, select **Overview**. On the toolbar, select **Run** > **Run** to manually start your workflow.
-1. To simulate a webhook trigger payload, send an HTTP POST request to the endpoint URL that's specified by your workflow's Request trigger. To send the request, use a local tool or app such as [Insomnia](https://insomnia.rest/) or [Bruno](https://www.usebruno.com/).
+1. To simulate a webhook trigger payload and trigger the workflow, send an HTTP request to the endpoint URL created by your workflow's **Request** trigger, including the method that the **Request** trigger expects, by using your HTTP request tool and its instructions. Make sure to include your message content with your request.
- For this example, the HTTP POST request sends an IDoc file, which must be in XML format and include the namespace for the SAP action that you selected, for example:
+ This example uses the **POST** method and the endpoint URL to send an IDoc file, which must be in XML format and include the namespace for the SAP action that you selected, for example:
```xml <?xml version="1.0" encoding="UTF-8" ?>
logic-apps Create Single Tenant Workflows Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-azure-portal.md
In single-tenant Azure Logic Apps, workflows in the same logic app resource and
If you don't have an Office 365 account, you can use [any other available email connector](/connectors/connector-reference/connector-reference-logicapps-connectors) that can send messages from your email account, for example, Outlook.com. If you use a different email connector, you can still follow the example, and the general overall steps are the same. However, your options might differ in some ways. For example, if you use the Outlook.com connector, use your personal Microsoft account instead to sign in.
-* To test the example workflow in this guide, you need a local tool or app that can send calls to the endpoint created by the Request trigger. For example, you can use local tools such as [Insomnia](https://insomnia.rest/) or [Bruno](https://www.usebruno.com/) to send the HTTP request.
* If you create your logic app resource and enable [Application Insights](../azure-monitor/app/app-insights-overview.md), you can optionally enable diagnostics logging and tracing for your logic app. You can do so either when you create your logic app or after deployment. You need to have an Application Insights instance, but you can create this resource either [in advance](../azure-monitor/app/create-workspace-resource.md), when you create your logic app, or after deployment.
So now you'll add a trigger that starts your workflow.
## Add a trigger
-This example workflow starts with the [built-in Request trigger](../connectors/connectors-native-reqres.md) named **When a HTTP request is received**. This trigger creates an endpoint that other services or logic app workflows can call and waits for those inbound calls or requests to arrive. Built-in operations run natively and directly within the Azure Logic Apps runtime.
+This example workflow starts with the [built-in **Request** trigger](../connectors/connectors-native-reqres.md) named **When a HTTP request is received**. This trigger creates an endpoint that other services or logic app workflows can call and waits for those inbound calls or requests to arrive. Built-in operations run natively and directly within the Azure Logic Apps runtime.
1. On the workflow designer, make sure that your blank workflow is open and that the **Add a trigger** prompt is selected on the designer surface.
-1. By using **request** as the search term, [follow these steps to add the built-in Request trigger named **When a HTTP request is received**](create-workflow-with-trigger-or-action.md?tabs=standard#add-trigger) to your workflow.
+1. By using **request** as the search term, [follow these steps to add the built-in **Request** trigger named **When a HTTP request is received**](create-workflow-with-trigger-or-action.md?tabs=standard#add-trigger) to your workflow.
When the trigger appears on the designer, the trigger's information pane opens to show the trigger's properties, settings, and other actions.
This example workflow starts with the [built-in Request trigger](../connectors/c
1. Save your workflow. On the designer toolbar, select **Save**.
- When you save a workflow for the first time, and that workflow starts with a Request trigger, Azure Logic Apps automatically generates a URL for an endpoint that's created by the Request trigger. Later, when you test your workflow, you send a request to this URL, which fires the trigger and starts the workflow run.
+ When you save a workflow for the first time, and that workflow starts with a **Request** trigger, Azure Logic Apps automatically generates a URL for an endpoint that's created by the **Request** trigger. Later, when you test your workflow, you send a request to this URL, which fires the trigger and starts the workflow run.
## Add an action
Before you deploy your logic app and run your workflow in the Azure portal, if y
To find the inbound and outbound IP addresses used by your logic app and workflows, follow these steps:
-1. On your logic app menu, under **Settings**, select **Networking (preview)**.
+1. On your logic app menu, under **Settings**, select **Networking**.
1. On the networking page, find and review the **Inbound Traffic** and **Outbound Traffic** sections.
To find the fully qualified domain names (FQDNs) for connections, follow these s
## Trigger the workflow
-In this example, the workflow runs when the Request trigger receives an inbound request, which is sent to the URL for the endpoint that's created by the trigger. When you saved the workflow for the first time, Azure Logic Apps automatically generated this URL. So, before you can send this request to trigger the workflow, you need to find this URL.
+In this example, the workflow runs when the **Request** trigger receives an inbound request, which is sent to the URL for the endpoint that's created by the trigger. When you saved the workflow for the first time, Azure Logic Apps automatically generated this URL. So, before you can send this request to trigger the workflow, you need to find this URL.
-1. On the workflow designer, select the Request trigger that's named **When a HTTP request is received**.
+1. On the workflow designer, select the **Request** trigger that's named **When a HTTP request is received**.
1. After the information pane opens, on the **Parameters** tab, find the **HTTP POST URL** property. To copy the generated URL, select the **Copy Url** (copy file icon), and save the URL somewhere else for now. The URL follows this format:
In this example, the workflow runs when the Request trigger receives an inbound
> 1. To copy the endpoint URL, move your pointer over the end of the endpoint URL text, > and select **Copy to clipboard** (copy file icon).
-1. To test the URL by sending a request and triggering the workflow, open your preferred tool or app, and follow their instructions for creating and sending HTTP requests.
+1. To test the endpoint URL and trigger the workflow, send an HTTP request to the URL, including the method that the **Request** trigger expects, by using your HTTP request tool and its instructions.
- For this example, use the **GET** method with the copied URL, which looks like the following sample:
+ This example uses the **GET** method with the copied URL, which looks like the following sample:
**`GET https://fabrikam-workflows.azurewebsites.net:443/api/Fabrikam-Stateful-Workflow/triggers/manual/invoke?api-version=2020-05-01&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=xxxxxXXXXxxxxxXXXXxxxXXXXxxxxXXXX`**
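   For instance, if curl is your HTTP request tool, the call might look like the following sketch, using the sample URL shown above:

   ```bash
   # Send a GET request to the endpoint URL generated by the Request trigger.
   curl "https://fabrikam-workflows.azurewebsites.net:443/api/Fabrikam-Stateful-Workflow/triggers/manual/invoke?api-version=2020-05-01&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=xxxxxXXXXxxxxxXXXXxxxXXXXxxxxXXXX"
   ```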
logic-apps Create Single Tenant Workflows Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-visual-studio-code.md
This how-to guide shows how to create an example integration workflow that runs
For more information about single-tenant Azure Logic Apps, review [Single-tenant versus multitenant and integration service environment](single-tenant-overview-compare.md#resource-environment-differences).
-While the example workflow is cloud-based and has only two steps, you can create workflows from hundreds of operations that can connect a wide range of apps, data, services, and systems across cloud, on premises, and hybrid environments. The example workflow starts with the built-in Request trigger and follows with an Office 365 Outlook action. The trigger creates a callable endpoint for the workflow and waits for an inbound HTTPS request from any caller. When the trigger receives a request and fires, the next action runs by sending email to the specified email address along with selected outputs from the trigger.
+While the example workflow is cloud-based and has only two steps, you can create workflows from hundreds of operations that can connect a wide range of apps, data, services, and systems across cloud, on premises, and hybrid environments. The example workflow starts with the built-in **Request** trigger and follows with an Office 365 Outlook action. The trigger creates a callable endpoint for the workflow and waits for an inbound HTTPS request from any caller. When the trigger receives a request and fires, the next action runs by sending email to the specified email address along with selected outputs from the trigger.
> [!TIP]
+>
> If you don't have an Office 365 account, you can use any other available action > that can send messages from your email account, for example, Outlook.com. >
As you progress, you'll complete these high-level tasks:
1. To locally run webhook-based triggers and actions, such as the [built-in HTTP Webhook trigger](../connectors/connectors-native-webhook.md), in Visual Studio Code, you need to [set up forwarding for the callback URL](#webhook-setup).
-1. To test the example workflow in this guide, you need a local tool or app that can send calls to the endpoint created by the Request trigger. For example, you can use local tools such as [Insomnia](https://insomnia.rest/) or [Bruno](https://www.usebruno.com/) to send the HTTP request.
- 1. If you create your logic app resources with settings that support using [Application Insights](../azure-monitor/app/app-insights-overview.md), you can optionally enable diagnostics logging and tracing for your logic app resource. You can do so either when you create your logic app or after deployment. You need to have an Application Insights instance, but you can create this resource either [in advance](../azure-monitor/app/create-workspace-resource.md), when you create your logic app, or after deployment.
+1. Install or use a tool that can send HTTP requests to test your solution, for example:
+
+ [!INCLUDE [api-test-http-request-tools](../../includes/api-test-http-request-tools.md)]
+ <a name="set-up"></a> ## Set up Visual Studio Code
The workflow in this example uses the following trigger and actions:
1. On the workflow designer, in the **Add a trigger** pane, open the **Runtime** list, and select **In-App** so that you view only the available built-in connector triggers.
-1. Find the Request trigger named **When an HTTP request is received** by using the search box, and add that trigger to your workflow. For more information, see [Build a workflow with a trigger and actions](create-workflow-with-trigger-or-action.md?tabs=standard#add-trigger).
+1. Find the **Request** trigger named **When an HTTP request is received** by using the search box, and add that trigger to your workflow. For more information, see [Build a workflow with a trigger and actions](create-workflow-with-trigger-or-action.md?tabs=standard#add-trigger).
![Screenshot shows workflow designer, Add a trigger pane, and selected trigger named When an HTTP request is received.](./media/create-single-tenant-workflows-visual-studio-code/add-request-trigger.png)
If you need to delete an item from the designer, [follow these steps for deletin
### Add the Office 365 Outlook action
-1. On the designer, under the Request trigger, select the plus sign (**+**) > **Add an action**.
+1. On the designer, under the **Request** trigger, select the plus sign (**+**) > **Add an action**.
1. In the **Add an action** pane that opens, from the **Runtime** list, select **Shared** so that you view only the available managed connector actions.
For general information, see [Breakpoints - Visual Studio Code](https://code.vis
## Run, test, and debug locally
-To test your logic app workflow, follow these steps to start a debugging session, and find the URL for the endpoint that's created by the Request trigger. You need this URL so that you can later send a request to that endpoint.
+To test your logic app workflow, follow these steps to start a debugging session, and find the URL for the endpoint that's created by the **Request** trigger. You need this URL so that you can later send a request to that endpoint.
1. To debug a stateless workflow more easily, you can [enable the run history for that workflow](#enable-run-history-stateless).
To test your logic app workflow, follow these steps to start a debugging session
> If you get the error, **"Error exists after running preLaunchTask 'generateDebugSymbols'"**, > see the troubleshooting section, [Debugging session fails to start](#debugging-fails-to-start).
-1. Now, find the callback URL for the endpoint on the Request trigger.
+1. Now, find the callback URL for the endpoint on the **Request** trigger.
1. Reopen the Explorer pane so that you can view your project.
To test your logic app workflow, follow these steps to start a debugging session
![Screenshot shows Explorer pane, workflow.json file's shortcut menu with selected option, Overview.](./media/create-single-tenant-workflows-visual-studio-code/open-workflow-overview.png)
- 1. Find the **Callback URL** value, which looks similar to this URL for the example Request trigger:
+ 1. Find the **Callback URL** value, which looks similar to this URL for the example **Request** trigger:
`http://localhost:7071/api/<workflow-name>/triggers/manual/invoke?api-version=2020-05-01&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=<shared-access-signature>`
To test your logic app workflow, follow these steps to start a debugging session
1. Copy and save the **Callback URL** property value.
-1. To test the callback URL by sending a request and triggering the workflow, open your preferred tool or app, and follow their instructions for creating and sending HTTP requests.
+1. To test the callback URL and trigger the workflow, send an HTTP request to the URL, including the method that the **Request** trigger expects, by using your HTTP request tool and its instructions.
- For this example, use the **GET** method with the copied URL, which looks like the following sample:
+ This example uses the **GET** method with the copied URL, which looks like the following sample:
**`GET http://localhost:7071/api/Stateful-Workflow/triggers/manual/invoke?api-version=2020-05-01&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=<shared-access-signature>`**
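   For instance, if curl is your HTTP request tool, the call might look like the following sketch:

   ```bash
   # Send a GET request to the locally running workflow's callback URL.
   # Replace <shared-access-signature> with the value from your Callback URL.
   curl "http://localhost:7071/api/Stateful-Workflow/triggers/manual/invoke?api-version=2020-05-01&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=<shared-access-signature>"
   ```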
To test your logic app workflow, follow these steps to start a debugging session
## Return a response
-When you have a workflow that starts with the Request trigger, you can return a response to the caller that sent a request to your workflow by using the [Request built-in action named **Response**](../connectors/connectors-native-reqres.md).
+When you have a workflow that starts with the **Request** trigger, you can return a response to the caller that sent a request to your workflow by using the [Request built-in action named **Response**](../connectors/connectors-native-reqres.md).
1. In the workflow designer, under the **Send an email** action, select the plus sign (**+**) > **Add an action**.
logic-apps Logic Apps Enterprise Integration Flatfile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-flatfile.md
For more information, review the following documentation:
* The logic app workflow, blank or existing, where you want to use the **Flat File** action.
- If you have a blank workflow, use any trigger that you want to start the workflow. This example uses the Request trigger.
+ If you have a blank workflow, use any trigger that you want to start the workflow. This example uses the **Request** trigger.
* Your logic app resource and workflow. Flat file operations don't have any triggers available, so your workflow has to minimally include a trigger. For more information, see the following documentation:
For more information, review the following documentation:
So, if you don't have or need an integration account, you can use the upload option. Otherwise, you can use the linking option. Either way, you can use these artifacts across all child workflows within the same logic app resource.
+
## Limitations
* XML content that you want to decode must be encoded in UTF-8 format.
After you create your schema, you now have to upload the schema based on the fol
1. If your workflow doesn't have a trigger or any other actions that your workflow needs, add those operations first. Flat File operations don't have any triggers available.
- This example continues with the Request trigger named **When a HTTP request is received**.
+ This example continues with the **Request** trigger named **When a HTTP request is received**.
1. On the workflow designer, under the step where you want to add the Flat File action, select **New step**.
After you create your schema, you now have to upload the schema based on the fol
1. If your workflow doesn't have a trigger or any other actions that your workflow needs, add those operations first. Flat File operations don't have any triggers available.
- This example continues with the Request trigger named **When a HTTP request is received**.
+ This example continues with the **Request** trigger named **When a HTTP request is received**.
1. On the designer, under the step where you want to add the Flat File action, select the plus sign (**+**), and then select **Add an action**.
After you create your schema, you now have to upload the schema based on the fol
1. If your workflow doesn't have a trigger or any other actions that your workflow needs, add those operations first. Flat File operations don't have any triggers available.
- This example continues with the Request trigger named **When a HTTP request is received**.
+ This example continues with the **Request** trigger named **When a HTTP request is received**.
1. On the workflow designer, under the step where you want to add the Flat File action, select **New step**.
After you create your schema, you now have to upload the schema based on the fol
1. If your workflow doesn't have a trigger or any other actions that your workflow needs, add those operations first. Flat File operations don't have any triggers available.
- This example continues with the Request trigger named **When a HTTP request is received**.
+ This example continues with the **Request** trigger named **When a HTTP request is received**.
1. On the designer, under the step where you want to add the Flat File action, select the plus sign (**+**), and then select **Add an action**.
You're now done with setting up your flat file decoding action. In a real world
## Test your workflow
-1. To send a call to the Request trigger's URL, which appears in the Request trigger's **HTTP POST URL** property, follow these steps:
+To trigger your workflow, follow these steps:
+
+1. In the **Request** trigger, find the **HTTP POST URL** property, and copy the URL.
- 1. Use a local tool or app such as [Insomnia](https://insomnia.rest/) or [Bruno](https://www.usebruno.com/) to send the HTTP request.
+1. Open your HTTP request tool and use its instructions to send an HTTP request to the copied URL, including the method that the **Request** trigger expects.
- 1. Send the HTTP request using the **`POST`** method with the URL.
+ This example uses the **`POST`** method with the URL.
- 1. Include the XML content that you want to encode or decode in the request body.
+1. Include the XML content that you want to encode or decode in the request body. A sample request appears after these steps.
1. After your workflow finishes running, go to the workflow's run history, and examine the Flat File action's inputs and outputs.
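As a reference for the request described in the previous steps, the call might look like the following curl sketch; the URL and file name are placeholders for your trigger's **HTTP POST URL** and your own XML content, not values from this article.

```bash
# Hypothetical example: POST the XML content to the Request trigger's HTTP POST URL.
curl -X POST "https://<your-http-post-url>" \
  -H "Content-Type: application/xml" \
  --data-binary "@sample-content.xml"
```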
logic-apps Logic Apps Enterprise Integration Liquid Transform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-liquid-transform.md
For more information, review the following documentation:
> which differs in specific cases from the [Shopify implementation for Liquid](https://shopify.github.io/liquid).
> For more information, see [Liquid template considerations](#liquid-template-considerations).
+
<a name="create-template"></a>
## Step 1: Create the template
The following steps show how to add a Liquid transformation action for Consumpti
1. If your workflow doesn't have a trigger or any other actions that your workflow needs, add those operations first. Liquid operations don't have any triggers available.
- This example continues with the Request trigger named **When a HTTP request is received**.
+ This example continues with the **Request** trigger named **When a HTTP request is received**.
1. On the workflow designer, under the step where you want to add the Liquid action, select **New step**.
The following steps show how to add a Liquid transformation action for Consumpti
1. If your workflow doesn't have a trigger or any other actions that your workflow needs, add those operations first. Liquid operations don't have any triggers available.
- This example continues with the Request trigger named **When a HTTP request is received**.
+ This example continues with the **Request** trigger named **When a HTTP request is received**.
1. On the designer, under the step where you want to add the Liquid action, select the plus sign (**+**), and then select **Add an action**.
The following steps show how to add a Liquid transformation action for Consumpti
## Test your workflow
-1. To send a call to the Request trigger's URL, which appears in the Request trigger's **HTTP POST URL** property, follow these steps:
+To trigger your workflow, follow these steps:
+
+1. In the **Request** trigger, find the **HTTP POST URL** property, and copy the URL.
- 1. Use a local tool or app such as [Insomnia](https://insomnia.rest/) or [Bruno](https://www.usebruno.com/) to send the HTTP request.
+1. Open your HTTP request tool and use its instructions to send an HTTP request to the copied URL, including the method that the **Request** trigger expects.
- 1. Send the HTTP request using the **`POST`** method with the URL.
+ This example uses the **`POST`** method with the URL.
- 1. Include the JSON input to transform, for example:
+1. Include the JSON input to transform, for example:
- ```json
- {
- "devices": "Surface, Mobile, Desktop computer, Monitors",
- "firstName": "Dean",
- "lastName": "Ledet",
- "phone": "(111)0001111"
- }
- ```
+ ```json
+ {
+ "devices": "Surface, Mobile, Desktop computer, Monitors",
+ "firstName": "Dean",
+ "lastName": "Ledet",
+ "phone": "(111)0001111"
+ }
+ ```
1. After your workflow finishes running, go to the workflow's run history, and examine the **Transform JSON to JSON** action's inputs and outputs, for example:
logic-apps Logic Apps Http Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-http-endpoint.md
This guide shows how to create a callable endpoint for your workflow by adding t
* An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* A logic app workflow where you want to use the request-based trigger to create the callable endpoint. You can start with either a blank workflow or an existing workflow where you can replace the current trigger. This example starts with a blank workflow.
+* A logic app workflow where you want to use the **Request** trigger to create the callable endpoint. You can start with either a blank workflow or an existing workflow where you can replace the current trigger. This example starts with a blank workflow.
-* To test the URL for the callable endpoint that you create, you'll need a local tool or app such as [Insomnia](https://insomnia.rest/) or [Bruno](https://www.usebruno.com/) to send the HTTP request.
## Create a callable endpoint
Based on whether you have a Standard or Consumption logic app workflow, follow t
:::image type="content" source="./media/logic-apps-http-endpoint/find-trigger-url-standard.png" alt-text="Screenshot shows Standard workflow and Overview page with workflow URL." lightbox="./media/logic-apps-http-endpoint/find-trigger-url-standard.png":::
-1. To test the callback URL that you now have for the Request trigger, use a local tool or app such as [Insomnia](https://insomnia.rest/) or [Bruno](https://www.usebruno.com/), and send the request using the method that the Request trigger expects.
+1. To test the callback URL and trigger the workflow, send an HTTP request to the URL, including the method that the **Request** trigger expects, by using your HTTP request tool and its instructions.
- This example uses the `POST` method:
+ This example uses the **POST** method with the copied URL, which looks like the following sample:
`POST https://{logic-app-name}.azurewebsites.net:443/api/{workflow-name}/triggers/{trigger-name}/invoke?api-version=2022-05-01&sp=%2Ftriggers%2F{trigger-name}%2Frun&sv=1.0&sig={shared-access-signature}`
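   If curl is your HTTP request tool, the call might look like the following sketch. The URL placeholder stands for the callback URL format shown above; substitute your actual values before running.

   ```bash
   # Send a POST request to the Request trigger's callback URL.
   curl -X POST "<your-callback-URL>" \
     -H "Content-Type: application/json" \
     -d '{}'
   ```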
Based on whether you have a Standard or Consumption logic app workflow, follow t
:::image type="content" source="./media/logic-apps-http-endpoint/find-trigger-url-consumption.png" alt-text="Screenshot shows Consumption logic app Overview page with workflow URL." lightbox="./media/logic-apps-http-endpoint/find-trigger-url-consumption.png":::
-1. To test the callback URL that you now have for the Request trigger, use a local tool or app such as [Insomnia](https://insomnia.rest/) or [Bruno](https://www.usebruno.com/), and send the request using the method that the Request trigger expects.
+1. To test the callback URL and trigger the workflow, send an HTTP request to the URL, including the method that the **Request** trigger expects, by using your HTTP request tool.
- This example uses the `POST` method:
+ This example uses the **POST** method with the copied URL, which looks like the following sample:
`POST https://{server-name}.{region}.logic.azure.com/workflows/{workflow-ID}/triggers/{trigger-name}/paths/invoke/?api-version=2016-10-01&sp=%2Ftriggers%2F{trigger-name}%2Frun&sv=1.0&sig={shared-access-signature}`
Based on whether you have a Standard or Consumption logic app workflow, follow t
## Select expected request method
-By default, the Request trigger expects a `POST` request. However, you can specify a different method that the caller must use, but only a single method.
+By default, the **Request** trigger expects a `POST` request. However, you can specify a different method that the caller must use, but only a single method.
### [Standard](#tab/standard)
-1. In the Request trigger, open the **Advanced parameters** list, and select **Method**, which adds this property to the trigger.
+1. In the **Request** trigger, open the **Advanced parameters** list, and select **Method**, which adds this property to the trigger.
1. From the **Method** list, select the method that the trigger should expect instead. Or, you can specify a custom method.
By default, the Request trigger expects a `POST` request. However, you can speci
### [Consumption](#tab/consumption)
-1. In the Request trigger, open the **Add new parameter** list, and select **Method**, which adds this property to the trigger.
+1. In the **Request** trigger, open the **Add new parameter** list, and select **Method**, which adds this property to the trigger.
1. From the **Method** list, select the method that the trigger should expect instead. Or, you can specify a custom method.
When you want to accept parameter values through the endpoint's URL, you have th
* [Accept values through GET parameters](#get-parameters) or URL parameters.
- These values are passed as name-value pairs in the endpoint's URL. For this option, you need to use the GET method in your Request trigger. In a subsequent action, you can get the parameter values as trigger outputs by using the `triggerOutputs()` function in an expression.
+   These values are passed as name-value pairs in the endpoint's URL. For this option, you need to use the GET method in your **Request** trigger. In a subsequent action, you can get the parameter values as trigger outputs by using the `triggerOutputs()` function in an expression. An example expression appears after this list.
-* [Accept values through a relative path](#relative-path) for parameters in your Request trigger.
+* [Accept values through a relative path](#relative-path) for parameters in your **Request** trigger.
These values are passed through a relative path in the endpoint's URL. You also need to explicitly [select the method](#select-method) that the trigger expects. In a subsequent action, you can get the parameter values as trigger outputs by referencing those outputs directly.
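For the GET-parameter option in the first bullet, a later action could read a query parameter with an expression along the following lines. The parameter name `postalCode` is illustrative, and query parameters typically surface under `queries` in the trigger outputs, so verify the path against your trigger's actual outputs:

```
triggerOutputs()['queries']['postalCode']
```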
When you want to accept parameter values through the endpoint's URL, you have th
### [Standard](#tab/standard)
-1. In the Request trigger, open the **Advanced parameters**, add the **Method** property to the trigger, and select the **GET** method.
+1. In the **Request** trigger, open the **Advanced parameters** list, add the **Method** property to the trigger, and select the **GET** method.
For more information, see [Select expected request method](#select-method).
When you want to accept parameter values through the endpoint's URL, you have th
#### Test your callable endpoint
-1. From the Request trigger, copy the workflow URL, and paste the URL into another browser window. In the URL, add the parameter name and value to the URL in the following format, and press Enter.
+1. From the **Request** trigger, copy the workflow URL, and paste the URL into another browser window. In the URL, add the parameter name and value to the URL in the following format, and press Enter.
`...invoke/{parameter-name}/{parameter-value}?api-version=2022-05-01...`
When you want to accept parameter values through the endpoint's URL, you have th
### [Consumption](#tab/consumption)
-1. In the Request trigger, open the **Add new parameter list**, add the **Method** property to the trigger, and select the **GET** method.
+1. In the **Request** trigger, open the **Add new parameter** list, add the **Method** property to the trigger, and select the **GET** method.
For more information, see [Select expected request method](#select-method).
When you want to accept parameter values through the endpoint's URL, you have th
#### Test your callable endpoint
-1. From the Request trigger, copy the workflow URL, and paste the URL into another browser window. In the URL, add the parameter name and value following the question mark (`?`) to the URL in the following format, and press Enter.
+1. From the **Request** trigger, copy the workflow URL, and paste the URL into another browser window. In the URL, add the parameter name and value following the question mark (`?`) to the URL in the following format, and press Enter.
`...invoke?{parameter-name=parameter-value}&api-version=2016-10-01...`
When you want to accept parameter values through the endpoint's URL, you have th
### [Standard](#tab/standard)
-1. In the Request trigger, open the **Advanced parameters** list, and select **Relative path**, which adds this property to the trigger.
+1. In the **Request** trigger, open the **Advanced parameters** list, and select **Relative path**, which adds this property to the trigger.
![Screenshot shows Standard workflow, Request trigger, and added property named Relative path.](./media/logic-apps-http-endpoint/add-relative-path-standard.png)
When you want to accept parameter values through the endpoint's URL, you have th
![Screenshot shows Standard workflow, Request trigger, and Relative path parameter value.](./media/logic-apps-http-endpoint/relative-path-url-standard.png)
-1. Under the Request trigger, [follow these general steps to add the action where you want to use the parameter value](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
+1. Under the **Request** trigger, [follow these general steps to add the action where you want to use the parameter value](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
For this example, add the **Response** action.
When you want to accept parameter values through the endpoint's URL, you have th
1. Save your workflow.
- In the Request trigger, the callback URL is updated and now includes the relative path, for example:
+ In the **Request** trigger, the callback URL is updated and now includes the relative path, for example:
`https://mystandardlogicapp.azurewebsites.net/api/Stateful-Workflow/triggers/When_a_HTTP_request_is_received/invoke/address/%7BpostalCode%7D?api-version=2022-05-01&sp=%2Ftriggers%2FWhen_a_HTTP_request_is_received%2Frun&sv=1.0&sig={shared-access-signature}`
-1. To test your callable endpoint, copy the updated callback URL from the Request trigger, paste the URL into another browser window, replace `%7BpostalCode%7D` in the URL with `123456`, and press Enter.
+1. To test the callable endpoint, copy the updated callback URL from the **Request** trigger, paste the URL into another browser window, replace `%7BpostalCode%7D` in the URL with `123456`, and press Enter.
The browser returns a response with this text: `Postal Code: 123456`
When you want to accept parameter values through the endpoint's URL, you have th
### [Consumption](#tab/consumption)
-1. In the Request trigger, open the **Add new parameter** list, and select **Relative path**, which adds this property to the trigger.
+1. In the **Request** trigger, open the **Add new parameter** list, and select **Relative path**, which adds this property to the trigger.
![Screenshot shows Consumption workflow, Request trigger, and added property named Relative path.](./media/logic-apps-http-endpoint/add-relative-path-consumption.png)
When you want to accept parameter values through the endpoint's URL, you have th
![Screenshot shows Consumption workflow, Request trigger, and Relative path parameter value.](./media/logic-apps-http-endpoint/relative-path-url-consumption.png)
-1. Under the Request trigger, [follow these general steps to add the action where you want to use the parameter value](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
+1. Under the **Request** trigger, [follow these general steps to add the action where you want to use the parameter value](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
For this example, add the **Response** action.
When you want to accept parameter values through the endpoint's URL, you have th
1. Save your workflow.
- In the Request trigger, the callback URL is updated and now includes the relative path, for example:
+ In the **Request** trigger, the callback URL is updated and now includes the relative path, for example:
`https://prod-07.westus.logic.azure.com/workflows/{logic-app-resource-ID}/triggers/manual/paths/invoke/address/{postalCode}?api-version=2016-10-01&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig={shared-access-signature}`
-1. To test your callable endpoint, copy the updated callback URL from the Request trigger, paste the URL into another browser window, replace `{postalCode}` in the URL with `123456`, and press Enter.
+1. To test the callable endpoint, copy the updated callback URL from the **Request** trigger, paste the URL into another browser window, replace `{postalCode}` in the URL with `123456`, and press Enter.
The browser returns a response with this text: `Postal Code: 123456`
After you create the endpoint, you can trigger the workflow by sending an HTTPS
## Tokens generated from schema
-When you provide a JSON schema in the Request trigger, the workflow designer generates tokens for the properties in that schema. You can then use those tokens for passing data through your workflow.
+When you provide a JSON schema in the **Request** trigger, the workflow designer generates tokens for the properties in that schema. You can then use those tokens for passing data through your workflow.
For example, if you add more properties, such as `"suite"`, to your JSON schema, tokens for those properties are available for you to use in the later steps for your workflow. Here's the complete JSON schema:
For nested workflows, the parent workflow continues to wait for a response until
### Construct the response
-In the response body, you can include multiple headers and any type of content. For example, the following response's header specifies that the response's content type is `application/json` and that the body contains values for the `town` and `postalCode` properties, based on the JSON schema described earlier in this topic for the Request trigger.
+In the response body, you can include multiple headers and any type of content. For example, the following response's header specifies that the response's content type is `application/json` and that the body contains values for the `town` and `postalCode` properties, based on the JSON schema described earlier in this topic for the **Request** trigger.
![Screenshot shows Response action and response content type.](./media/logic-apps-http-endpoint/content-for-response-action.png)
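From the caller's side, a response built this way can be parsed as JSON. The following sketch is illustrative only; the callback URL placeholder is an assumption, and the `town` and `postalCode` property names follow the example schema described in this topic:

```python
import requests

# Placeholder: the callback URL copied from the workflow's Request trigger.
callback_url = "<request-trigger-callback-url>"

response = requests.get(callback_url)

# The Response action in this example sets Content-Type to application/json,
# so the body can be parsed directly into a dictionary.
body = response.json()
print(body["town"], body["postalCode"])
```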
logic-apps Logic Apps Scenario Edi Send Batch Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-scenario-edi-send-batch-messages.md
Make sure that your batch receiver and batch sender logic app workflows use the
## Prerequisites
-To follow this example, you need these items:
-
* An Azure subscription. If you don't have a subscription, you can [start with a free Azure account](https://azure.microsoft.com/free/).

* Basic knowledge about how to create logic app workflows. For more information, see [Create an example Consumption logic app workflow in multitenant Azure Logic Apps](quickstart-create-example-consumption-workflow.md).
To follow this example, you need these items:
* To use Visual Studio rather than the Azure portal, make sure you [set up Visual Studio for working with Azure Logic Apps](quickstart-create-logic-apps-with-visual-studio.md).
+
<a name="receiver"></a>

## Create X12 batch receiver
the batch into subsets to collect messages with that key.
## Test your workflows
-To test your batching solution, post X12 messages to your batch sender logic app from a local tool or app that can send HTTP requests, such as [Insomnia](https://insomnia.rest/) or [Bruno](https://www.usebruno.com/). Soon, you start getting X12 messages in your request bin, either every 10 minutes or in batches of 10, all with the same partition key.
+To test your batching solution, post X12 messages to your batch sender logic app workflow using your HTTP request tool and its instructions. Soon, you start getting X12 messages in your request bin, either every 10 minutes or in batches of 10, all with the same partition key.
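For example, a small Python script can stand in for the HTTP request tool. The callback URL, content type, and the truncated X12 payload below are placeholders, not values from this scenario:

```python
import requests

# Placeholders: use your batch sender workflow's callback URL and a complete X12 interchange.
batch_sender_url = "<batch-sender-request-trigger-callback-url>"
x12_message = "ISA*00*...~GS*...~ST*850*0001~...~SE*...~GE*...~IEA*...~"  # truncated sample, not a valid interchange

# Send enough messages to reach the example's batch release criteria (10 messages).
for _ in range(10):
    response = requests.post(
        batch_sender_url,
        data=x12_message,
        headers={"Content-Type": "text/plain"},  # adjust to whatever your trigger expects
    )
    print(response.status_code)
```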
## Next steps
logic-apps Quickstart Create Example Consumption Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/quickstart-create-example-consumption-workflow.md
To create and manage a Consumption logic app workflow using other tools, see the
| **Region** | Yes | <*Azure-region*> | The Azure datacenter region for storing your app's information. This example deploys the sample logic app to the **West US** region in Azure. |
| **Enable log analytics** | Yes | **No** | This option appears and applies only when you select the **Consumption** logic app type. <br><br>Change this option only when you want to enable diagnostic logging. For this quickstart, keep the default selection. |
- When you're done, your settings look similar to the following example:
-
- :::image type="content" source="media/quickstart-create-example-consumption-workflow/create-logic-app-settings.png" alt-text="Screenshot shows Azure portal and logic app resource creation page with details for new logic app." lightbox="media/quickstart-create-example-consumption-workflow/create-logic-app-settings.png":::
-
> [!NOTE]
>
- > If you selected an Azure region that supports availability zone redundancy, the **Zone redundancy**
- > section is automatically enabled. This preview section offers the choice to enable availability zone
- > redundancy for your logic app. However, currently supported Azure regions don't include **West US**,
- > so you can ignore this section for this example. For more information, see
+ > Availability zones are automatically enabled for new and existing Consumption logic app workflows in
+ > [Azure regions that support availability zones](../reliability/availability-zones-service-support.md#azure-regions-with-availability-zone-support).
+ > For more information, see [Reliability in Azure Functions](../reliability/reliability-functions.md#availability-zone-support) and
> [Protect logic apps from region failures with zone redundancy and availability zones](set-up-zone-redundancy-availability-zones.md).
+ After you finish, your settings look similar to the following example:
+
+ :::image type="content" source="media/quickstart-create-example-consumption-workflow/create-logic-app-settings.png" alt-text="Screenshot shows Azure portal and logic app resource creation page with details for new logic app." lightbox="media/quickstart-create-example-consumption-workflow/create-logic-app-settings.png":::
+
1. When you're ready, select **Review + Create**.

1. On the validation page that appears, confirm all the provided information, and select **Create**.
This example uses an Office 365 Outlook action that sends an email each time tha
:::image type="content" source="media/quickstart-create-example-consumption-workflow/dynamic-content-see-more.png" alt-text="Screenshot shows open dynamic content list and selected option, See more." lightbox="media/quickstart-create-example-consumption-workflow/dynamic-content-see-more.png":::
- When you're done, the email subject looks like the following example:
+ After you finish, the email subject looks like the following example:
:::image type="content" source="media/quickstart-create-example-consumption-workflow/send-email-feed-title.png" alt-text="Screenshot shows action named Send an email, with example email subject and included property named Feed title." lightbox="media/quickstart-create-example-consumption-workflow/send-email-feed-title.png":::
This example uses an Office 365 Outlook action that sends an email each time tha
## Test your workflow
-To check that the workflow runs correctly, you can either wait for the trigger to fire based on your specifed schedule, or you can manually run the workflow.
+To check that the workflow runs correctly, you can either wait for the trigger to fire based on your specified schedule, or you can manually run the workflow.
* On the designer toolbar, from the **Run** menu, select **Run**.
If you don't receive emails from the workflow as expected:
## Clean up resources
-When you're done with this quickstart, delete the sample logic app resource and any related resources by deleting the resource group that you created for this example.
+When you complete this quickstart, delete the sample logic app resource and any related resources by deleting the resource group that you created for this example.
1. In the Azure search box, enter **resource groups**, and select **Resource groups**.
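If you prefer to script this cleanup instead of using the portal, a sketch with the Azure SDK for Python (the `azure-identity` and `azure-mgmt-resource` packages) might look like the following; the subscription ID and resource group name are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "<your-subscription-id>"     # placeholder
resource_group_name = "<your-resource-group>"  # the group you created for this quickstart

client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

# Deleting the resource group removes the logic app resource and any related resources in it.
poller = client.resource_groups.begin_delete(resource_group_name)
poller.wait()
print("Deleted resource group:", resource_group_name)
```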
logic-apps Secure Single Tenant Workflow Virtual Network Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/secure-single-tenant-workflow-virtual-network-private-endpoint.md
For more information, review the following documentation:
## Prerequisites
-You need to have a new or existing Azure virtual network that includes a subnet without any delegations. This subnet is used to deploy and allocate private IP addresses from the virtual network.
+- A new or existing Azure virtual network that includes a subnet without any delegations. This subnet is used to deploy and allocate private IP addresses from the virtual network.
-For more information, review the following documentation:
+ For more information, review the following documentation:
+
+ - [Quickstart: Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md)
+ - [What is subnet delegation?](../virtual-network/subnet-delegation-overview.md)
+ - [Add or remove a subnet delegation](../virtual-network/manage-subnet-delegation.md)
-- [Quickstart: Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md)
-- [What is subnet delegation?](../virtual-network/subnet-delegation-overview.md)
-- [Add or remove a subnet delegation](../virtual-network/manage-subnet-delegation.md)

<a name="set-up-inbound"></a>
For more information, review the following documentation:
To secure inbound traffic to your workflow, complete these high-level steps:
-1. Start your workflow with a built-in trigger that can receive and handle inbound requests, such as the Request trigger or the HTTP + Webhook trigger. This trigger sets up your workflow with a callable endpoint.
+1. Start your workflow with a built-in trigger that can receive and handle inbound requests, such as the **Request** trigger or the **HTTP + Webhook** trigger. This trigger sets up your workflow with a callable endpoint.
1. Add a private endpoint for your logic app resource to your virtual network.
To secure inbound traffic to your workflow, complete these high-level steps:
Along with the [virtual network setup in the top-level prerequisites](#prerequisites), you need to have a new or existing Standard logic app workflow that starts with a built-in trigger that can receive requests.
-For example, the Request trigger creates an endpoint on your workflow that can receive and handle inbound requests from other callers, including workflows. This endpoint provides a URL that you can use to call and trigger the workflow. For this example, the steps continue with the Request trigger.
+For example, the **Request** trigger creates an endpoint on your workflow that can receive and handle inbound requests from other callers, including workflows. This endpoint provides a URL that you can use to call and trigger the workflow. For this example, the steps continue with the **Request** trigger.
For more information, review [Receive and respond to inbound HTTP requests using Azure Logic Apps](../connectors/connectors-native-reqres.md).
For more information, review [Receive and respond to inbound HTTP requests using
1. If you haven't already, create a single-tenant based logic app, and a blank workflow.
-1. After the designer opens, add the Request trigger as the first step in your workflow.
+1. After the designer opens, add the **Request** trigger as the first step in your workflow.
1. Based on your scenario requirements, add other actions that you want to run in your workflow.
For more information, review [Create single-tenant logic app workflows in Azure
1. On the **Overview** page, copy and save the **Workflow URL** for later use.
- To trigger the workflow, you call or send a request to this URL.
-
-1. Make sure that the URL works by calling or sending a request to the URL. You can use any local tool or app that you want for creating and sending HTTP requests, such as [Insomnia](https://insomnia.rest/) or [Bruno](https://www.usebruno.com/).
+1. To test the URL and trigger the workflow, send an HTTP request to the URL by using your HTTP request tool and its instructions.
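   For example, a short Python script can send that test request. The workflow URL is a placeholder copied from the **Overview** page, and the empty JSON body is an assumption that your **Request** trigger doesn't require a specific schema:

   ```python
   import requests

   workflow_url = "<workflow-url-from-overview-page>"  # placeholder

   # Before you restrict inbound traffic, this call should reach the workflow from any
   # network that can resolve the URL. Adjust the body if your trigger expects specific content.
   response = requests.post(workflow_url, json={})
   print(response.status_code)
   ```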
### Set up private endpoint connection
For more information, review the following documentation:
1. If you haven't already, in the [Azure portal](https://portal.azure.com), create a single-tenant based logic app, and a blank workflow.
-1. After the designer opens, add the Request trigger as the first step in your workflow.
+1. After the designer opens, add the **Request** trigger as the first step in your workflow.
1. Add an HTTP action to call an internal service that's unavailable through the Internet and runs with a private IP address such as `10.0.1.3`.
logic-apps Set Up Zone Redundancy Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/set-up-zone-redundancy-availability-zones.md
Title: Protect logic apps from region failures with zone redundancy
-description: Set up availability zones for logic apps with zone redundancy for business continuity and disaster recovery.
+ Title: Protect logic apps from zonal failures
+description: Set up availability zone support for logic apps with zone redundancy for business continuity and disaster recovery.
ms.suite: integration Previously updated : 01/04/2024 Last updated : 07/17/2024
-#Customer intent: As a developer, I want to protect logic apps from regional failures by setting up availability zones.
+#Customer intent: As a developer, I want to protect logic apps from zonal failures by setting up availability zones and zone redundancy.
-# Protect logic apps from zonal failures with zone redundancy and availability zones
+# Protect logic apps from zonal failures with availability zones and zone redundancy
[!INCLUDE [logic-apps-sku-consumption-standard](../../includes/logic-apps-sku-consumption-standard.md)]

In each Azure region, *availability zones* are physically separate locations that are tolerant to local failures. Such failures can range from software and hardware failures to events such as earthquakes, floods, and fires. These zones achieve tolerance through the redundancy and logical isolation of Azure services.
-To provide resiliency and distributed availability, at least three separate availability zones exist in any Azure region that supports and enables zone redundancy. The Azure Logic Apps platform distributes these zones and logic app workloads across these zones. This capability is a key requirement for enabling resilient architectures and providing high availability if datacenter failures happen in a region. For more information about availability zone redundancy, review [Azure regions and availability zones](../availability-zones/az-overview.md).
+To provide resiliency and distributed availability, at least three separate availability zones exist in any Azure region that supports and enables zone redundancy. The Azure Logic Apps platform distributes these zones and logic app workloads across these zones. This capability is a key requirement for enabling resilient architectures and providing high availability if datacenter failures happen in a region.
-This article provides a brief overview, considerations, and information about how to enable availability zone redundancy in Azure Logic Apps.
+For more information, see the following documentation:
-## Considerations
-
-### [Standard](#tab/standard)
+* [What are availability zones](../reliability/availability-zones-overview.md)?
+* [Azure regions with availability zone support](../reliability/availability-zones-service-support.md)
-Availability zone support is available for Standard logic apps, which are powered by Azure Functions extensibility. For more information, see [What is reliability in Azure Functions?](../reliability/reliability-functions.md#availability-zone-support).
+This guide provides a brief overview, considerations, and information about how to enable availability zones in Azure Logic Apps.
-* You can enable availability zone redundancy *only when you create* Standard logic apps, either in a [supported Azure region](../azure-functions/azure-functions-az-redundancy.md#requirements) or in an [App Service Environment v3 (ASE v3) - Windows plans only](../app-service/environment/overview-zone-redundancy.md). Currently, this capability supports only built-in connector operations, not Azure (managed) connector operations.
-
-* You can enable availability zone redundancy *only for new* Standard logic apps with workflows that run in single-tenant Azure Logic Apps. You can't enable availability zone redundancy for existing Standard logic app workflows.
+## Considerations
-* You can enable availability zone redundancy *only at creation time*. No programmatic tool support, such as Azure PowerShell or Azure CLI, currently exists to enable availability zone redundancy after creation.
+### [Standard](#tab/standard)
-### [Consumption (preview)](#tab/consumption)
+Availability zones are supported with Standard logic app workflows, which run in single-tenant Azure Logic Apps and are powered by Azure Functions extensibility. For more information, see [Reliability in Azure Functions](../reliability/reliability-functions.md#availability-zone-support).
-Availability zone redundancy is currently in *preview* for Consumption logic apps, which run in multi-tenant Azure Logic Apps. During preview, the following considerations apply:
+* You can enable this capability only when you create a Standard logic app in a [supported Azure region](../reliability/reliability-functions.md#regional-availability) or in an [App Service Environment v3 (ASE v3) - Windows plans only](../app-service/environment/overview-zone-redundancy.md).
-* You can enable availability zone redundancy *only for new* Consumption logic app workflows that you create in the following Azure regions, which will expand as available:
+* You can enable this capability *only for new* Standard logic apps. You can't enable availability zone support for existing Standard logic app workflows.
- * Australia East
- * Brazil South
- * Canada Central
- * Central India
- * Central US
- * East Asia
- * East US
- * East US 2
- * France Central
- * Germany West Central
- * Japan East
- * Korea Central
- * Norway East
- * South Central US
- * UK South
- * West Europe
- * West US 3
+* You can enable this capability *only at creation time*. No programmatic tool support, such as Azure PowerShell or Azure CLI, currently exists to enable availability zone support after creation.
- You have to create these Consumption logic apps *using the Azure portal*. No programmatic tool support, such as Azure PowerShell or Azure CLI, currently exists to enable availability zone redundancy.
+* This capability supports only built-in connector operations, which directly run with the Azure Logic Apps runtime, not connector operations that are hosted and run in Azure.
-* You can't enable availability zone redundancy for existing Consumption logic app workflows. Any existing Consumption logic app workflows are unaffected until mid-May 2022.
+### [Consumption](#tab/consumption)
- However, after this time, the Azure Logic Apps team will gradually start to move existing Consumption logic app workflows towards using availability zone redundancy, several Azure regions at a time. The option to enable availability zone redundancy on new Consumption logic app workflows remains available during this time.
+Availability zones are supported with Consumption logic app workflows, which run in multitenant Azure Logic Apps. This capability is automatically enabled for new and existing Consumption logic app workflows in [Azure regions that support availability zones](../reliability/availability-zones-service-support.md#azure-regions-with-availability-zone-support).
With HTTP-based actions, certificates exported or created with AES256 encryption
## Enable availability zones
-### [Standard](#tab/standard)
+For Standard logic apps only, follow these steps:
1. In the [Azure portal](https://portal.azure.com), start creating a Standard logic app. On the **Create Logic App** page, stop after you select **Standard** as the plan type for your logic app.
- ![Screenshot showing Azure portal, "Create Logic App" page, logic app details, and the "Standard" plan type selected.](./media/set-up-zone-redundancy-availability-zones/select-standard-plan.png)
+ :::image type="content" source="media/set-up-zone-redundancy-availability-zones/select-standard-plan.png" alt-text="Screenshot shows Azure portal, Create Logic App page, logic app details, and selected Standard plan type." lightbox="media/set-up-zone-redundancy-availability-zones/select-standard-plan.png":::
- For a tutorial, review [Create Standard logic app workflows with single-tenant Azure Logic Apps in the Azure portal](create-single-tenant-workflows-azure-portal.md).
+ For a tutorial, see [Create Standard logic app workflows with single-tenant Azure Logic Apps in the Azure portal](create-single-tenant-workflows-azure-portal.md).
After you select **Standard**, the **Zone redundancy** section and options become available.
-1. Under **Zone redundancy**, select **Enabled**.
-
- At this point, your logic app creation experience appears similar to this example:
-
- ![Screenshot showing Azure portal, "Create Logic App" page, Standard logic app details, and the "Enabled" option under "Zone redundancy" selected.](./media/set-up-zone-redundancy-availability-zones/enable-zone-redundancy-standard.png)
-
-1. Finish creating your logic app workflow.
-
-1. If you use a firewall and haven't set up access for traffic through the required IP addresses, make sure to complete that [requirement](#prerequisites).
-
-### [Consumption (preview)](#tab/consumption)
-
-1. In the [Azure portal](https://portal.azure.com), start creating a Consumption logic app. On the **Create Logic App** page, stop after you select **Consumption** as the plan type for your logic app.
-
- ![Screenshot showing Azure portal, "Create Logic App" page, logic app details, and the "Consumption" plan type selected.](./media/set-up-zone-redundancy-availability-zones/select-consumption-plan.png)
-
- For a quick tutorial, see [Quickstart: Create an example Consumption logic app workflow in multi-tenant Azure Logic Apps using the Azure portal](quickstart-create-example-consumption-workflow.md).
-
- After you select **Consumption**, the **Zone redundancy** section and options become available.
+ > [!NOTE]
+ >
+ > The **Zone redundancy** options appear unavailable if you select an unsupported Azure region or an
+ > existing Windows plan that was created in an unsupported Azure region. Make sure to select a supported
+ > Azure region and a Windows plan that was created in a supported Azure region, or create a new Windows plan.
1. Under **Zone redundancy**, select **Enabled**.

   At this point, your logic app creation experience appears similar to this example:
- ![Screenshot showing Azure portal, "Create Logic App" page, Consumption logic app details, and the "Enabled" option under "Zone redundancy" selected.](./media/set-up-zone-redundancy-availability-zones/enable-zone-redundancy-consumption.png)
+ :::image type="content" source="media/set-up-zone-redundancy-availability-zones/enable-zone-redundancy-standard.png" alt-text="Screenshot shows Azure portal, Create Logic App page, Standard logic app details, and the Enabled option selected under Zone redundancy." lightbox="media/set-up-zone-redundancy-availability-zones/enable-zone-redundancy-standard.png":::
1. Finish creating your logic app workflow.

1. If you use a firewall and haven't set up access for traffic through the required IP addresses, make sure to complete that [requirement](#prerequisites).

--
-## Next steps
+## Related content
* [Business continuity and disaster recovery for Azure Logic Apps](business-continuity-disaster-recovery-guidance.md)
machine-learning Concept Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-customer-managed-keys.md
For example, the managed identity for Azure Cosmos DB would need to have those p
When you *don't* use a customer-managed key, Microsoft creates and manages resources in a Microsoft-owned Azure subscription and uses a Microsoft-managed key to encrypt the data.
-When you use a customer-managed key, the resources are in your Azure subscription and encrypted with your key. While these resources exist in your subscription, Microsoft manages them. They're automatically created and configured when you create your Azure Machine Learning workspace.
+When you use a customer-managed key, the resources are in your Azure subscription and encrypted with your key. While these resources exist in your subscription, Microsoft manages them. These resources are automatically created and configured when you create your Azure Machine Learning workspace.
-These Microsoft-managed resources are located in a new Azure resource group that's created in your subscription. This resource group is separate from the resource group for your workspace. It contains the Microsoft-managed resources that your key is used with. The formula for naming the resource group is: `<Azure Machine Learning workspace resource group name><GUID>`.
+These Microsoft-managed resources are located in a new Azure resource group created in your subscription. This resource group is separate from the resource group for your workspace. It contains the Microsoft-managed resources that your key is used with. The formula for naming the resource group is: `<Azure Machine Learning workspace resource group name><GUID>`.
> [!TIP]
> The [Request Units](../cosmos-db/request-units.md) for Azure Cosmos DB automatically scale as needed.
Azure Machine Learning uses compute resources to train and deploy machine learni
Compute clusters have local OS disk storage and can mount data from storage accounts in your subscription during a job. When you're mounting data from your own storage account in a job, you can enable customer-managed keys on those storage accounts for encryption.
-The OS disk for each compute node that's stored in Azure Storage is always encrypted with Microsoft-managed keys in Azure Machine Learning storage accounts, and not with customer-managed keys. This compute target is ephemeral, so data that's stored on the OS disk is deleted after the cluster scales down. Clusters typically scale down when no jobs are queued, autoscaling is on, and the minimum node count is set to zero. The underlying virtual machine is deprovisioned, and the OS disk is deleted.
+The OS disk for each compute node is stored in Azure Storage, and is always encrypted with Microsoft-managed keys in Azure Machine Learning storage accounts, and not with customer-managed keys. This compute target is ephemeral, so data stored on the OS disk is deleted after the cluster scales down. Clusters typically scale down when no jobs are queued, autoscaling is on, and the minimum node count is set to zero. The underlying virtual machine is deprovisioned, and the OS disk is deleted.
Azure Disk Encryption isn't supported for the OS disk. Each virtual machine also has a local temporary disk for OS operations. If you want, you can use the disk to stage training data. If you create the workspace with the `hbi_workspace` parameter set to `TRUE`, the temporary disk is encrypted. This environment is short lived (only during your job), and encryption support is limited to system-managed keys only.
Microsoft creates the following resources to store metadata for your workspace:
From the perspective of data lifecycle management, data in the preceding resources is created and deleted as you create and delete corresponding objects in Azure Machine Learning.
-Your Azure Machine Learning workspace reads and writes data by using its managed identity. This identity is granted access to the resources through a role assignment (Azure role-based access control) on the data resources. The encryption key that you provide is used to encrypt data that's stored on Microsoft-managed resources. It's also used to create indexes for Azure AI Search at runtime.
+Your Azure Machine Learning workspace reads and writes data by using its managed identity. This identity is granted access to the resources through a role assignment (Azure role-based access control) on the data resources. The encryption key that you provide is used to encrypt data that's stored on Microsoft-managed resources. At runtime, the key is also used to create indexes for Azure AI Search.
Extra networking controls are configured when you create a private link endpoint on your workspace to allow for inbound connectivity. This configuration includes the creation of a private link endpoint connection to the Azure Cosmos DB instance. Network access is restricted to only trusted Microsoft services.
Extra networking controls are configured when you create a private link endpoint
A new architecture for the customer-managed key encryption workspace is available in preview, reducing cost compared to the current architecture and mitigating likelihood of Azure policy conflicts. In this new model, encrypted data is stored service-side on Microsoft-managed resources instead of in your subscription.
-Data that previously was stored in CosmosDB in your subscription, is stored in multi-tenant Microsoft-managed resources using document-level encryption using your encryption key. Search indices that were previously stored in Azure AI Search in your subscription, are stored on Microsoft-managed resources that are provisioned dedicated for you per workspace. The cost of the Azure AI search instance is charged under your Azure ML workspace in Azure Cost Management.
+Data that previously was stored in Azure Cosmos DB in your subscription, is stored in multitenant Microsoft-managed resources with document-level encryption using your encryption key. Search indices that were previously stored in Azure AI Search in your subscription, are stored on Microsoft-managed resources that are provisioned dedicated for you per workspace. The cost of the Azure AI search instance is charged under your Azure Machine Learning workspace in Microsoft Cost Management.
-Pipelines metadata that previously was stored in a storage account in a managed resource group, is now stored on the storage account in your subscription that is associated to the Azure Machine Learning workspace. Since this Azure Storage resource is managed separately in your subscription, you are responsible to configure encryption settings on it.
+Pipelines metadata that previously was stored in a storage account in a managed resource group, is now stored on the storage account in your subscription that is associated to the Azure Machine Learning workspace. Since this Azure Storage resource is managed separately in your subscription, you're responsible to configure encryption settings on it.
-Set the `enableServiceSideCMKEncryption` when you create a workspace to opt-in for this preview. Preview availability varies by [workspace kind](concept-workspace.md):
+To opt in for this preview, set the `enableServiceSideCMKEncryption` property by using the REST API, or in your Bicep or Azure Resource Manager template. You can also use the Azure portal. Preview availability varies by [workspace kind](concept-workspace.md):
| Kind | Supported |
| -- | -- |
Set the `enableServiceSideCMKEncryption` when you create a workspace to opt-in f
| Hub | No |
| Project | No |

> [!NOTE]
> During this preview, key rotation and data labeling capabilities are not supported.
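As an illustration of the REST option, the following sketch sends the property in a workspace create request through the Azure Resource Manager REST API. Everything here is a placeholder or an assumption: the `api-version`, the placement of `enableServiceSideCMKEncryption` under `properties`, and the remaining required workspace properties. Confirm the exact payload against the Azure Machine Learning workspaces REST reference.

```python
import requests

# Placeholders: fill in your own identifiers and a valid ARM access token
# (for example, from `az account get-access-token`).
subscription_id = "<subscription-id>"
resource_group = "<resource-group>"
workspace_name = "<workspace-name>"
token = "<arm-access-token>"

url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/resourceGroups/{resource_group}/providers/Microsoft.MachineLearningServices"
    f"/workspaces/{workspace_name}?api-version=2024-04-01"  # assumed api-version
)

body = {
    "location": "<azure-region>",
    "properties": {
        # Assumed placement: opt in to the service-side CMK encryption preview at creation time.
        "enableServiceSideCMKEncryption": True,
        # ...plus the other required workspace properties, such as encryption settings
        # and the IDs of dependent resources (storage account, key vault, and so on).
    },
}

response = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
print(response.status_code, response.text)
```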
machine-learning How To Deploy Models Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-models-serverless.md
Previously updated : 05/09/2024- Last updated : 07/19/2024+
+reviewer: santiagxf
In this article, you learn how to deploy a model from the model catalog as a serverless API with pay-as-you-go token-based billing.
-Certain models in the model catalog can be deployed as a serverless API with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need. This deployment option doesn't require quota from your subscription.
+[Certain models in the model catalog](concept-endpoint-serverless-availability.md) can be deployed as a serverless API with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need. This deployment option doesn't require quota from your subscription.
## Prerequisites
Certain models in the model catalog can be deployed as a serverless API with pay
You can use any compatible web browser to [deploy ARM templates](../azure-resource-manager/templates/deploy-portal.md) in the Microsoft Azure portal or using any of the deployment tools. This tutorial uses the [Azure CLI](/cli/azure/).
-## Subscribe your workspace to the model offering
-
-For models offered through the Azure Marketplace, you can deploy them to serverless API endpoints to consume their predictions. If it's your first time deploying the model in the workspace, you have to subscribe your workspace for the particular model offering from the Azure Marketplace. Each workspace has its own subscription to the particular Azure Marketplace offering of the model, which allows you to control and monitor spending.
-
-> [!NOTE]
-> Models offered through the Azure Marketplace are available for deployment to serverless API endpoints in specific regions. Check [Region availability for models in Serverless API endpoints](concept-endpoint-serverless-availability.md) to verify which regions are available. If the one you need is not listed, you can deploy to a workspace in a supported region and then [consume serverless API endpoints from a different workspace](how-to-connect-models-serverless.md).
+## Find your model and model ID in the model catalog
1. Sign in to [Azure Machine Learning studio](https://ml.azure.com)
-1. Ensure your account has the **Azure AI Developer** role permissions on the resource group, or that you meet the [permissions required to subscribe to model offerings](#permissions-required-to-subscribe-to-model-offerings).
+1. For models offered through the Azure Marketplace, ensure that your account has the **Azure AI Developer** role permissions on the resource group, or that you meet the [permissions required to subscribe to model offerings](#permissions-required-to-subscribe-to-model-offerings).
+
+ Models that are offered by non-Microsoft providers (for example, Llama and Mistral models) are billed through the Azure Marketplace. For such models, you're required to subscribe your workspace to the particular model offering. Models that are offered by Microsoft (for example, Phi-3 models) don't have this requirement, as billing is done differently. For details about billing for serverless deployment of models in the model catalog, see [Billing for serverless APIs](concept-model-catalog.md#pay-for-model-usage-in-maas).
-1. Go to your workspace.
+1. Go to your workspace. To use the serverless API model deployment offering, your workspace must belong to one of the [regions that are supported for serverless deployment](concept-endpoint-serverless-availability.md) for the particular model you want to deploy.
1. Select **Model catalog** from the left sidebar and find the model card of the model you want to deploy. In this article, you select a **Meta-Llama-3-8B-Instruct** model.
For models offered through the Azure Marketplace, you can deploy them to serverl
:::image type="content" source="media/how-to-deploy-models-serverless/model-card.png" alt-text="A screenshot showing a model's details page." lightbox="media/how-to-deploy-models-serverless/model-card.png":::
+The next section covers the steps for subscribing your workspace to a model offering. If you're deploying a Microsoft model, you can skip this section and go to [Deploy the model to a serverless API endpoint](#deploy-the-model-to-a-serverless-api-endpoint).
+
+## Subscribe your workspace to the model offering
+
+For non-Microsoft models offered through the Azure Marketplace, you can deploy them to serverless API endpoints to consume their predictions. If it's your first time deploying the model in the workspace, you have to subscribe your workspace for the particular model offering from the Azure Marketplace. Each workspace has its own subscription to the particular Azure Marketplace offering of the model, which allows you to control and monitor spending.
+
+> [!NOTE]
+> Models offered through the Azure Marketplace are available for deployment to serverless API endpoints in specific regions. Check [Region availability for models in serverless API endpoints](concept-endpoint-serverless-availability.md) to verify which models and regions are available. If the one you need is not listed, you can deploy to a workspace in a supported region and then [consume serverless API endpoints from a different workspace](how-to-connect-models-serverless.md).
1. Create the model's marketplace subscription. When you create a subscription, you accept the terms and conditions associated with the model offer.

    # [Studio](#tab/azure-studio)
- 1. On the model's **Details** page, select **Deploy** and then select **Serverless API** to open the deployment wizard.
+ 1. On the model's **Details** page, select **Deploy** and then select **Serverless API with Azure AI Content Safety (preview)** to open the deployment wizard.
1. Select the checkbox to acknowledge the Microsoft purchase policy.
For models offered through the Azure Marketplace, you can deploy them to serverl
} ```
-1. Once you sign up the workspace for the particular Azure Marketplace offering, subsequent deployments of the same offering in the same workspace don't require subscribing again.
+1. Once you subscribe the workspace for the particular Azure Marketplace offering, subsequent deployments of the same offering in the same workspace don't require subscribing again.
1. At any point, you can see the model offers to which your workspace is currently subscribed:
For models offered through the Azure Marketplace, you can deploy them to serverl
## Deploy the model to a serverless API endpoint
-Once you've created a model's subscription, you can deploy the associated model to a serverless API endpoint. The serverless API endpoint provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance organizations need. This deployment option doesn't require quota from your subscription.
+Once you've created a subscription for a non-Microsoft model, you can deploy the associated model to a serverless API endpoint. For Microsoft models (such as Phi-3 models), you don't need to create a subscription.
+
+The serverless API endpoint provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance organizations need. This deployment option doesn't require quota from your subscription.
-In this article, you create an endpoint with name **meta-llama3-8b-qwerty**.
+In this section, you create an endpoint with the name **meta-llama3-8b-qwerty**.
1. Create the serverless endpoint.

    # [Studio](#tab/azure-studio)
- 1. From the previous wizard, select **Deploy** (if you've just subscribed the workspace to the model offer in the previous section), or select **Continue to deploy** (if your deployment wizard had the note *You already have an Azure Marketplace subscription for this workspace*).
+ 1. To deploy a Microsoft model that doesn't require subscribing to a model offering, select **Deploy** and then select **Serverless API with Azure AI Content Safety (preview)** to open the deployment wizard.
+
+ 1. For a non-Microsoft model that requires a model subscription, if you just subscribed your workspace to the model offer in the previous section, continue by selecting **Deploy**. Otherwise, select **Continue to deploy** (if your deployment wizard showed the note *You already have an Azure Marketplace subscription for this workspace*).
:::image type="content" source="media/how-to-deploy-models-serverless/deploy-pay-as-you-go-subscribed-workspace.png" alt-text="A screenshot showing a workspace that is already subscribed to the offering." lightbox="media/how-to-deploy-models-serverless/deploy-pay-as-you-go-subscribed-workspace.png":::
In this article, you create an endpoint with name **meta-llama3-8b-qwerty**.
1. At this point, your endpoint is ready to be used.
-1. If you need to consume this deployment from a different workspace, or you plan to use prompt flow to build intelligent applications, you need to create a connection to the serverless API deployment. To learn how to configure an existing serverless API endpoint on a new project or hub, see [Consume deployed serverless API endpoints from a different workspace or from Prompt flow](how-to-connect-models-serverless.md).
+1. If you need to consume this deployment from a different workspace, or you plan to use prompt flow to build intelligent applications, you need to create a connection to the serverless API deployment. To learn how to configure an existing serverless API endpoint on a new workspace or hub, see [Consume deployed serverless API endpoints from a different workspace or from Prompt flow](how-to-connect-models-serverless.md).
> [!TIP]
> If you're using prompt flow in the same workspace where the deployment was deployed, you still need to create the connection.
-## Using the serverless API endpoint
+## Use the serverless API endpoint
Models deployed in Azure Machine Learning and Azure AI studio in Serverless API endpoints support the [Azure AI Model Inference API](reference-model-inference-api.md) that exposes a common set of capabilities for foundational models and that can be used by developers to consume predictions from a diverse set of models in a uniform and consistent way.
-Read more about the [capabilities of this API](reference-model-inference-api.md#capabilities) and how [you can leverage it when building applications](reference-model-inference-api.md#getting-started).
+Read more about the [capabilities of this API](reference-model-inference-api.md#capabilities) and how [you can use it when building applications](reference-model-inference-api.md#getting-started).
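As a minimal sketch, a caller can reach the endpoint's chat completions route over plain HTTP. The endpoint URI, key, and authorization header below are placeholders and assumptions; your deployment's details page shows the exact values and header to use.

```python
import requests

# Placeholders: copy the endpoint URI and key from the deployment's details page.
endpoint = "https://<your-serverless-endpoint>.<region>.models.ai.azure.com"
key = "<endpoint-key>"

payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "In one sentence, what is a serverless API endpoint?"},
    ],
    "max_tokens": 128,
}

# A bearer key is assumed here; check your endpoint's details page for the exact auth scheme.
response = requests.post(
    f"{endpoint}/chat/completions",
    json=payload,
    headers={"Authorization": f"Bearer {key}"},
)
print(response.json()["choices"][0]["message"]["content"])
```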
## Delete endpoints and subscriptions
az resource delete --name <resource-name>
## Cost and quota considerations for models deployed as serverless API endpoints
-Models deployed as a serverless API endpoint are offered through the Azure Marketplace and integrated with Azure Machine Learning for use. You can find the Azure Marketplace pricing when deploying or fine-tuning the models.
+Quota is managed per deployment. Each deployment has a rate limit of 200,000 tokens per minute and 1,000 API requests per minute. However, we currently limit one deployment per model per workspace. Contact Microsoft Azure Support if the current rate limits aren't sufficient for your scenarios.
+
+#### Cost for Microsoft models
+
+You can find the pricing information on the __Pricing and terms__ tab of the deployment wizard when deploying Microsoft models (such as Phi-3 models) as serverless API endpoints.
+
+#### Cost for non-Microsoft models
+
+Non-Microsoft models deployed as serverless API endpoints are offered through the Azure Marketplace and integrated with Azure AI Studio for use. You can find the Azure Marketplace pricing when deploying or fine-tuning these models.
Each time a workspace subscribes to a given offer from the Azure Marketplace, a new resource is created to track the costs associated with its consumption. The same resource is used to track costs associated with inference and fine-tuning; however, multiple meters are available to track each scenario independently.
For more information on how to track costs, see [Monitor costs for models offere
:::image type="content" source="media/how-to-deploy-models-serverless/costs-model-as-service-cost-details.png" alt-text="A screenshot showing different resources corresponding to different model offers and their associated meters." lightbox="media/how-to-deploy-models-serverless/costs-model-as-service-cost-details.png":::
-Quota is managed per deployment. Each deployment has a rate limit of 200,000 tokens per minute and 1,000 API requests per minute. However, we currently limit one deployment per model per workspace. Contact Microsoft Azure Support if the current rate limits aren't sufficient for your scenarios.
## Permissions required to subscribe to model offerings
machine-learning How To Use Automated Ml For Ml Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automated-ml-for-ml-models.md
Title: Set up AutoML with the studio UI
+ Title: Set up Automated ML for tabular data in the studio
-description: Learn how to set up AutoML training jobs without a single line of code with Azure Machine Learning automated ML in the Azure Machine Learning studio.
+description: Learn how to set up Automated ML training jobs for tabular data without a single line of code by using Automated ML in Azure Machine Learning studio.
Previously updated : 07/20/2023 Last updated : 07/15/2024 -
-# Set up no-code AutoML training for tabular data with the studio UI
+#customer intent: As a developer, I want to use Automated ML in Azure Machine Learning studio so that I can set up machine learning training jobs without writing any code.
+
-In this article, you learn how to set up AutoML training jobs without a single line of code using Azure Machine Learning automated ML in the [Azure Machine Learning studio](overview-what-is-azure-machine-learning.md#studio).
+# Set up no-code Automated ML training for tabular data with the studio UI
-Automated machine learning, AutoML, is a process in which the best machine learning algorithm to use for your specific data is selected for you. This process enables you to generate machine learning models quickly. [Learn more about how Azure Machine Learning implements automated machine learning](concept-automated-ml.md).
-
-For an end to end example, try the [Tutorial: AutoML- train no-code classification models](tutorial-first-experiment-automated-ml.md).
+In this article, you set up automated machine learning training jobs by using Azure Machine Learning Automated ML in [Azure Machine Learning studio](overview-what-is-azure-machine-learning.md#studio). This approach lets you set up the job without writing a single line of code. Automated ML is a process where Azure Machine Learning selects the best machine learning algorithm for your specific data. The process enables you to generate machine learning models quickly. For more information, see the [Overview of the Automated ML process](concept-automated-ml.md).
-For a Python code-based experience, [configure your automated machine learning experiments](how-to-configure-auto-train.md) with the Azure Machine Learning SDK.
+This tutorial provides a high-level overview for working with Automated ML in the studio. The following articles provide detailed instructions for working with specific machine learning models:
+- **Classification**: [Tutorial: Train a classification model with Automated ML in the studio](tutorial-first-experiment-automated-ml.md)
+- **Time series forecasting**: [Tutorial: Forecast demand with Automated ML in the studio](tutorial-automated-ml-forecast.md)
+- **Natural Language Processing (NLP)**: [Set up Automated ML to train an NLP model (Azure CLI or Python SDK)](how-to-auto-train-nlp-models.md)
+- **Computer vision**: [Set up AutoML to train computer vision models (Azure CLI or Python SDK)](how-to-auto-train-image-models.md)
+- **Regression**: [Train a regression model with Automated ML (Python SDK)](./v1/how-to-auto-train-models-v1.md)
## Prerequisites
-* An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
+- An Azure subscription. You can create a [free or paid account](https://azure.microsoft.com/free/) for Azure Machine Learning.
-* An Azure Machine Learning workspace. See [Create workspace resources](quickstart-create-resources.md).
+- An Azure Machine Learning workspace or compute instance. To prepare these resources, see [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md).
-## Get started
+- The data asset to use for the Automated ML training job. This tutorial describes how to select an existing data asset or create a data asset from a data source, such as a local file, web URL, or datastore. For more information, see [Create and manage data assets](how-to-create-data-assets.md).
-1. Sign in to [Azure Machine Learning studio](https://ml.azure.com).
+ > [!IMPORTANT]
+ > There are two requirements for the training data:
+ > - The data must be in tabular form.
+ > - The value to predict (the _target_ column) must be present in the data.
-1. Select your subscription and workspace.
+<a name="create-and-run-experiment"></a>
-1. Navigate to the left pane. Select **Automated ML** under the **Authoring** section.
+## Create experiment
-[![Azure Machine Learning studio navigation pane](media/how-to-use-automated-ml-for-ml-models/nav-pane.png)](media/how-to-use-automated-ml-for-ml-models/nav-pane-expanded.png#lightbox)
+Create and run an experiment by following these steps:
- If this is your first time doing any experiments, you see an empty list and links to documentation.
+1. Sign in to [Azure Machine Learning studio](https://ml.azure.com), and select your subscription and workspace.
-Otherwise, you see a list of your recent automated ML experiments, including those created with the SDK.
+1. On the left menu, select **Automated ML** under the **Authoring** section:
-## Create and run experiment
+ :::image type="content" source="media/how-to-use-automated-ml-for-ml-models/automated-ml-overview.png" border="false" alt-text="Screenshot that shows the Authoring overview page for Automated ML in Azure Machine Learning studio." lightbox="media/how-to-use-automated-ml-for-ml-models/automated-ml-overview-large.png":::
-1. Select **+ New automated ML job** and populate the form.
+ The first time you work with experiments in the studio, you see an empty list and links to documentation. Otherwise, you see a list of your recent Automated ML experiments, including items created with the Azure Machine Learning SDK.
-1. Select a data asset from your storage container, or create a new data asset. Data asset can be created from local files, web urls, datastores, or Azure open datasets. Learn more about [data asset creation](how-to-create-data-assets.md).
+1. Select **New automated ML job** to start the **Submit an Automated ML job** process.
- >[!Important]
- > Requirements for training data:
- >* Data must be in tabular form.
- >* The value you want to predict (target column) must be present in the data.
+ By default, the process selects the **Train automatically** option on the **Training method** tab and continues to the configuration settings.
- 1. To create a new dataset from a file on your local computer, select **+Create dataset** and then select **From local file**.
+1. On the **Basics settings** tab, enter values for the required settings, including the **Job** name and **Experiment** name. You can also provide values for the optional settings, as desired.
- 1. Select **Next** to open the **Datastore and file selection form**. , you select where to upload your dataset; the default storage container that's automatically created with your workspace, or choose a storage container that you want to use for the experiment.
-
- 1. If your data is behind a virtual network, you need to enable the **skip the validation** function to ensure that the workspace can access your data. For more information, see [Use Azure Machine Learning studio in an Azure virtual network](how-to-enable-studio-virtual-network.md).
-
- 1. Select **Browse** to upload the data file for your dataset.
+1. Select **Next** to continue.
- 1. Review the **Settings and preview** form for accuracy. The form is intelligently populated based on the file type.
+### Identify data asset
- Field| Description
- -|-
- File format| Defines the layout and type of data stored in a file.
- Delimiter| One or more characters for specifying the boundary between separate, independent regions in plain text or other data streams.
- Encoding| Identifies what bit to character schema table to use to read your dataset.
- Column headers| Indicates how the headers of the dataset, if any, will be treated.
- Skip rows | Indicates how many, if any, rows are skipped in the dataset.
-
- Select **Next**.
+On the **Task type & data** tab, you specify the data asset for the experiment and the machine learning model to use to train the data.
- 1. The **Schema** form is intelligently populated based on the selections in the **Settings and preview** form. Here configure the data type for each column, review the column names, and select which columns to **Not include** for your experiment.
-
- Select **Next.**
+In this tutorial, you can use an existing data asset, or create a new data asset from a file on your local computer. The studio UI pages change based on your selection for the data source and type of training model.
- 1. The **Confirm details** form is a summary of the information previously populated in the **Basic info** and **Settings and preview** forms. You also have the option to create a data profile for your dataset using a profiling enabled compute.
+If you choose to use an existing data asset, you can continue to the [Configure training model](#configure-training-model) section.
- Select **Next**.
-1. Select your newly created dataset once it appears. You're also able to view a preview of the dataset and sample statistics.
+To create a new data asset, follow these steps:
-1. On the **Configure job** form, select **Create new** and enter **Tutorial-automl-deploy** for the experiment name.
+1. To create a new data asset from a file on your local computer, select **Create**.
-1. Select a target column; this is the column that you would like to do predictions on.
+1. On the **Data type** page:
-1. Select a compute type for the data profiling and training job. You can select a [compute cluster](concept-compute-target.md#azure-machine-learning-compute-managed) or [compute instance](concept-compute-instance.md).
-
-1. Select a compute from the dropdown list of your existing computes. To create a new compute, follow the instructions in step 8.
-
-1. Select **Create a new compute** to configure your compute context for this experiment.
-
- Field|Description
- |
- Compute name| Enter a unique name that identifies your compute context.
- Virtual machine priority| Low priority virtual machines are cheaper but don't guarantee the compute nodes.
- Virtual machine type| Select CPU or GPU for virtual machine type.
- Virtual machine size| Select the virtual machine size for your compute.
- Min / Max nodes| To profile data, you must specify one or more nodes. Enter the maximum number of nodes for your compute. The default is six nodes for an Azure Machine Learning Compute.
- Advanced settings | These settings allow you to configure a user account and existing virtual network for your experiment.
-
- Select **Create**. Creation of a new compute can take a few minutes.
+ 1. Enter a **Data asset** name.
+ 1. For the **Type**, select **Tabular** from the dropdown list.
+ 1. Select **Next**.
- Select **Next**.
+1. On the **Data source** page, select **From local files**.
-1. On the **Task type and settings** form, select the task type: classification, regression, or forecasting. See [supported task types](concept-automated-ml.md#when-to-use-automl-classification-regression-forecasting-computer-vision--nlp) for more information.
+ Machine Learning studio adds extra options to the left menu for you to configure the data source.
- 1. For **classification**, you can also enable deep learning.
+1. Select **Next** to continue to the **Destination storage type** page, where you specify the Azure Storage location to upload your data asset.
- 1. For **forecasting** you can,
-
- 1. Enable deep learning.
+ You can specify the default storage container automatically created with your workspace, or choose a Storage container to use for the experiment.
+
+ 1. For the **Datastore type**, select **Azure Blob Storage**.
+ 1. In the list of datastores, select _workspaceblobstore_.
+ 1. Select **Next**.
+
+1. On the **File and folder selection** page, use the **Upload files or folder** dropdown menu and select the **Upload files** or **Upload folder** option.
- 1. Select *time column*: This column contains the time data to be used.
+ 1. Browse to the location of the data to upload and select **Open**.
+ 1. After the files upload, select **Next**.
- 1. Select *forecast horizon*: Indicate how many time units (minutes/hours/days/weeks/months/years) will the model be able to predict to the future. The further into the future the model is required to predict, the less accurate the model becomes. [Learn more about forecasting and forecast horizon](how-to-auto-train-forecast.md).
+ Machine Learning studio validates and uploads your data.
-1. (Optional) View addition configuration settings: additional settings you can use to better control the training job. Otherwise, defaults are applied based on experiment selection and data.
+ > [!NOTE]
+ > If your data is behind a virtual network, you need to enable the **Skip the validation** function to ensure the workspace can access your data. For more information, see [Use Azure Machine Learning studio in an Azure virtual network](how-to-enable-studio-virtual-network.md).
- Additional configurations|Description
- |
- Primary metric| Main metric used for scoring your model. [Learn more about model metrics](how-to-configure-auto-train.md#primary-metric).
- Enable ensemble stacking | Ensemble learning improves machine learning results and predictive performance by combining multiple models as opposed to using single models. [Learn more about ensemble models](concept-automated-ml.md#ensemble).
- Blocked models| Select models you want to exclude from the training job. <br><br> Allowing models is only available for [SDK experiments](how-to-configure-auto-train.md#supported-algorithms). <br> See the [supported algorithms for each task type](/python/api/azureml-automl-core/azureml.automl.core.shared.constants.supportedmodels).
- Explain best model| Automatically shows explainability on the best model created by Automated ML.
- Positive class label| Label that Automated ML will use to calculate binary metrics.
-
+1. Check your uploaded data on the **Settings** page for accuracy. The fields on the page are prepopulated based on the file type of your data:
+
+ | Field | Description |
+ | | |
+ | **File format** | Defines the layout and type of data stored in a file. |
+ | **Delimiter** | Identifies one or more characters for specifying the boundary between separate, independent regions in plain text or other data streams. |
+ | **Encoding** | Identifies the bit-to-character schema table to use to read your dataset. |
+ | **Column headers** | Indicates how the headers of the dataset, if any, are treated. |
+ | **Skip rows** | Indicates how many, if any, rows are skipped in the dataset. |
+
+1. Select **Next** to continue to the **Schema** page. This page is also prepopulated based on your **Settings** selections. You can configure the data type for each column, review the column names, and manage columns:
+
+ - To change the data type for a column, use the **Type** dropdown menu to select an option.
+ - To exclude a column from the data asset, toggle the **Include** option for the column.
+
+1. Select **Next** to continue to the **Review** page. Review the summary of your configuration settings for the job, and then select **Create**.
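If you prefer to script this step, you can register an equivalent tabular data asset with the Azure Machine Learning Python SDK v2 (`azure-ai-ml` package). The following minimal sketch assumes a local folder that contains an MLTable definition plus the underlying data files; the workspace identifiers, asset name, and path are placeholders, not values from this article.

```python
# Sketch: register a tabular (MLTable) training data asset with the SDK v2.
from azure.ai.ml import MLClient
from azure.ai.ml.entities import Data
from azure.ai.ml.constants import AssetTypes
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# The folder holds an MLTable file that points at the data files to train on.
training_data = Data(
    name="bankmarketing-train",
    description="Training data for an Automated ML experiment",
    path="./data/training-mltable-folder",
    type=AssetTypes.MLTABLE,
)
ml_client.data.create_or_update(training_data)
```

A data asset registered this way appears in the same studio list as one created through the preceding steps.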
+
+### Configure training model
+
+When the data asset is ready, Machine Learning studio returns to the **Task type & data** tab for the **Submit an Automated ML job** process. The new data asset is listed on the page.
+
+Follow these steps to complete the job configuration:
+
+1. Expand the **Select task type** dropdown menu, and choose the training model to use for the experiment. The options include classification, regression, time series forecasting, natural language processing (NLP), or computer vision. For more information about these options, see the descriptions of the [supported task types](concept-automated-ml.md#when-to-use-automl-classification-regression-forecasting-computer-vision--nlp).
+
+1. After you specify the training model, select your dataset in the list.
+
+1. Select **Next** to continue to the **Task settings** tab.
+
+1. In the **Target column** dropdown list, select the column to use for the model predictions.
+
+1. Depending on your training model, configure the following required settings:
+
+ - **Classification**: Choose whether to **Enable deep learning**.
+
+ - **Time series forecasting**: Choose whether to **Enable deep learning**, and confirm your preferences for the required settings:
+
+ - Use the **Time column** to specify the time data to use in the model.
+
+ - Choose whether to enable one or more **Autodetect** options. When you deselect an **Autodetect** option, such as **Autodetect forecast horizon**, you can specify an explicit value. The **Forecast horizon** value indicates how many time units (minutes/hours/days/weeks/months/years) the model can predict into the future. The further into the future the model is required to predict, the less accurate the model becomes.
+
+ For more information about how to configure these settings, see [Use Automated ML to train a time-series forecasting model](how-to-auto-train-forecast.md).
+
+ - **Natural language processing**: Confirm your preferences for the required settings:
+
+ - Use the **Select sub type** option to configure the sub classification type for the NLP model. You can choose from Multi-class classification, Multi-label classification, and Named entity recognition (NER).
+
+ - In the **Sweep settings** section, provide values for the **Slack factor** and **Sampling algorithm**.
+
+ - In the **Search space** section, configure the set of **Model algorithm** options.
+
+ For more information about how to configure these settings, see [Set up Automated ML to train an NLP model (Azure CLI or Python SDK)](how-to-auto-train-nlp-models.md).
+
+ - **Computer vision**: Choose whether to enable **Manual sweeping**, and confirm your preferences for the required settings:
+
+ - Use the **Select sub type** option to configure the sub classification type for the computer vision model. You can choose from Image classification (Multi-class) or (Multi-label), Object detection, and Polygon (instance segmentation).
+
+ For more information about how to configure these settings, see [Set up AutoML to train computer vision models (Azure CLI or Python SDK)](how-to-auto-train-image-models.md).
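The task-specific settings in the preceding step also exist as parameters on the SDK v2 Automated ML job factories. As a hedged example for time series forecasting, the following sketch shows rough equivalents of the **Time column** and **Forecast horizon** settings; the data asset name, target column, and time column are illustrative assumptions, and parameter names can vary between SDK versions.

```python
# Sketch: forecasting task settings expressed with the SDK v2 job factory.
from azure.ai.ml import automl, Input
from azure.ai.ml.constants import AssetTypes

forecasting_job = automl.forecasting(
    experiment_name="automl-forecasting-example",
    training_data=Input(type=AssetTypes.MLTABLE, path="azureml:energy-train:1"),
    target_column_name="demand",  # the column the model predicts
)

# Rough equivalents of the studio Time column and Forecast horizon settings.
forecasting_job.set_forecast_settings(
    time_column_name="timestamp",
    forecast_horizon=24,  # predict 24 time units ahead
)
```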
+
+### Specify optional settings
+
+Machine Learning studio provides optional settings that you can configure based on your machine learning model selection. The following sections describe the extra settings.
+
+#### Configure additional settings
+
+You can select the **View additional configuration settings** option to configure extra settings that control the training job.
+
+The **Additional configuration** page shows default values based on your experiment selection and data. You can use the default values or configure the following settings:
+
+| Setting | Description |
+| | |
+| **Primary metric** | Identify the main metric for scoring your model. For more information, see [model metrics](how-to-configure-auto-train.md#primary-metric). |
+| **Enable ensemble stacking** | Allow ensemble learning and improve machine learning results and predictive performance by combining multiple models as opposed to using single models. For more information, see [ensemble models](concept-automated-ml.md#ensemble). |
+| **Use all supported models** | Use this option to instruct Automated ML whether to use all supported models in the experiment. For more information, see the [supported algorithms for each task type](/python/api/azureml-automl-core/azureml.automl.core.shared.constants.supportedmodels). <br> - Select this option to configure the **Blocked models** setting. <br> - Deselect this option to configure the **Allowed models** setting. |
+| **Blocked models** | (Available when **Use all supported models** is selected) Use the dropdown list and select the models to exclude from the training job. |
+| **Allowed models** | (Available when **Use all supported models** isn't selected) Use the dropdown list and select the models to use for the training job. <br> **Important**: Available only for [SDK experiments](how-to-configure-auto-train.md#supported-algorithms). |
+| **Explain best model** | Choose this option to automatically show explainability on the best model created by Automated ML. |
+| **Positive class label** | Enter the label for Automated ML to use for calculating binary metrics. |
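Several of these settings map to parameters on an SDK v2 Automated ML job. The following sketch is a rough classification-task equivalent; the data asset path, target column, and the blocked algorithm name are assumptions, and keyword names can differ between SDK versions.

```python
# Sketch: additional configuration settings for a classification job (SDK v2).
from azure.ai.ml import automl, Input
from azure.ai.ml.constants import AssetTypes

classification_job = automl.classification(
    experiment_name="automl-classification-example",
    training_data=Input(type=AssetTypes.MLTABLE, path="azureml:bankmarketing-train:1"),
    target_column_name="y",
    primary_metric="accuracy",         # Primary metric
    enable_model_explainability=True,  # Explain best model
)

classification_job.set_training(
    enable_stack_ensemble=True,                # Enable ensemble stacking
    blocked_training_algorithms=["LightGBM"],  # Blocked models (example value)
)
```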
+
+<a name="customize-featurization"></a>
+
+#### Configure featurization settings
+
+You can select the **View featurization settings** option to see actions to perform on the data in preparation for training.
+
+The **Featurization** page shows default featurization techniques for your data columns. You can enable/disable automatic featurization and customize the automatic featurization settings for your experiment.
+
-1. (Optional) View featurization settings: if you choose to enable **Automatic featurization** in the **Additional configuration settings** form, default featurization techniques are applied. In the **View featurization settings**, you can change these defaults and customize accordingly. Learn how to [customize featurizations](#customize-featurization).
+1. Select the **Enable featurization** option to allow configuration.
+
+ > [!IMPORTANT]
+ > When your data contains non-numeric columns, featurization is always enabled.
+
+1. Configure each available column, as desired. The following table summarizes the customizations currently available via the studio.
+
+ | Column | Customization |
+ | | |
+ | **Feature type** | Change the value type for the selected column. |
+ | **Impute with** | Select the value to use to impute missing values in your data. |
+
+ :::image type="content" source="media/how-to-use-automated-ml-for-ml-models/updated-featurization.png" alt-text="Screenshot that shows custom featurization in the Azure Machine Learning studio." lightbox="media/how-to-use-automated-ml-for-ml-models/updated-featurization.png":::
+
+The featurization settings don't affect the input data needed for inferencing. If you exclude columns from training, the excluded columns are still required as input for inferencing on the model.
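In the SDK v2, the same featurization switch is available on the job object. The following minimal sketch only toggles the featurization mode; column-level overrides such as the feature type and impute value are also configurable, but their keyword names vary by SDK version, so they're omitted here. The data asset path and target column are assumptions.

```python
# Sketch: enable or disable automatic featurization on an SDK v2 AutoML job.
from azure.ai.ml import automl, Input
from azure.ai.ml.constants import AssetTypes

job = automl.classification(
    training_data=Input(type=AssetTypes.MLTABLE, path="azureml:bankmarketing-train:1"),
    target_column_name="y",
)

# "auto" applies the default featurization techniques; "off" disables them.
job.set_featurization(mode="auto")
```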
+
+#### Configure limits for the job
- ![Screenshot shows the Select task type dialog box with View featurization settings called out.](media/how-to-use-automated-ml-for-ml-models/view-featurization.png)
+The **Limits** section provides configuration options for the following settings:
-1. The **[Optional] Limits** form allows you to do the following.
+| Setting | Description | Value |
+| | | |
+| **Max trials** | Specify the maximum number of trials to try during the Automated ML job, where each trial has a different combination of algorithm and hyperparameters. | Integer between 1 and 1,000 |
+| **Max concurrent trials** | Specify the maximum number of trial jobs that can be executed in parallel. | Integer between 1 and 1,000 |
+| **Max nodes** | Specify the maximum number of nodes this job can use from the selected compute target. | 1 or more, depending on the compute configuration |
+| **Metric score threshold** | Enter the iteration metric threshold value. When the iteration reaches the threshold, the training job terminates. Keep in mind that meaningful models have a correlation greater than zero. Otherwise, the result is the same as guessing. | Average metric threshold, between bounds [0, 10] |
+| **Experiment timeout (minutes)** | Specify the maximum time the entire experiment can run. After the experiment reaches the limit, the system cancels the Automated ML job, including all its trials (children jobs). | Number of minutes |
+| **Iteration timeout (minutes)** | Specify the maximum time each trial job can run. After the trial job reaches this limit, the system cancels the trial. | Number of minutes |
+| **Enable early termination** | Use this option to end the job when the score isn't improving in the short term. | Select the option to enable early end of job |
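These limits correspond approximately to the `set_limits` parameters on an SDK v2 Automated ML job. The following sketch uses example values only; the data asset path and target column are assumptions.

```python
# Sketch: limit settings on an SDK v2 AutoML job.
from azure.ai.ml import automl, Input
from azure.ai.ml.constants import AssetTypes

job = automl.classification(
    training_data=Input(type=AssetTypes.MLTABLE, path="azureml:bankmarketing-train:1"),
    target_column_name="y",
)

job.set_limits(
    max_trials=40,                  # Max trials
    max_concurrent_trials=4,        # Max concurrent trials
    timeout_minutes=180,            # Experiment timeout
    trial_timeout_minutes=20,       # Iteration (trial) timeout
    enable_early_termination=True,  # Enable early termination
)
```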
- | Option | Description |
- ||--|
- |**Max trials**| Maximum number of trials, each with different combination of algorithm and hyperparameters to try during the AutoML job. Must be an integer between 1 and 1000.
- |**Max concurrent trials**| Maximum number of trial jobs that can be executed in parallel. Must be an integer between 1 and 1000.
- |**Max nodes**| Maximum number of nodes this job can use from selected compute target.
- |**Metric score threshold**| When this threshold value will be reached for an iteration metric the training job will terminate. Keep in mind that meaningful models have correlation > 0, otherwise they are as good as guessing the average Metric threshold should be between bounds [0, 10].
- |**Experiment timeout (minutes)**| Maximum time in minutes the entire experiment is allowed to run. Once this limit is reached the system will cancel the AutoML job, including all its trials (children jobs).
- |**Iteration timeout (minutes)**| Maximum time in minutes each trial job is allowed to run. Once this limit is reached the system will cancel the trial.
- |**Enable early termination**| Select to end the job if the score is not improving in the short term.
+### Validate and test
-1. The **[Optional] Validate and test** form allows you to do the following.
+The **Validate and test** section provides the following configuration options:
-a. Specify the type of validation to be used for your training job. If you do not explicitly specify either a `validation_data` or `n_cross_validations` parameter, automated ML applies default techniques depending on the number of rows provided in the single dataset `training_data`.
+1. Specify the **Validation type** to use for your training job. If you don't explicitly specify either a `validation_data` or `n_cross_validations` parameter, Automated ML applies default techniques depending on the number of rows provided in the single dataset `training_data`.
-| Training data size | Validation technique |
-||--|
-|**Larger than 20,000 rows**| Train/validation data split is applied. The default is to take 10% of the initial training data set as the validation set. In turn, that validation set is used for metrics calculation.
-|**Smaller than 20,000 rows**| Cross-validation approach is applied. The default number of folds depends on the number of rows. <br> **If the dataset is less than 1,000 rows**, 10 folds are used. <br> **If the rows are between 1,000 and 20,000**, then three folds are used.
+ | Training data size | Validation technique |
+ | | |
+ | **Larger than 20,000 rows** | Train/validation data split is applied. The default is to take 10% of the initial training data set as the validation set. In turn, that validation set is used for metrics calculation. |
+ | **Smaller than 20,000 rows** | Cross-validation approach is applied. The default number of folds depends on the number of rows. <br> - **Dataset with less than 1,000 rows**: 10 folds are used <br> - **Dataset with 1,000 to 20,000 rows**: Three folds are used |
-b. Provide a test dataset (preview) to evaluate the recommended model that automated ML generates for you at the end of your experiment. When you provide test data, a test job is automatically triggered at the end of your experiment. This test job is only job on the best model that is recommended by automated ML. Learn how to get the [results of the remote test job](#view-remote-test-job-results-preview).
+1. Provide the **Test data** (preview) to evaluate the recommended model that Automated ML generates at the end of your experiment. When you provide a test dataset, a test job is automatically triggered at the end of your experiment. This test job runs only on the best model that Automated ML recommends. For more information, see [View remote test job results (preview)](#view-remote-test-job-results-preview).
->[!IMPORTANT]
-> Providing a test dataset to evaluate generated models is a preview feature. This capability is an [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview feature, and may change at any time.
- * Test data is considered a separate from training and validation, so as to not bias the results of the test job of the recommended model. [Learn more about bias during model validation](concept-automated-ml.md#training-validation-and-test-data).
- * You can either provide your own test dataset or opt to use a percentage of your training dataset. Test data must be in the form of an [Azure Machine Learning TabularDataset](how-to-create-data-assets.md#create-data-assets).
- * The schema of the test dataset should match the training dataset. The target column is optional, but if no target column is indicated no test metrics are calculated.
- * The test dataset shouldn't be the same as the training dataset or the validation dataset.
- * Forecasting jobs don't support train/test split.
-
-![Screenshot shows the form where to select validation data and test data](media/how-to-use-automated-ml-for-ml-models/validate-and-test.png)
-
-## Customize featurization
+ > [!IMPORTANT]
+ > Providing a test dataset to evaluate generated models is a preview feature. This capability is an [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview feature, and can change at any time.
+
+ - Test data is considered separate from training and validation, and it shouldn't bias the results of the test job of the recommended model. For more information, see [Training, validation, and test data](concept-automated-ml.md#training-validation-and-test-data).
+
+ - You can either provide your own test dataset or opt to use a percentage of your training dataset. Test data must be in the form of an [Azure Machine Learning table dataset](how-to-create-data-assets.md#create-data-assets).
+
+ - The schema of the test dataset should match the training dataset. The target column is optional, but if no target column is indicated, no test metrics are calculated.
+
+ - The test dataset shouldn't be the same as the training dataset or the validation dataset.
+
+ - **Forecasting** jobs don't support train/test split.
+
+ :::image type="content" source="media/how-to-use-automated-ml-for-ml-models/validate-and-test.png" alt-text="Screenshot that shows how to select validation data and test data in the studio.":::
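The validation and test options described above correspond to parameters such as `n_cross_validations`, `validation_data`, and `test_data_size` on the SDK v2 job factories. The following sketch shows one possible combination; the asset path, target column, and split fraction are assumptions.

```python
# Sketch: validation and test data options on an SDK v2 AutoML job factory.
from azure.ai.ml import automl, Input
from azure.ai.ml.constants import AssetTypes

job = automl.classification(
    training_data=Input(type=AssetTypes.MLTABLE, path="azureml:bankmarketing-train:1"),
    target_column_name="y",
    n_cross_validations=5,  # or pass validation_data=Input(...) instead
    test_data_size=0.2,     # hold out 20% of the data for the test job (preview)
)
```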
-In the **Featurization** form, you can enable/disable automatic featurization and customize the automatic featurization settings for your experiment. To open this form, see step 10 in the [Create and run experiment](#create-and-run-experiment) section.
+### Configure the compute
-The following table summarizes the customizations currently available via the studio.
+Follow these steps to configure the compute:
-Column| Customization
-|
-Feature type| Change the value type for the selected column.
-Impute with| Select what value to impute missing values with in your data.
+1. Select **Next** to continue to the **Compute** tab.
-![Screenshot showing Azure Machine Learning studio custom featurization.](media/how-to-use-automated-ml-for-ml-models/updated-featurization.png)
+1. Use the **Select compute type** dropdown list to choose an option for the data profiling and training job. The options include [compute cluster](concept-compute-target.md#azure-machine-learning-compute-managed), [compute instance](concept-compute-instance.md), or [serverless](how-to-use-serverless-compute.md).
+
+1. After you select the compute type, the rest of the page changes based on your selection:
+
+ - **Serverless**: The configuration settings display on the current page. Continue to the next step for descriptions of the settings to configure.
+
+ - **Compute cluster** or **Compute instance**: Choose from the following options:
+
+ - Use the **Select Automated ML compute** dropdown list to select an existing compute for your workspace, and then select **Next**. Continue to the [Run experiment and view results](#run-experiment-and-view-results) section.
+
+ - Select **New** to create a new compute instance or cluster. This option opens the **Create compute** page. Continue to the next step for descriptions of the settings to configure.
+
+1. For a serverless compute or a new compute, configure any required (**\***) settings:
+
+ The configuration settings differ depending on your compute type. The following table summarizes the various settings you might need to configure:
+
+ | Field | Description |
+ | | |
+ | **Compute name** | Enter a unique name that identifies your compute context. |
+ | **Location** | Specify the region for the machine. |
+ | **Virtual machine priority** | Low priority virtual machines are cheaper but don't guarantee the compute nodes. |
+ | **Virtual machine type** | Select CPU or GPU for virtual machine type. |
+ | **Virtual machine tier** | Select the priority for your experiment. |
+ | **Virtual machine size** | Select the virtual machine size for your compute. |
+ | **Min / Max nodes** | To profile data, you must specify one or more nodes. Enter the maximum number of nodes for your compute. The default is six nodes for an Azure Machine Learning Compute. |
+ | **Idle seconds before scale down** | Specify the idle time before the cluster is automatically scaled down to the minimum node count. |
+ | **Advanced settings** | These settings allow you to configure a user account and existing virtual network for your experiment. |
+
+1. After you configure the required settings, select **Next** or **Create**, as appropriate.
+
+ Creation of a new compute can take a few minutes. When creation completes, select **Next**.
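If you'd rather create the compute cluster programmatically, the following SDK v2 sketch mirrors the fields in the preceding table. The workspace identifiers, cluster name, VM size, and node counts are example values, not recommendations.

```python
# Sketch: create a compute cluster with the SDK v2 (azure-ai-ml).
from azure.ai.ml import MLClient
from azure.ai.ml.entities import AmlCompute
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

cluster = AmlCompute(
    name="automl-cluster",            # Compute name
    size="Standard_DS3_v2",           # Virtual machine size
    tier="Dedicated",                 # Virtual machine tier/priority
    min_instances=0,                  # Min nodes
    max_instances=6,                  # Max nodes
    idle_time_before_scale_down=120,  # Idle seconds before scale down
)
ml_client.compute.begin_create_or_update(cluster).result()
```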
## Run experiment and view results
-Select **Finish** to run your experiment. The experiment preparing process can take up to 10 minutes. Training jobs can take an additional 2-3 minutes more for each pipeline to finish running. If you have specified to generate RAI dashboard for the best recommended model, it may take up to 40 minutes.
+Select **Finish** to run your experiment. The experiment preparation process can take up to 10 minutes. Training jobs can take an additional 2-3 minutes for each pipeline to finish running. If you specified to generate an RAI dashboard for the best recommended model, the job can take up to 40 minutes.
> [!NOTE]
-> The algorithms automated ML employs have inherent randomness that can cause slight variation in a recommended model's final metrics score, like accuracy. Automated ML also performs operations on data such as train-test split, train-validation split or cross-validation when necessary. So if you run an experiment with the same configuration settings and primary metric multiple times, you'll likely see variation in each experiments final metrics score due to these factors.
+> The algorithms Automated ML employs have inherent randomness that can cause slight variation in a recommended model's final metrics score, like accuracy. Automated ML also performs operations on data such as train-test split, train-validation split, or cross-validation, as necessary. If you run an experiment with the same configuration settings and primary metric multiple times, you're likely to see variation in each experiment's final metrics score due to these factors.
### View experiment details The **Job Detail** screen opens to the **Details** tab. This screen shows you a summary of the experiment job including a status bar at the top next to the job number.
-The **Models** tab contains a list of the models created ordered by the metric score. By default, the model that scores the highest based on the chosen metric is at the top of the list. As the training job tries out more models, they're added to the list. Use this to get a quick comparison of the metrics for the models produced so far.
+The **Models** tab contains a list of the models created, ordered by the metric score. By default, the model that scores the highest based on the chosen metric is at the top of the list. As the training job tries more models, they're added to the list. Use this approach to get a quick comparison of the metrics for the models produced so far.
### View training job details
-Drill down on any of the completed models to see training job details.
+Drill down on any of the completed models for the training job details. You can see performance metric charts for specific models on the **Metrics** tab. For more information, see [Evaluate automated machine learning experiment results](how-to-understand-automated-ml.md). On this page, you can also find details on all the properties of the model along with associated code, child jobs, and images.
+
+## View remote test job results (preview)
-You can see model specific performance metric charts on the **Metrics** tab. [Learn more about charts](how-to-understand-automated-ml.md).
+If you specified a test dataset or opted for a train/test split during your experiment setup on the **Validate and test** form, Automated ML automatically tests the recommended model by default. As a result, Automated ML calculates test metrics to determine the quality of the recommended model and its predictions.
-This is also where you can find details on all the properties of the model along with associated code, child jobs, and images.
+> [!IMPORTANT]
+> Testing your models with a test dataset to evaluate generated models is a preview feature. This capability is an [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview feature, and can change at any time.
+>
+> This feature isn't available for the following Automated ML scenarios:
+> - [Computer vision tasks](how-to-auto-train-image-models.md)
+> - [Many models and hierarchical time-series forecasting training (preview)](how-to-auto-train-forecast.md)
+> - [Forecasting tasks where deep learning neural networks (DNN) are enabled](how-to-auto-train-forecast.md#enable-deep-learning)
+> - [Automated ML jobs from local computes or Azure Databricks clusters](how-to-configure-auto-train.md#compute-to-run-experiment)
-## View remote test job results (preview)
+Follow these steps to view the test job metrics of the recommended model:
+
+1. In the studio, browse to the **Models** page, and select the best model.
-If you specified a test dataset or opted for a train/test split during your experiment setup--on the **Validate and test** form, automated ML automatically tests the recommended model by default. As a result, automated ML calculates test metrics to determine the quality of the recommended model and its predictions.
+1. Select the **Test results (preview)** tab.
->[!IMPORTANT]
-> Testing your models with a test dataset to evaluate generated models is a preview feature. This capability is an [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview feature, and may change at any time.
+1. Select the job you want, and view the **Metrics** tab:
-> [!WARNING]
-> This feature is not available for the following automated ML scenarios
-> * [Computer vision tasks](how-to-auto-train-image-models.md)
-> * [Many models and hiearchical time series forecasting training (preview)](how-to-auto-train-forecast.md)
-> * [Forecasting tasks where deep learning neural networks (DNN) are enabled](how-to-auto-train-forecast.md#enable-deep-learning)
-> * [Automated ML jobs from local computes or Azure Databricks clusters](how-to-configure-auto-train.md#compute-to-run-experiment)
+ :::image type="content" source="./media/how-to-use-automated-ml-for-ml-models/test-best-model-results.png" alt-text="Screenshot that shows the test results tab for the automatically tested, recommended model.":::
-To view the test job metrics of the recommended model,
-
-1. Navigate to the **Models** page, select the best model.
-1. Select the **Test results (preview)** tab.
-1. Select the job you want, and view the **Metrics** tab.
- ![Test results tab of automatically tested, recommended model](./media/how-to-use-automated-ml-for-ml-models/test-best-model-results.png)
-
-To view the test predictions used to calculate the test metrics,
+View the test predictions used to calculate the test metrics by following these steps:
+
+1. At the bottom of the page, select the link under **Outputs dataset** to open the dataset.
-1. Navigate to the bottom of the page and select the link under **Outputs dataset** to open the dataset.
1. On the **Datasets** page, select the **Explore** tab to view the predictions from the test job.
- 1. Alternatively, the prediction file can also be viewed/downloaded from the **Outputs + logs** tab, expand the **Predictions** folder to locate your `predicted.csv` file.
-Alternatively, the predictions file can also be viewed/downloaded from the Outputs + logs tab, expand Predictions folder to locate your predictions.csv file.
+ You can also view and download the prediction file from the **Outputs + logs** tab. Expand the **Predictions** folder to locate your _predictions.csv_ file.
+
+The model test job generates the _predictions.csv_ file stored in the default datastore created with the workspace. This datastore is visible to all users with the same subscription. Test jobs aren't recommended for scenarios where any of the information used for or created by the test job needs to remain private.
-The model test job generates the predictions.csv file that's stored in the default datastore created with the workspace. This datastore is visible to all users with the same subscription. Test jobs aren't recommended for scenarios if any of the information used for or created by the test job needs to remain private.
+## Test existing Automated ML model (preview)
-## Test an existing automated ML model (preview)
+After your experiment completes, you can test the models Automated ML generates for you.
->[!IMPORTANT]
+> [!IMPORTANT]
> Testing your models with a test dataset to evaluate generated models is a preview feature. This capability is an [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview feature, and may change at any time.
+>
+> This feature isn't available for the following Automated ML scenarios:
+> - [Computer vision tasks](how-to-auto-train-image-models.md)
+> - [Many models and hierarchical time-series forecasting training (preview)](how-to-auto-train-forecast.md)
+> - [Forecasting tasks where deep learning neural networks (DNN) are enabled](how-to-auto-train-forecast.md#enable-deep-learning)
+> - [Automated ML jobs from local computes or Azure Databricks clusters](how-to-configure-auto-train.md#compute-to-run-experiment)
-> [!WARNING]
-> This feature is not available for the following automated ML scenarios
-> * [Computer vision tasks](how-to-auto-train-image-models.md)
-> * [Many models and hiearchical time series forecasting training (preview)](how-to-auto-train-forecast.md)
-> * [Forecasting tasks where deep learning neural networks (DNN) are enabled](how-to-auto-train-forecast.md#enable-deep-learning)
-> * [Automated ML runs from local computes or Azure Databricks clusters](how-to-configure-auto-train.md#compute-to-run-experiment)
+If you want to test a different Automated ML generated model, and not the recommended model, follow these steps:
-After your experiment completes, you can test the model(s) that automated ML generates for you. If you want to test a different automated ML generated model, not the recommended model, you can do so with the following steps.
+1. Select an existing Automated ML experiment job.
-1. Select an existing automated ML experiment job.
-1. Navigate to the **Models** tab of the job and select the completed model you want to test.
-1. On the model **Details** page, select the **Test model(preview)** button to open the **Test model** pane.
-1. On the **Test model** pane, select the compute cluster and a test dataset you want to use for your test job.
-1. Select the **Test** button. The schema of the test dataset should match the training dataset, but the **target column** is optional.
-1. Upon successful creation of model test job, the **Details** page displays a success message. Select the **Test results** tab to see the progress of the job.
+1. Browse to the **Models** tab of the job and select the completed model you want to test.
-1. To view the results of the test job, open the **Details** page and follow the steps in the [view results of the remote test job](#view-remote-test-job-results-preview) section.
+1. On the model **Details** page, select the **Test model (preview)** option to open the **Test model** pane.
- ![Test model form](./media/how-to-use-automated-ml-for-ml-models/test-model-form.png)
-
+1. On the **Test model** pane, select the compute cluster and a test dataset you want to use for your test job.
-## Responsible AI dashboard (preview)
+1. Select the **Test** option. The schema of the test dataset should match the training dataset, but the **Target column** is optional.
-To better understand your model, you can see various insights about your model using the Responsible Ai dashboard. It allows you to evaluate and debug your best Automated machine learning model. The Responsible AI dashboard will evaluate model errors and fairness issues, diagnose why those errors are happening by evaluating your train and/or test data, and observing model explanations. Together, these insights could help you build trust with your model and pass the audit processes. Responsible AI dashboards can't be generated for an existing Automated machine learning model. It is only created for the best recommended model when a new AutoML job is created. Users should continue to just use Model Explanations (preview) until support is provided for existing models.
+1. After the model test job is created successfully, the **Details** page displays a success message. Select the **Test results** tab to see the progress of the job.
-To generate a Responsible AI dashboard for a particular model,
+1. To view the results of the test job, open the **Details** page and follow the steps in the [View remote test job results (preview)](#view-remote-test-job-results-preview) section.
-1. While submitting an Automated ML job, proceed to the **Task settings** section on the left nav bar and select the **View additional configuration settings** option.
-
-2. In the new form appearing post that selection, select the **Explain best model** checkbox.
+ :::image type="content" source="./media/how-to-use-automated-ml-for-ml-models/test-model-form.png" alt-text="Screenshot that shows the Test model form.":::
+
+## Responsible AI dashboard (preview)
+To better understand your model, you can use the Responsible AI dashboard to see various insights about the model. This UI allows you to evaluate and debug your best Automated ML model. The Responsible AI dashboard evaluates model errors and fairness issues, diagnoses why those errors are happening by evaluating your train and/or test data, and observes model explanations. Together, these insights can help you build trust in your model and pass the audit processes. Responsible AI dashboards can't be generated for an existing Automated ML model. The dashboard is created only for the best recommended model when a new Automated ML job is created. Users should continue to use Model Explanations (preview) until support is provided for existing models.
+Generate a Responsible AI dashboard for a particular model by following these steps:
- ![Screenshot showing the Automated ML job configuration page with Explain best model selected.](media/how-to-use-automated-ml-for-ml-models/best-model-selection-updated.png)
+1. While you submit an Automated ML job, proceed to the **Task settings** section on the left menu and select the **View additional configuration settings** option.
+
+1. On the **Additional configuration** page, select the **Explain best model** option:
-3. Proceed to the **Compute** page of the setup form and choose the **Serverless** option for your compute.
+ :::image type="content" source="media/how-to-use-automated-ml-for-ml-models/best-model-selection-updated.png" alt-text="Screenshot showing the Automated ML job configuration page with Explain best model selected.":::
- ![Serverless compute selection](media/how-to-use-automated-ml-for-ml-models/compute-serverless.png)
+1. Switch to the **Compute** tab, and select the **Serverless** option for your compute:
-4. Once complete, navigate to the Models page of your Automated ML job, which contains a list of your trained models. Select on the **View Responsible AI dashboard** link:
+ :::image type="content" source="media/how-to-use-automated-ml-for-ml-models/compute-serverless.png" alt-text="Screenshot that shows the Serverless compute selection.":::
- ![View dashboard page within an Automated ML job](media/how-to-use-automated-ml-for-ml-models/view-responsible-ai.png)
+1. After the operation completes, browse to the **Models** page of your Automated ML job, which contains a list of your trained models. Select the **View Responsible AI dashboard** link:
-The Responsible AI dashboard appears for that model as shown in this image:
+ :::image type="content" source="media/how-to-use-automated-ml-for-ml-models/view-responsible-ai.png" alt-text="Screenshot that shows the View dashboard page within an Automated ML job." lightbox="media/how-to-use-automated-ml-for-ml-models/view-responsible-ai.png":::
- ![Responsible AI dashboard](media/how-to-use-automated-ml-for-ml-models/responsible-ai-dashboard.png)
+ The Responsible AI dashboard appears for the selected model:
-In the dashboard, you'll find four components activated for your Automated ML's best model:
+ :::image type="content" source="media/how-to-use-automated-ml-for-ml-models/responsible-ai-dashboard.png" alt-text="Screenshot that shows the Responsible AI dashboard." lightbox="media/how-to-use-automated-ml-for-ml-models/responsible-ai-dashboard.png":::
-| Component | What does the component show? | How to read the chart? |
-| - | - | - |
-| [Error Analysis](concept-error-analysis.md) | Use error analysis when you need to: <br> Gain a deep understanding of how model failures are distributed across a dataset and across several input and feature dimensions. <br> Break down the aggregate performance metrics to automatically discover erroneous cohorts in order to inform your targeted mitigation steps. | [Error Analysis Charts](how-to-responsible-ai-dashboard.md) |
-| [Model Overview and Fairness](concept-fairness-ml.md) | Use this component to: <br> Gain a deep understanding of your model performance across different cohorts of data. <br> Understand your model fairness issues by looking at the disparity metrics. These metrics can evaluate and compare model behavior across subgroups identified in terms of sensitive (or nonsensitive) features. | [Model Overview and Fairness Charts](how-to-responsible-ai-dashboard.md#model-overview-and-fairness-metrics) |
-| [Model Explanations](how-to-machine-learning-interpretability.md) | Use the model explanation component to generate human-understandable descriptions of the predictions of a machine learning model by looking at: <br> Global explanations: For example, what features affect the overall behavior of a loan allocation model? <br> Local explanations: For example, why was a customer's loan application approved or rejected? | [Model Explainability Charts](how-to-responsible-ai-dashboard.md#feature-importances-model-explanations) |
-| [Data Analysis](concept-data-analysis.md) | Use data analysis when you need to: <br> Explore your dataset statistics by selecting different filters to slice your data into different dimensions (also known as cohorts). <br> Understand the distribution of your dataset across different cohorts and feature groups. <br> Determine whether your findings related to fairness, error analysis, and causality (derived from other dashboard components) are a result of your dataset's distribution. <br> Decide in which areas to collect more data to mitigate errors that come from representation issues, label noise, feature noise, label bias, and similar factors. | [Data Explorer Charts](how-to-responsible-ai-dashboard.md#data-analysis) |
+ In the dashboard, you see four components activated for your best Automated ML model:
-5. You can further create cohorts (subgroups of data points that share specified characteristics) to focus your analysis of each component on different cohorts. The name of the cohort that's currently applied to the dashboard is always shown at the top left of your dashboard. The default view in your dashboard is your whole dataset, titled "All data" (by default). Learn more about the [global control of your dashboard here.](how-to-responsible-ai-dashboard.md#global-controls)
+ | Component | What does the component show? | How to read the chart? |
+ | | | |
+ | [Error Analysis](concept-error-analysis.md) | Use error analysis when you need to: <br> - Gain a deep understanding of how model failures are distributed across a dataset and across several input and feature dimensions. <br> - Break down the aggregate performance metrics to automatically discover erroneous cohorts in order to inform your targeted mitigation steps. | [Error Analysis Charts](how-to-responsible-ai-dashboard.md) |
+ | [Model Overview and Fairness](concept-fairness-ml.md) | Use this component to: <br> - Gain a deep understanding of your model performance across different cohorts of data. <br> - Understand your model fairness issues by looking at the disparity metrics. These metrics can evaluate and compare model behavior across subgroups identified in terms of sensitive (or nonsensitive) features. | [Model Overview and Fairness Charts](how-to-responsible-ai-dashboard.md#model-overview-and-fairness-metrics) |
+ | [Model Explanations](how-to-machine-learning-interpretability.md) | Use the model explanation component to generate human-understandable descriptions of the predictions of a machine learning model by looking at: <br> - Global explanations: For example, what features affect the overall behavior of a loan allocation model? <br> - Local explanations: For example, why was a customer's loan application approved or rejected? | [Model Explainability Charts](how-to-responsible-ai-dashboard.md#feature-importances-model-explanations) |
+ | [Data Analysis](concept-data-analysis.md) | Use data analysis when you need to: <br> - Explore your dataset statistics by selecting different filters to slice your data into different dimensions (also known as cohorts). <br> - Understand the distribution of your dataset across different cohorts and feature groups. <br> - Determine whether your findings related to fairness, error analysis, and causality (derived from other dashboard components) are a result of your dataset's distribution. <br> - Decide in which areas to collect more data to mitigate errors that come from representation issues, label noise, feature noise, label bias, and similar factors. | [Data Explorer Charts](how-to-responsible-ai-dashboard.md#data-analysis) |
+1. You can further create cohorts (subgroups of data points that share specified characteristics) to focus your analysis of each component on different cohorts. The name of the cohort currently applied to the dashboard is always shown at the top left of your dashboard. The default view in your dashboard is your whole dataset, titled **All data** by default. For more information, see [Global controls](how-to-responsible-ai-dashboard.md#global-controls) for your dashboard.
## Edit and submit jobs (preview)
->[!IMPORTANT]
-> The ability to copy, edit and submit a new experiment based on an existing experiment is a preview feature. This capability is an [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview feature, and may change at any time.
-
-In scenarios where you would like to create a new experiment based on the settings of an existing experiment, automated ML provides the option to do so with the **Edit and submit** button in the studio UI.
+In scenarios where you want to create a new experiment based on the settings of an existing experiment, Automated ML provides the **Edit and submit** option in the studio UI. This functionality is limited to experiments initiated from the studio UI and requires the data schema for the new experiment to match that of the original experiment.
-This functionality is limited to experiments initiated from the studio UI and requires the data schema for the new experiment to match that of the original experiment.
+> [!IMPORTANT]
+> The ability to copy, edit, and submit a new experiment based on an existing experiment is a preview feature. This capability is an [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview feature, and can change at any time.
-The **Edit and submit** button opens the **Create a new Automated ML job** wizard with the data, compute and experiment settings prepopulated. You can go through each form and edit selections as needed for your new experiment.
+The **Edit and submit** option opens the **Create a new Automated ML job** wizard with the data, compute, and experiment settings prepopulated. You can configure the options on each tab in the wizard and edit selections as needed for your new experiment.
## Deploy your model
-Once you have the best model at hand, it's time to deploy it as a web service to predict on new data.
+After you have the best model, you can deploy the model as a web service to predict on new data.
->[!TIP]
-> If you are looking to deploy a model that was generated via the `automl` package with the Python SDK, you must [register your model)](./how-to-deploy-online-endpoints.md) to the workspace.
+> [!NOTE]
+> To deploy a model generated via the `automl` package with the Python SDK, you must [register your model](./how-to-deploy-online-endpoints.md) to the workspace.
>
-> Once you're model is registered, find it in the studio by selecting **Models** on the left pane. Once you open your model, you can select the **Deploy** button at the top of the screen, and then follow the instructions as described in **step 2** of the **Deploy your model** section.
+> After you register the model, you can locate the model in the studio by selecting **Models** on the left menu. On the model overview page, you can select the **Deploy** option and continue to Step 2 in this section.
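As a hedged illustration of the registration step that the preceding note describes, the following SDK v2 sketch registers a downloaded Automated ML output as an MLflow model. The workspace identifiers, local path, and model name are assumptions.

```python
# Sketch: register a downloaded AutoML model with the SDK v2 before deployment.
from azure.ai.ml import MLClient
from azure.ai.ml.entities import Model
from azure.ai.ml.constants import AssetTypes
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

model = Model(
    name="automl-best-model",
    path="./artifacts/outputs/mlflow-model",  # downloaded AutoML output folder
    type=AssetTypes.MLFLOW_MODEL,
)
ml_client.models.create_or_update(model)
```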
+
+Automated ML helps you deploy the model without writing code.
+
+1. Initiate the deployment by using one of the following methods:
-Automated ML helps you with deploying the model without writing code:
+ - Deploy the best model with the metric criteria you defined:
-1. You have a couple options for deployment.
+ 1. After the experiment completes, select **Job 1** and browse to the parent job page.
- + Option 1: Deploy the best model, according to the metric criteria you defined.
- 1. After the experiment is complete, navigate to the parent job page by selecting **Job 1** at the top of the screen.
- 1. Select the model listed in the **Best model summary** section.
- 1. Select **Deploy** on the top left of the window.
+ 1. Select the model listed in the **Best model summary** section, and then select **Deploy**.
- + Option 2: To deploy a specific model iteration from this experiment.
- 1. Select the desired model from the **Models** tab
- 1. Select **Deploy** on the top left of the window.
+ - Deploy a specific model iteration from this experiment:
-1. Populate the **Deploy model** pane.
+ - Select the desired model from the **Models** tab, and then select **Deploy**.
- Field| Value
- -|-
- Name| Enter a unique name for your deployment.
- Description| Enter a description to better identify what this deployment is for.
- Compute type| Select the type of endpoint you want to deploy: [*Azure Kubernetes Service (AKS)*](../aks/intro-kubernetes.md) or [*Azure Container Instance (ACI)*](../container-instances/container-instances-overview.md).
- Compute name| *Applies to AKS only:* Select the name of the AKS cluster you wish to deploy to.
- Enable authentication | Select to allow for token-based or key-based authentication.
- Use custom deployment assets| Enable this feature if you want to upload your own scoring script and environment file. Otherwise, automated ML provides these assets for you by default. [Learn more about scoring scripts](how-to-deploy-online-endpoints.md).
+1. Populate the **Deploy model** pane:
- >[!Important]
- > File names must be under 32 characters and must begin and end with alphanumerics. May include dashes, underscores, dots, and alphanumerics between. Spaces are not allowed.
+ | Field | Value |
+ | | |
+ | **Name** | Enter a unique name for your deployment. |
+ | **Description** | Enter a description to better identify the deployment purpose. |
+ | **Compute type** | Select the type of endpoint you want to deploy: [*Azure Kubernetes Service (AKS)*](../aks/intro-kubernetes.md) or [*Azure Container Instance (ACI)*](../container-instances/container-instances-overview.md). |
+ | **Compute name** | (Applies to AKS only) Select the name of the AKS cluster you wish to deploy to. |
+ | **Enable authentication** | Select to allow for token-based or key-based authentication. |
+ | **Use custom deployment assets** | Enable custom assets if you want to upload your own scoring script and environment file. Otherwise, Automated ML provides these assets for you by default. For more information, see [Deploy and score a machine learning model by using an online endpoint](how-to-deploy-online-endpoints.md). |
- The *Advanced* menu offers default deployment features such as [data collection](how-to-enable-app-insights.md) and resource utilization settings. If you wish to override these defaults do so in this menu.
+ > [!IMPORTANT]
+ > File names must be between 1 and 32 characters. The name must begin and end with alphanumerics, and can include dashes, underscores, dots, and alphanumerics between. Spaces aren't allowed.
+
+ The **Advanced** menu offers default deployment features such as data collection and resource utilization settings. You can use the options in this menu to override these defaults. For more information, see [Monitor online endpoints](how-to-monitor-online-endpoints.md).
1. Select **Deploy**. Deployment can take about 20 minutes to complete.
- Once deployment begins, the **Model summary** tab appears. See the deployment progress under the **Deploy status** section.
-Now you have an operational web service to generate predictions! You can test the predictions by querying the service from [Power BI's built in Azure Machine Learning support](/power-bi/connect-data/service-aml-integrate?context=azure%2fmachine-learning%2fcontext%2fml-context).
+ After deployment starts, the **Model summary** tab opens. You can monitor the deployment progress under the **Deploy status** section.
+
+Now you have an operational web service to generate predictions! You can test the predictions by querying the service from the [End-to-end AI samples in Microsoft Fabric](/fabric/data-science/use-ai-samples).
-## Next steps
+## Related content
-* [Understand automated machine learning results](how-to-understand-automated-ml.md).
-* [Learn more about automated machine learning](concept-automated-ml.md) and Azure Machine Learning.
+- [Understand Automated ML results](how-to-understand-automated-ml.md)
+- [Train classification models with no-code Automated ML (Tutorial)](tutorial-first-experiment-automated-ml.md)
+- [Configure your Automated ML experiments with the Python SDK](how-to-configure-auto-train.md)
machine-learning How To Use Batch Model Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-model-deployments.md
A model deployment is a set of resources required for hosting the model that doe
| `environment` | The environment to score the model. The example defines an environment inline using `conda_file` and `image`. The `conda_file` dependencies will be installed on top of the `image`. The environment will be automatically registered with an autogenerated name and version. See the [Environment schema](./reference-yaml-environment.md#yaml-syntax) for more options. As a best practice for production scenarios, you should create the environment separately and reference it here. To reference an existing environment, use the `azureml:<environment-name>:<environment-version>` syntax. | | `compute` | The compute to run batch scoring. The example uses the `batch-cluster` created at the beginning and references it using the `azureml:<compute-name>` syntax. | | `resources.instance_count` | The number of instances to be used for each batch scoring job. |
- | `settings.max_concurrency_per_instance` | [Optional] The maximum number of parallel `scoring_script` runs per instance. |
- | `settings.mini_batch_size` | [Optional] The number of files the `scoring_script` can process in one `run()` call. |
- | `settings.output_action` | [Optional] How the output should be organized in the output file. `append_row` will merge all `run()` returned output results into one single file named `output_file_name`. `summary_only` won't merge the output results and will only calculate `error_threshold`. |
- | `settings.output_file_name` | [Optional] The name of the batch scoring output file for `append_row` `output_action`. |
- | `settings.retry_settings.max_retries` | [Optional] The number of max tries for a failed `scoring_script` `run()`. |
- | `settings.retry_settings.timeout` | [Optional] The timeout in seconds for a `scoring_script` `run()` for scoring a mini batch. |
- | `settings.error_threshold` | [Optional] The number of input file scoring failures that should be ignored. If the error count for the entire input goes above this value, the batch scoring job will be terminated. The example uses `-1`, which indicates that any number of failures is allowed without terminating the batch scoring job. |
- | `settings.logging_level` | [Optional] Log verbosity. Values in increasing verbosity are: WARNING, INFO, and DEBUG. |
- | `settings.environment_variables` | [Optional] Dictionary of environment variable name-value pairs to set for each batch scoring job. |
+ | `settings.max_concurrency_per_instance` | The maximum number of parallel `scoring_script` runs per instance. |
+ | `settings.mini_batch_size` | The number of files the `scoring_script` can process in one `run()` call. |
+ | `settings.output_action` | How the output should be organized in the output file. `append_row` will merge all `run()` returned output results into one single file named `output_file_name`. `summary_only` won't merge the output results and will only calculate `error_threshold`. |
+ | `settings.output_file_name` | The name of the batch scoring output file for `append_row` `output_action`. |
+ | `settings.retry_settings.max_retries` | The number of max tries for a failed `scoring_script` `run()`. |
+ | `settings.retry_settings.timeout` | The timeout in seconds for a `scoring_script` `run()` for scoring a mini batch. |
+ | `settings.error_threshold` | The number of input file scoring failures that should be ignored. If the error count for the entire input goes above this value, the batch scoring job will be terminated. The example uses `-1`, which indicates that any number of failures is allowed without terminating the batch scoring job. |
+ | `settings.logging_level` | Log verbosity. Values in increasing verbosity are: WARNING, INFO, and DEBUG. |
+ | `settings.environment_variables` | Dictionary of environment variable name-value pairs to set for each batch scoring job. |
# [Python](#tab/python)
migrate Tutorial Assess Vmware Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/vmware/tutorial-assess-vmware-azure-vmware-solution.md
Title: Assess VMware servers for migration to Azure VMware Solution (AVS) with Azure Migrate description: Learn how to assess servers in VMware environment for migration to AVS with Azure Migrate.--
-ms.
++ Previously updated : 02/26/2024 Last updated : 07/19/2024 #Customer intent: As a VMware VM admin, I want to assess my VMware VMs in preparation for migration to Azure VMware Solution (AVS)
As part of your migration journey to Azure, you assess your on-premises workload
This article shows you how to assess discovered VMware virtual machines/servers for migration to Azure VMware Solution (AVS), using the Azure Migrate. AVS is a managed service that allows you to run the VMware platform in Azure.
-In this tutorial, you'll learn how to:
+In this tutorial, you learn how to:
> [!div class="checklist"] - Run an assessment based on server metadata and configuration information. - Run an assessment based on performance data.
Before you follow this tutorial to assess your servers for migration to AVS, mak
- To discover servers using the Azure Migrate appliance, [follow this tutorial](tutorial-discover-vmware.md). - To discover servers using an imported CSV file, [follow this tutorial](../tutorial-discover-import.md).
+- To import servers using an RVTools file, [follow this tutorial](tutorial-import-vmware-using-rvtools-xlsx.md).
## Decide which assessment to run
Decide whether you want to run an assessment using sizing criteria based on serv
**Assessment** | **Details** | **Recommendation** | | **As-is on-premises** | Assess based on server configuration data/metadata. | Recommended node size in AVS is based on the on-premises VM/server size, along with the settings you specify in the assessment for the node type, storage type, and failure-to-tolerate setting.
-**Performance-based** | Assess based on collected dynamic performance data. | Recommended node size in AVS is based on CPU and memory utilization data, along with the settings you specify in the assessment for the node type, storage type, and failure-to-tolerate setting.
+**Performance-based** | Assess based on collected dynamic performance data. | Recommended node size in AVS is based on CPU and memory utilization data, along with the settings you specify in the assessment for the node type, storage type, and failure-to-tolerate setting. If the data was imported by using an RVTools XLSX file or an Azure Migrate CSV file, the performance-based assessment uses the utilized storage for thin-provisioned VMs.
> [!NOTE] > Azure VMware Solution (AVS) assessment can be created for VMware VMs/servers only.
Decide whether you want to run an assessment using sizing criteria based on serv
Run an assessment as follows:
-1. In **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**.
-
-1. In **Azure Migrate: Discovery and assessment**, select **Assess**.
-
-1. In **Assess servers** > **Assessment type**, select **Azure VMware Solution (AVS)**.
+1. In **Servers, databases and web apps**, select **Azure Migrate: Discovery and assessment** > **Assess** > **Azure VMware Solution (AVS)**.
1. In **Discovery source**: - If you discovered servers using the appliance, select **Servers discovered from Azure Migrate appliance**.
- - If you discovered servers using an imported CSV file, select **Imported servers**.
+ - If you discovered servers using an imported CSV file or an RVTools XLSX file, select **Imported servers**.
1. Select **Edit** to review the assessment properties. :::image type="content" source="../media/tutorial-assess-vmware-azure-vmware-solution/assess-servers.png" alt-text="Page for selecting the assessment settings":::
-
1. In **Assessment settings**, set the necessary values or retain the default values: **Section** | **Setting** | **Details** | | |
- Target and pricing settings | **Target location** | The Azure region to which you want to migrate. Azure SQL configuration and cost recommendations are based on the location that you specify.
- Target and pricing settings | **Environment type** | The environment for the SQL deployments to apply pricing applicable to Production or Dev/Test.
- Target and pricing settings | **Offer/Licensing program** |The Azure offer if you're enrolled. Currently, the field is Pay-as-you-go by default, which gives you retail Azure prices. <br/><br/>You can avail additional discount by applying reserved capacity and Azure Hybrid Benefit on top of Pay-as-you-go offer.<br/>You can apply Azure Hybrid Benefit on top of Pay-as-you-go offer and Dev/Test environment. The assessment doesn't support applying Reserved Capacity on top of Pay-as-you-go offer and Dev/Test environment. <br/>If the offer is set to *Pay-as-you-go* and Reserved capacity is set to *No reserved instances*, the monthly cost estimates are calculated by multiplying the number of hours chosen in the VM uptime field with the hourly price of the recommended SKU.
- Target and pricing settings | **Savings options - Azure SQL MI and DB (PaaS)** | Specify the reserved capacity savings option that you want the assessment to consider, helping to optimize your Azure compute cost. <br><br> [Azure reservations](../../cost-management-billing/reservations/save-compute-costs-reservations.md) (1 year or 3 year reserved) are a good option for the most consistently running resources.<br><br> When you select 'None', the Azure compute cost is based on the Pay as you go rate or based on actual usage.<br><br> You need to select pay-as-you-go in offer/licensing program to be able to use Reserved Instances. When you select any savings option other than 'None', the 'Discount (%)' and "VM uptime" settings aren't applicable. The monthly cost estimates are calculated by multiplying 744 hours with the hourly price of the recommended SKU.
- Target and pricing settings | **Savings options - SQL Server on Azure VM (IaaS)** | Specify the savings option that you want the assessment to consider, helping to optimize your Azure compute cost. <br><br> [Azure reservations](../../cost-management-billing/reservations/save-compute-costs-reservations.md) (1 year or 3 year reserved) are a good option for the most consistently running resources.<br><br> [Azure Savings Plan](../../cost-management-billing/savings-plan/savings-plan-compute-overview.md) (1 year or 3 year savings plan) provide additional flexibility and automated cost optimization. Ideally post migration, you could use Azure reservation and savings plan at the same time (reservation is consumed first), but in the Azure Migrate assessments, you can only see cost estimates of 1 savings option at a time. <br><br> When you select 'None', the Azure compute cost is based on the Pay as you go rate or based on actual usage.<br><br> You need to select pay-as-you-go in offer/licensing program to be able to use Reserved Instances or Azure Savings Plan. When you select any savings option other than 'None', the 'Discount (%)' and "VM uptime" settings aren't applicable. The monthly cost estimates are calculated by multiplying 744 hours in the VM uptime field with the hourly price of the recommended SKU.
- Target and pricing settings | **Currency** | The billing currency for your account.
- Target and pricing settings | **Discount (%)** | Any subscription-specific discounts you receive on top of the Azure offer. The default setting is 0%.
- Target and pricing settings | **VM uptime** | Specify the duration (days per month/hour per day) that servers/VMs run. This is useful for computing cost estimates for SQL Server on Azure VM where you're aware that Azure VMs might not run continuously. <br/> Cost estimates for servers where recommended target is *SQL Server on Azure VM* are based on the duration specified. Default is 31 days per month/24 hours per day.
- Target and pricing settings | **Azure Hybrid Benefit** | Specify whether you already have a Windows Server and/or SQL Server license or Enterprise Linux subscription (RHEL and SLES). Azure Hybrid Benefit is a licensing benefit that helps you to significantly reduce the costs of running your workloads in the cloud. It works by letting you use your on-premises Software Assurance-enabled Windows Server and SQL Server licenses on Azure. For example, if you have a SQL Server license and they're covered with active Software Assurance of SQL Server Subscriptions, you can apply for the Azure Hybrid Benefit when you bring licenses to Azure.
- Assessment criteria | **Sizing criteria** | Set to be *Performance-based* by default, which means Azure Migrate collects performance metrics pertaining to SQL instances and the databases managed by it to recommend an optimal-sized SQL Server on Azure VM and/or Azure SQL Database and/or Azure SQL Managed Instance configuration.
- Assessment criteria | **Performance history** | Indicate the data duration on which you want to base the assessment. (Default is one day)
- Assessment criteria | **Percentile utilization** | Indicate the percentile value you want to use for the performance sample. (Default is 95th percentile)
- Assessment criteria | **Comfort factor** | Indicate the buffer you want to use during assessment. This accounts for issues like seasonal usage, short performance history, and likely increases in future usage. For example, consider a comfort factor of 2 for effective utilization of 2 Cores. In this case, the assessment considers the effective cores as 4 cores. Similarly, for the same comfort factor and an effective utilization of 8 GB memory, the assessment considers effective memory as 16 GB.
- Assessment criteria | **Optimization preference** | Specify the preference for the recommended assessment report. Selecting **Minimize cost** would result in the Recommended assessment report recommending those deployment types that have least migration issues and are most cost effective, whereas selecting **Modernize to PaaS** would result in Recommended assessment report recommending PaaS(Azure SQL MI or DB) deployment types over IaaS Azure(VMs), wherever the SQL Server instance is ready for migration to PaaS irrespective of cost.
- Azure SQL Managed Instance sizing | **Service Tier** | Choose the most appropriate service tier option to accommodate your business needs for migration to Azure SQL Managed Instance:<br/><br/>Select *Recommended* if you want Azure Migrate to recommend the best suited service tier for your servers. This can be General purpose or Business critical.<br/><br/>Select *General Purpose* if you want an Azure SQL configuration designed for budget-oriented workloads.<br/><br/>Select *Business Critical* if you want an Azure SQL configuration designed for low-latency workloads with high resiliency to failures and fast failovers.
- Azure SQL Managed Instance sizing | **Instance type** | Defaulted to *Single instance*.
- Azure SQL Managed Instance sizing | **Pricing Tier** | Defaulted to *Standard*.
- SQL Server on Azure VM sizing | **VM series** | Specify the Azure VM series you want to consider for *SQL Server on Azure VM* sizing. Based on the configuration and performance requirements of your SQL Server or SQL Server instance, the assessment recommends a VM size from the selected list of VM series. <br/>You can edit settings as needed. For example, if you don't want to include D-series VM, you can exclude D-series from this list.<br/> As Azure SQL assessments intend to give the best performance for your SQL workloads, the VM series list only has VMs that are optimized for running your SQL Server on Azure Virtual Machines (VMs). [Learn more](/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist?preserve-view=true&view=azuresql#vm-size).
- SQL Server on Azure VM sizing | **Storage Type** | Defaulted to *Recommended*, which means the assessment recommends the best suited Azure Managed Disk based on the chosen environment type, on-premises disk size, IOPS and throughput.
- Azure SQL Database sizing | **Service Tier** | Choose the most appropriate service tier option to accommodate your business needs for migration to Azure SQL Database:<br/><br/>Select **Recommended** if you want Azure Migrate to recommend the best suited service tier for your servers. This can be General purpose or Business critical.<br/><br/>Select **General Purpose** if you want an Azure SQL configuration designed for budget-oriented workloads.<br/><br/>Select **Business Critical** if you want an Azure SQL configuration designed for low-latency workloads with high resiliency to failures and fast failovers.
- Azure SQL Database sizing | **Instance type** | Defaulted to *Single database*.
- Azure SQL Database sizing | **Purchase model** | Defaulted to *vCore*.
- Azure SQL Database sizing | **Compute tier** | Defaulted to *Provisioned*.
- High availability and disaster recovery properties | **Disaster recovery region** | Defaulted to the [cross-region replication pair](../../reliability/cross-region-replication-azure.md#azure-paired-regions) of the Target Location. In the unlikely event that the chosen Target Location doesn't yet have such a pair, the specified Target Location itself is chosen as the default disaster recovery region.
- High availability and disaster recovery properties | **Multi-subnet intent** | Defaulted to Disaster recovery. <br/><br/> Select **Disaster recovery** if you want asynchronous data replication where some replication delays are tolerable. This allows higher durability using geo-redundancy. In the event of failover, data that hasn't yet been replicated may be lost. <br/><br/> Select **High availability** if you desire the data replication to be synchronous and no data loss due to replication delay is allowable. This setting allows assessment to leverage built-in high availability options in Azure SQL Databases and Azure SQL Managed Instances, and availability zones and zone-redundancy in Azure Virtual Machines to provide higher availability. In the event of failover, no data is lost.
- High availability and disaster recovery properties | **Internet Access** | Defaulted to Available.<br/><br/> Select **Available** if you allow outbound internet access from Azure VMs. This allows the use of [Cloud Witness](/azure/azure-sql/virtual-machines/windows/hadr-cluster-quorum-configure-how-to?view=azuresql&preserve-view=true&tabs=powershell) which is the recommended approach for Windows Server Failover Clusters in Azure Virtual Machines. <br/><br/> Select **Not available** if the Azure VMs have no outbound internet access. This requires the use of a Shared Disk as a witness for Windows Server Failover Clusters in Azure Virtual Machines.
- High availability and disaster recovery properties | **Async commit mode intent** | Defaulted to Disaster recovery. <br/><br/> Select **Disaster recovery** if you're using asynchronous commit availability mode to enable higher durability for the data without affecting performance. In the event of failover, data that hasn't yet been replicated may be lost. <br/><br/> Select **High availability** if you're using asynchronous commit data availability mode to improve availability and scale out read traffic. This setting allows assessment to leverage built-in high availability features in Azure SQL Databases, Azure SQL Managed Instances, and Azure Virtual Machines to provide higher availability and scale out.
+ Target settings | **Target location** | The Azure region to which you want to migrate. Size and cost recommendations are based on the location that you specify.
+ Target settings | **Storage type** | Defaulted to **vSAN**. This is the default storage type for an AVS private cloud.
+ Target settings | **Reserved instance** | Specify whether you want to use reserved instances for Azure VMware Solution nodes when you migrate your VMs. If you decide to use a reserved instance, you can't specify **Discount (%)**. [Learn more](https://learn.microsoft.com/azure/azure-vmware/reserved-instance) about reserved instances.
+ VM size | **Node type** | Defaulted to **AV36**. Azure Migrate recommends the node needed to migrate the servers to AVS.
+ VM size | **FTT setting, RAID level** | Select the Failure to Tolerate and RAID combination. The selected FTT option, combined with the on-premises server disk requirement, determines the total vSAN storage required in AVS.
+ VM size | **CPU Oversubscription** | Specify the ratio of virtual cores associated with one physical core in the AVS node. Oversubscription of greater than 4:1 might cause performance degradation, but can be used for web server type workloads.
+ VM size | **Memory overcommit factor** | Specify the ratio of memory overcommit on the cluster. A value of 1 represents 100% memory use, 0.5, for example, is 50%, and 2 would be using 200% of available memory. You can only add values from 0.5 to 10, up to one decimal place.
+ VM size | **Dedupe and compression factor** | Specify the anticipated dedupe and compression factor for your workloads. The actual value can be obtained from the on-premises vSAN or storage configuration, and it might vary by workload. A value of 3 would mean 3x, so for a 300-GB disk only 100 GB of storage would be used. A value of 1 would mean no dedupe or compression. You can only add values from 1 to 10, up to one decimal place.
+ Node size | **Sizing criteria** | Set to be *Performance-based* by default, which means Azure Migrate collects performance metrics based on which it provides recommendations.
+ Node size | **Performance history** | Indicate the data duration on which you want to base the assessment. (Default is one day)
+ Node size | **Percentile utilization** | Indicate the percentile value you want to use for the performance sample. (Default is 95th percentile)
+ Node size | **Comfort factor** | Indicate the buffer you want to use during assessment. This accounts for issues like seasonal usage, short performance history, and likely increases in future usage. For example, consider a comfort factor of 2 for effective utilization of 2 Cores. In this case, the assessment considers the effective cores as 4 cores. Similarly, for the same comfort factor and an effective utilization of 8 GB memory, the assessment considers effective memory as 16 GB.
+ Pricing | **Offer/Licensing program** | The [Azure offer](https://azure.microsoft.com/support/legal/offer-details/) you're enrolled in is displayed. The assessment estimates the cost for that offer.
+ Pricing | **Currency** | Select the billing currency for your account.
+ Pricing | **Discount (%)** | Add any subscription-specific discounts you receive on top of the Azure offer. The default setting is 0%.
1. Select **Save** if you make changes.
To view an assessment:
- **Ready for AVS**: The server can be migrated as-is to Azure (AVS) without any changes. It starts in AVS with full AVS support. - **Ready with conditions**: There might be some compatibility issues, for example, an internet protocol or deprecated OS in VMware, that need to be remediated before migrating to Azure VMware Solution. To fix any readiness problems, follow the remediation guidance that the assessment suggests.
- - **Not ready for AVS**: The VM won't start in AVS. For example, if the on-premises VMware VM has an external device attached such as a CD-ROM the VMware VMotion operation fails (if using VMware VMotion).
+ - **Not ready for AVS**: The VM won't start in AVS. For example, if the on-premises VMware VM has an external device attached such as a CD-ROM, the VMware VMotion operation fails (if using VMware VMotion).
- **Readiness unknown**: Azure Migrate couldn't determine the readiness of the server because of insufficient metadata collected from the on-premises environment. 3. Review the suggested tool.
Server Assessment assigns a confidence rating to performance-based assessments.
The confidence rating helps you estimate the reliability of size recommendations in the assessment. The rating is based on the availability of data points needed to compute the assessment. > [!NOTE]
-> Confidence ratings aren't assigned if you create an assessment based on a CSV file.
+> Confidence ratings aren't assigned if you create an assessment based on a CSV file or an RVTools XLSX file.
Confidence ratings are as follows.
networking Networking Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/fundamentals/networking-overview.md
Title: Azure networking services overview
-description: Learn about networking services in Azure, including connectivity, application protection, application delivery, and network monitoring services.
+description: Learn about the various networking services in Azure, including networking foundation, load balancing and content delivery, hybrid connectivity, and network security services.
Previously updated : 04/02/2024 Last updated : 07/17/2024 # Azure networking services overview
-The networking services in Azure provide various networking capabilities that can be used together or separately. Select any of the following key capabilities to learn more about them:
-- [**Connectivity services**](#connect): Connect Azure resources and on-premises resources using any or a combination of these networking services in Azure - Virtual Network (VNet), Virtual WAN, ExpressRoute, VPN Gateway, NAT Gateway, Azure DNS, Peering service, Azure Virtual Network Manager, Route Server, and Azure Bastion.-- [**Application protection services**](#protect): Protect your applications using any or a combination of these networking services in Azure - Load Balancer, Private Link, DDoS protection, Firewall, Network Security Groups, Web Application Firewall, and Virtual Network Endpoints.-- [**Application delivery services**](#deliver): Deliver applications in the Azure network using any or a combination of these networking services in Azure - Azure Front Door Service, Traffic Manager, Application Gateway, Internet Analyzer, and Load Balancer.-- [**Network monitoring**](#monitor): Monitor your network resources using any or a combination of these networking services in Azure - Network Watcher, ExpressRoute Monitor, Azure Monitor, or VNet Terminal Access Point (TAP).
+The networking services in Azure provide various networking capabilities that can be used together or separately. Select each of the following networking scenarios to learn more about them:
-## <a name="connect"></a>Connectivity services
+- [**Networking foundation**](#foundation): Azure networking foundation services provide core connectivity for your resources in Azure - Virtual Network (VNet), Private Link, Azure DNS, Azure Virtual Network Manager, Azure Bastion, Route Server, NAT Gateway, Traffic Manager, Azure Network Watcher, and Azure Monitor.
+- [**Load balancing and content delivery**](#delivery): Azure load balancing and content delivery services allow for management, distribution, and optimization of your applications and workloads - Load Balancer, Application Gateway, and Azure Front Door.
+- [**Hybrid connectivity**](#hybrid): Azure hybrid connectivity services secure communication to and from your resources in Azure - VPN Gateway, ExpressRoute, Virtual WAN, and Peering Service.
+- [**Network security**](#security): Azure network security services protect your web applications and IaaS services from DDoS attacks and malicious actors - Firewall Manager, Firewall, Web Application Firewall, and DDoS Protection.
+
+## <a name="foundation"></a>Networking foundation
-This section describes services that provide connectivity between Azure resources, connectivity from an on-premises network to Azure resources, and branch to branch connectivity in Azure - Virtual Network (VNet), ExpressRoute, VPN Gateway, Virtual WAN, Virtual network NAT Gateway, Azure DNS, Peering service, Route Server, and Azure Bastion.
+This section describes services that provide the building blocks for designing and architecting a network environment in Azure - Virtual Network (VNet), Private Link, Azure DNS, Azure Virtual Network Manager, Azure Bastion, Route Server, NAT Gateway, Traffic Manager, Azure Network Watcher, and Azure Monitor.
### <a name="vnet"></a>Virtual network [Azure Virtual Network (VNet)](../../virtual-network/virtual-networks-overview.md) is the fundamental building block for your private network in Azure. You can use VNets to: - **Communicate between Azure resources**: You can deploy virtual machines, and several other types of Azure resources to a virtual network, such as Azure App Service Environments, the Azure Kubernetes Service (AKS), and Azure Virtual Machine Scale Sets. To view a complete list of Azure resources that you can deploy into a virtual network, see [Virtual network service integration](../../virtual-network/virtual-network-for-azure-services.md). - **Communicate between each other**: You can connect virtual networks to each other, enabling resources in either virtual network to communicate with each other, using virtual network peering or Azure Virtual Network Manager. The virtual networks you connect can be in the same, or different, Azure regions. For more information, see [Virtual network peering](../../virtual-network/virtual-network-peering-overview.md) and [Azure Virtual Network Manager](../../virtual-network-manager/overview.md).-- **Communicate to the internet**: All resources in a VNet can communicate outbound to the internet, by default. You can communicate inbound to a resource by assigning a public IP address or a public Load Balancer. You can also use [Public IP addresses](../../virtual-network/ip-services/virtual-network-public-ip-address.md) or public [Load Balancer](../../load-balancer/load-balancer-overview.md) to manage your outbound connections.
+- **Communicate to the internet**: All resources in a virtual network can communicate outbound to the internet, by default. You can communicate inbound to a resource by assigning a public IP address or a public Load Balancer. You can also use [Public IP addresses](../../virtual-network/ip-services/virtual-network-public-ip-address.md) or public [Load Balancer](../../load-balancer/load-balancer-overview.md) to manage your outbound connections.
- **Communicate with on-premises networks**: You can connect your on-premises computers and networks to a virtual network using [VPN Gateway](../../vpn-gateway/vpn-gateway-about-vpngateways.md) or [ExpressRoute](../../expressroute/expressroute-introduction.md). - **Encrypt traffic between resources**: You can use [Virtual network encryption](../../virtual-network/virtual-network-encryption-overview.md) to encrypt traffic between resources in a virtual network.
-### <a name="avnm"></a>Azure Virtual Network Manager
-
-[Azure Virtual Network Manager](../../virtual-network-manager/overview.md) is a management service that enables you to group, configure, deploy, and manage virtual networks globally across subscriptions. With Virtual Network Manager, you can define [network groups](../../virtual-network-manager/concept-network-groups.md) to identify and logically segment your virtual networks. Then you can determine the [connectivity](../../virtual-network-manager/concept-connectivity-configuration.md) and [security configurations](../../virtual-network-manager/concept-security-admins.md) you want and apply them across all the selected virtual networks in network groups at once.
--
-### <a name="expressroute"></a>ExpressRoute
-
-[ExpressRoute](../../expressroute/expressroute-introduction.md) enables you to extend your on-premises networks into the Microsoft cloud over a private connection facilitated by a connectivity provider. This connection is private. Traffic doesn't go over the internet. With ExpressRoute, you can establish connections to Microsoft cloud services, such as Microsoft Azure, Microsoft 365, and Dynamics 365.
--
-### <a name="vpngateway"></a>VPN Gateway
-
-[VPN Gateway](../../vpn-gateway/vpn-gateway-about-vpngateways.md) helps you create encrypted cross-premises connections to your virtual network from on-premises locations, or create encrypted connections between VNets. There are different configurations available for VPN Gateway connections. Some of the main features include:
+#### <a name="nsg"></a>Network security groups
-* Site-to-site VPN connectivity
-* Point-to-site VPN connectivity
-* VNet-to-VNet VPN connectivity
+You can filter network traffic to and from Azure resources in an Azure virtual network with a network security group. For more information, see [Network security groups](../../virtual-network/network-security-groups-overview.md).
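As a minimal sketch (all resource names are hypothetical), an NSG with a single inbound rule can be created and associated with a subnet using the Azure CLI:

```azurecli
# Hypothetical names; create an NSG, allow inbound HTTPS, and associate it with an existing subnet.
az network nsg create --resource-group my-rg --name my-nsg

az network nsg rule create --resource-group my-rg --nsg-name my-nsg \
  --name AllowHttpsInbound --priority 100 --direction Inbound --access Allow \
  --protocol Tcp --destination-port-ranges 443

az network vnet subnet update --resource-group my-rg --vnet-name my-vnet \
  --name my-subnet --network-security-group my-nsg
```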
-The following diagram illustrates multiple site-to-site VPN connections to the same virtual network. To view more connection diagrams, see [VPN Gateway - design](../../vpn-gateway/design.md).
+#### <a name="serviceendpoints"></a>Service endpoints
+[Virtual Network (VNet) service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md) extend your virtual network private address space and the identity of your virtual network to the Azure services, over a direct connection. Endpoints allow you to secure your critical Azure service resources to only your virtual networks. Traffic from your virtual network to the Azure service always remains on the Microsoft Azure backbone network.
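For example, a service endpoint for Azure Storage can be enabled on a subnet with a single CLI call (the names here are hypothetical):

```azurecli
# Hypothetical names; enable a Microsoft.Storage service endpoint on an existing subnet.
az network vnet subnet update --resource-group my-rg --vnet-name my-vnet \
  --name my-subnet --service-endpoints Microsoft.Storage
```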
-### <a name="virtualwan"></a>Virtual WAN
-[Azure Virtual WAN](../../virtual-wan/virtual-wan-about.md) is a networking service that brings many networking, security, and routing functionalities together to provide a single operational interface. Connectivity to Azure VNets is established by using virtual network connections. Some of the main features include:
+### <a name="privatelink"></a>Azure Private Link
-* Branch connectivity (via connectivity automation from Virtual WAN Partner devices such as SD-WAN or VPN CPE)
-* Site-to-site VPN connectivity
-* Remote user VPN connectivity (point-to-site)
-* Private connectivity (ExpressRoute)
-* Intra-cloud connectivity (transitive connectivity for virtual networks)
-* VPN ExpressRoute inter-connectivity
-* Routing, Azure Firewall, and encryption for private connectivity
+[Azure Private Link](../../private-link/private-link-overview.md) enables you to access Azure PaaS Services (for example, Azure Storage and SQL Database) and Azure hosted customer-owned/partner services over a private endpoint in your virtual network.
+Traffic between your virtual network and the service travels through the Microsoft backbone network. Exposing your service to the public internet is no longer necessary. You can create your own private link service in your virtual network and deliver it to your customers.
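A minimal sketch of creating a private endpoint for a storage account's blob sub-resource; all names and the subscription ID placeholder are hypothetical:

```azurecli
# Hypothetical names; map a storage account's blob service into the subnet via a private endpoint.
az network private-endpoint create --resource-group my-rg --name my-private-endpoint \
  --vnet-name my-vnet --subnet my-subnet \
  --private-connection-resource-id "/subscriptions/<subscription-id>/resourceGroups/my-rg/providers/Microsoft.Storage/storageAccounts/mystorageacct" \
  --group-id blob --connection-name my-pe-connection
```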
### <a name="dns"></a>Azure DNS
The following diagram illustrates multiple site-to-site VPN connections to the s
Using Azure DNS, you can host and resolve public domains, manage DNS resolution in your virtual networks, and enable name resolution between Azure and your on-premises resources.
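As an illustrative sketch (hypothetical zone and record names), a public DNS zone and an A record can be created as follows:

```azurecli
# Hypothetical names; host a public zone and add an A record for www.
az network dns zone create --resource-group my-rg --name contoso.com

az network dns record-set a add-record --resource-group my-rg --zone-name contoso.com \
  --record-set-name www --ipv4-address 203.0.113.10
```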
+### <a name="avnm"></a>Azure Virtual Network Manager
+
+[Azure Virtual Network Manager](../../virtual-network-manager/overview.md) is a management service that enables you to group, configure, deploy, and manage virtual networks globally across subscriptions. With Virtual Network Manager, you can define [network groups](../../virtual-network-manager/concept-network-groups.md) to identify and logically segment your virtual networks. Then you can determine the [connectivity](../../virtual-network-manager/concept-connectivity-configuration.md) and [security configurations](../../virtual-network-manager/concept-security-admins.md) you want and apply them across all the selected virtual networks in network groups at once.
++ ### <a name="bastion"></a>Azure Bastion
-[Azure Bastion](../../bastion/bastion-overview.md) is a service that you can deploy to let you connect to a virtual machine using your browser and the Azure portal, or via the native SSH or RDP client already installed on your local computer. The Azure Bastion service is a fully platform-managed PaaS service that you deploy inside your virtual network. It provides secure and seamless RDP/SSH connectivity to your virtual machines directly from the Azure portal over TLS. When you connect via Azure Bastion, your virtual machines don't need a public IP address, agent, or special client software. There are a variety of different SKU/tiers available for Azure Bastion. The tier you select affects the features that are available. For more information, see [About Bastion configuration settings](../../bastion/configuration-settings.md).
+[Azure Bastion](../../bastion/bastion-overview.md) is a service that you can deploy in a virtual network to allow you to connect to a virtual machine using your browser and the Azure portal. You can also connect using the native SSH or RDP client already installed on your local computer. The Azure Bastion service is a fully platform-managed PaaS service that you deploy inside your virtual network. It provides secure and seamless RDP/SSH connectivity to your virtual machines directly from the Azure portal over TLS. When you connect via Azure Bastion, your virtual machines don't need a public IP address, agent, or special client software. There are various SKUs/tiers available for Azure Bastion. The tier you select affects the features that are available. For more information, see [About Bastion configuration settings](../../bastion/configuration-settings.md).
:::image type="content" source="../../bastion/media/bastion-overview/architecture.png" alt-text="Diagram showing Azure Bastion architecture.":::
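A minimal deployment sketch with hypothetical names; it assumes the virtual network already contains a subnet named `AzureBastionSubnet` with an adequate address range:

```azurecli
# Hypothetical names; Bastion requires a Standard public IP and an AzureBastionSubnet in my-vnet.
az network public-ip create --resource-group my-rg --name my-bastion-pip --sku Standard

az network bastion create --resource-group my-rg --name my-bastion \
  --vnet-name my-vnet --public-ip-address my-bastion-pip --location eastus
```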
+### <a name="routeserver"></a>Route Server
+
+[Azure Route Server](../../route-server/overview.md) simplifies dynamic routing between your network virtual appliance (NVA) and your virtual network. It allows you to exchange routing information directly through Border Gateway Protocol (BGP) routing protocol between any NVA that supports the BGP routing protocol and the Azure Software Defined Network (SDN) in the Azure Virtual Network (VNet) without the need to manually configure or maintain route tables.
+ ### <a name="nat"></a>NAT Gateway Virtual Network NAT(network address translation) simplifies outbound-only Internet connectivity for virtual networks. When configured on a subnet, all outbound connectivity uses your specified static public IP addresses. Outbound connectivity is possible without load balancer or public IP addresses directly attached to virtual machines. For more information, see [What is Azure NAT gateway](../../virtual-network/nat-gateway/nat-overview.md)?
-### <a name="routeserver"></a>Route Server
+### <a name="trafficmanager"></a>Traffic Manager
-[Azure Route Server](../../route-server/overview.md) simplifies dynamic routing between your network virtual appliance (NVA) and your virtual network. It allows you to exchange routing information directly through Border Gateway Protocol (BGP) routing protocol between any NVA that supports the BGP routing protocol and the Azure Software Defined Network (SDN) in the Azure Virtual Network (VNet) without the need to manually configure or maintain route tables.
+[Azure Traffic Manager](../../traffic-manager/traffic-manager-routing-methods.md) is a DNS-based traffic load balancer that enables you to distribute traffic optimally to services across global Azure regions, while providing high availability and responsiveness. Traffic Manager provides a range of traffic-routing methods to distribute traffic such as priority, weighted, performance, geographic, multi-value, or subnet.
-### <a name="azurepeeringservice"></a>Peering Service
+The following diagram shows endpoint priority-based routing with Traffic Manager.
-[Azure Peering Service](../../peering-service/about.md) enhances customer connectivity to Microsoft cloud services such as Microsoft 365, Dynamics 365, software as a service (SaaS) services, Azure, or any Microsoft services accessible via the public internet.
-## <a name="protect"></a>Application protection services
+For more information about Traffic Manager, see [What is Azure Traffic Manager?](../../traffic-manager/traffic-manager-overview.md).
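As a sketch with hypothetical names, a priority-routed profile with a primary and a failover endpoint might look like this:

```azurecli
# Hypothetical names; traffic goes to the primary endpoint unless it is unhealthy, then to the failover.
az network traffic-manager profile create --resource-group my-rg --name my-tm-profile \
  --routing-method Priority --unique-dns-name my-tm-contoso-demo

az network traffic-manager endpoint create --resource-group my-rg --profile-name my-tm-profile \
  --name primary --type externalEndpoints --target app-primary.contoso.com --priority 1

az network traffic-manager endpoint create --resource-group my-rg --profile-name my-tm-profile \
  --name failover --type externalEndpoints --target app-failover.contoso.com --priority 2
```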
-This section describes networking services in Azure that help protect your network resources - Protect your applications using any or a combination of these networking services in Azure - DDoS protection, Private Link, Firewall, Web Application Firewall, Network Security Groups, and Virtual Network Service Endpoints.
+### <a name="networkwatcher"></a>Azure Network Watcher
-### <a name="ddosprotection"></a>DDoS Protection
+[Azure Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md?toc=%2fazure%2fnetworking%2ftoc.json) provides tools to monitor, diagnose, view metrics, and enable or disable logs for resources in an Azure virtual network.
-[Azure DDoS Protection](../../ddos-protection/manage-ddos-protection.md) provides countermeasures against the most sophisticated DDoS threats. The service provides enhanced DDoS mitigation capabilities for your application and resources deployed in your virtual networks. Additionally, customers using Azure DDoS Protection have access to DDoS Rapid Response support to engage DDoS experts during an active attack.
+### <a name="azuremonitor"></a>Azure Monitor
-Azure DDoS Protection consists of two tiers:
+[Azure Monitor](../../azure-monitor/overview.md?toc=%2fazure%2fnetworking%2ftoc.json) maximizes the availability and performance of your applications by delivering a comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments. It helps you understand how your applications are performing and proactively identifies issues affecting them and the resources they depend on.
-- [DDoS Network Protection](../../ddos-protection/ddos-protection-overview.md#ddos-network-protection), combined with application design best practices, provides enhanced DDoS mitigation features to defend against DDoS attacks. It's automatically tuned to help protect your specific Azure resources in a virtual network.-- [DDoS IP Protection](../../ddos-protection/ddos-protection-overview.md#ddos-ip-protection) is a pay-per-protected IP model. DDoS IP Protection contains the same core engineering features as DDoS Network Protection, but will differ in the following value-added
+## <a name="delivery"></a>Load balancing and content delivery
+This section describes networking services in Azure that help deliver applications and workloads - Load Balancer, Application Gateway, and Azure Front Door Service.
-### <a name="privatelink"></a>Azure Private Link
+### <a name="loadbalancer"></a>Load Balancer
-[Azure Private Link](../../private-link/private-link-overview.md) enables you to access Azure PaaS Services (for example, Azure Storage and SQL Database) and Azure hosted customer-owned/partner services over a private endpoint in your virtual network.
-Traffic between your virtual network and the service travels through the Microsoft backbone network. Exposing your service to the public internet is no longer necessary. You can create your own private link service in your virtual network and deliver it to your customers.
+[Azure Load Balancer](../../load-balancer/load-balancer-overview.md) provides high-performance, low-latency Layer 4 load-balancing for all UDP and TCP protocols. It manages inbound and outbound connections. You can configure public and internal load-balanced endpoints. You can define rules to map inbound connections to back-end pool destinations by using TCP and HTTP health-probing options to manage service availability.
+Azure Load Balancer is available in Standard, Regional, and Gateway SKUs.
-### <a name="firewall"></a>Azure Firewall
+The following picture shows an Internet-facing multi-tier application that utilizes both external and internal load balancers:
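A minimal sketch (hypothetical names) of a public Standard load balancer with a health probe and a load-balancing rule:

```azurecli
# Hypothetical names; a public Standard load balancer with a TCP probe and an HTTP rule on port 80.
az network public-ip create --resource-group my-rg --name my-lb-pip --sku Standard

az network lb create --resource-group my-rg --name my-lb --sku Standard \
  --public-ip-address my-lb-pip --frontend-ip-name myFrontEnd --backend-pool-name myBackEndPool

az network lb probe create --resource-group my-rg --lb-name my-lb --name myHealthProbe \
  --protocol Tcp --port 80

az network lb rule create --resource-group my-rg --lb-name my-lb --name myHTTPRule \
  --protocol Tcp --frontend-port 80 --backend-port 80 \
  --frontend-ip-name myFrontEnd --backend-pool-name myBackEndPool --probe-name myHealthProbe
```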
-[Azure Firewall](../../firewall/overview.md) is a managed, cloud-based network security service that protects your Azure Virtual Network resources. Using Azure Firewall, you can centrally create, enforce, and log application and network connectivity policies across subscriptions and virtual networks. Azure Firewall uses a static public IP address for your virtual network resources allowing outside firewalls to identify traffic originating from your virtual network.
+### <a name="applicationgateway"></a>Application Gateway
-### <a name="waf"></a>Web Application Firewall
+[Azure Application Gateway](../../application-gateway/overview.md) is a web traffic load balancer that enables you to manage traffic to your web applications. It's an Application Delivery Controller (ADC) as a service, offering various layer 7 load-balancing capabilities for your applications.
-[Azure Web Application Firewall](../../web-application-firewall/overview.md) (WAF) provides protection to your web applications from common web exploits and vulnerabilities such as SQL injection, and cross site scripting. Azure WAF provides out of box protection from OWASP top 10 vulnerabilities via managed rules. Additionally customers can also configure custom rules, which are customer managed rules to provide extra protection based on source IP range, and request attributes such as headers, cookies, form data fields or query string parameters.
+The following diagram shows URL path-based routing with Application Gateway.
-Customers can choose to deploy [Azure WAF with Application Gateway](../../web-application-firewall/ag/ag-overview.md), which provides regional protection to entities in public and private address space. Customers can also choose to deploy [Azure WAF with Front Door](../../web-application-firewall/afds/afds-overview.md) which provides protection at the network edge to public endpoints.
+### <a name="frontdoor"></a>Azure Front Door
-### <a name="nsg"></a>Network security groups
+[Azure Front Door](../../frontdoor/front-door-overview.md) enables you to define, manage, and monitor the global routing for your web traffic by optimizing for best performance and instant global failover for high availability. With Front Door, you can transform your global (multi-region) consumer and enterprise applications into robust, high-performance personalized modern applications, APIs, and content that reach a global audience with Azure.
-You can filter network traffic to and from Azure resources in an Azure virtual network with a network security group. For more information, see [Network security groups](../../virtual-network/network-security-groups-overview.md).
-### <a name="serviceendpoints"></a>Service endpoints
+## <a name="hybrid"></a>Hybrid connectivity
-[Virtual Network (VNet) service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md) extend your virtual network private address space and the identity of your VNet to the Azure services, over a direct connection. Endpoints allow you to secure your critical Azure service resources to only your virtual networks. Traffic from your VNet to the Azure service always remains on the Microsoft Azure backbone network.
+This section describes network connectivity services that provide secure communication between your on-premises network and Azure - VPN Gateway, ExpressRoute, Virtual WAN, and Peering Service.
+### <a name="vpngateway"></a>VPN Gateway
-## <a name="deliver"></a>Application delivery services
+[VPN Gateway](../../vpn-gateway/vpn-gateway-about-vpngateways.md) helps you create encrypted cross-premises connections to your virtual network from on-premises locations, or create encrypted connections between VNets. There are different configurations available for VPN Gateway connections. Some of the main features include:
-This section describes networking services in Azure that help deliver applications - Content Delivery Network, Azure Front Door Service, Traffic Manager, Load Balancer, and Application Gateway.
+* Site-to-site VPN connectivity
+* Point-to-site VPN connectivity
+* VNet-to-VNet VPN connectivity
-### <a name="frontdoor"></a>Azure Front Door
+The following diagram illustrates multiple site-to-site VPN connections to the same virtual network. To view more connection diagrams, see [VPN Gateway - design](../../vpn-gateway/design.md).
-[Azure Front Door](../../frontdoor/front-door-overview.md) enables you to define, manage, and monitor the global routing for your web traffic by optimizing for best performance and instant global failover for high availability. With Front Door, you can transform your global (multi-region) consumer and enterprise applications into robust, high-performance personalized modern applications, APIs, and content that reach a global audience with Azure.
+### <a name="expressroute"></a>ExpressRoute
-### <a name="trafficmanager"></a>Traffic Manager
+[ExpressRoute](../../expressroute/expressroute-introduction.md) enables you to extend your on-premises networks into the Microsoft cloud over a private connection facilitated by a connectivity provider. This connection is private. Traffic doesn't go over the internet. With ExpressRoute, you can establish connections to Microsoft cloud services, such as Microsoft Azure, Microsoft 365, and Dynamics 365.
-[Azure Traffic Manager](../../traffic-manager/traffic-manager-routing-methods.md). is a DNS-based traffic load balancer that enables you to distribute traffic optimally to services across global Azure regions, while providing high availability and responsiveness. Traffic Manager provides a range of traffic-routing methods to distribute traffic such as priority, weighted, performance, geographic, multi-value, or subnet.
-The following diagram shows endpoint priority-based routing with Traffic
+### <a name="virtualwan"></a>Virtual WAN
+[Azure Virtual WAN](../../virtual-wan/virtual-wan-about.md) is a networking service that brings many networking, security, and routing functionalities together to provide a single operational interface. Connectivity to Azure VNets is established by using virtual network connections. Some of the main features include:
-For more information about Traffic Manager, see [What is Azure Traffic Manager?](../../traffic-manager/traffic-manager-overview.md)
+* Branch connectivity (via connectivity automation from Virtual WAN Partner devices such as SD-WAN or VPN CPE)
+* Site-to-site VPN connectivity
+* Remote user VPN connectivity (point-to-site)
+* Private connectivity (ExpressRoute)
+* Intra-cloud connectivity (transitive connectivity for virtual networks)
+* VPN ExpressRoute inter-connectivity
+* Routing, Azure Firewall, and encryption for private connectivity
-### <a name="loadbalancer"></a>Load Balancer
-[Azure Load Balancer](../../load-balancer/load-balancer-overview.md) provides high-performance, low-latency Layer 4 load-balancing for all UDP and TCP protocols. It manages inbound and outbound connections. You can configure public and internal load-balanced endpoints. You can define rules to map inbound connections to back-end pool destinations by using TCP and HTTP health-probing options to manage service availability.
+### <a name="azurepeeringservice"></a>Peering Service
-Azure Load Balancer is available in Standard, Regional, and Gateway SKUs.
+[Azure Peering Service](../../peering-service/about.md) enhances customer connectivity to Microsoft cloud services such as Microsoft 365, Dynamics 365, software as a service (SaaS) services, Azure, or any Microsoft services accessible via the public internet.
-The following picture shows an Internet-facing multi-tier application that utilizes both external and internal load balancers:
+## <a name="security"></a>Network security
-### <a name="applicationgateway"></a>Application Gateway
+This section describes networking services in Azure that protect and monitor your network resources - Firewall Manager, Firewall, Web Application Firewall, and DDoS Protection.
-[Azure Application Gateway](../../application-gateway/overview.md) is a web traffic load balancer that enables you to manage traffic to your web applications. It's an Application Delivery Controller (ADC) as a service, offering various layer 7 load-balancing capabilities for your applications.
+### <a name="security-center"></a>Firewall Manager
-The following diagram shows url path-based routing with Application Gateway.
+[Azure Firewall Manager](../../firewall-manager/overview.md) is a security management service that provides central security policy and routing management for cloud-based security perimeters. Firewall Manager can provide security management for two different types of network architecture: secure virtual hub and hub virtual network. With Azure Firewall Manager, you can deploy multiple Azure Firewall instances across Azure regions and subscriptions, implement DDoS protection plans, manage web application firewall policies, and integrate with partner security-as-a-service for enhanced security.
-## <a name="monitor"></a>Network monitoring services
+### <a name="firewall"></a>Azure Firewall
-This section describes networking services in Azure that help monitor your network resources - Azure Network Watcher, Azure Monitor Network Insights, Azure Monitor, and ExpressRoute Monitor.
+[Azure Firewall](../../firewall/overview.md) is a managed, cloud-based network security service that protects your Azure Virtual Network resources. Using Azure Firewall, you can centrally create, enforce, and log application and network connectivity policies across subscriptions and virtual networks. Azure Firewall uses a static public IP address for your virtual network resources allowing outside firewalls to identify traffic originating from your virtual network.
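A minimal deployment sketch with hypothetical names; it assumes the `azure-firewall` CLI extension and a subnet named `AzureFirewallSubnet` in the virtual network:

```azurecli
# Hypothetical names; Azure Firewall requires an AzureFirewallSubnet in my-vnet.
az extension add --name azure-firewall

az network firewall create --resource-group my-rg --name my-firewall --location eastus

az network public-ip create --resource-group my-rg --name my-fw-pip --sku Standard

az network firewall ip-config create --resource-group my-rg --firewall-name my-firewall \
  --name fw-config --public-ip-address my-fw-pip --vnet-name my-vnet
```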
-### <a name="networkwatcher"></a>Azure Network Watcher
-[Azure Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md?toc=%2fazure%2fnetworking%2ftoc.json) provides tools to monitor, diagnose, view metrics, and enable or disable logs for resources in an Azure virtual network. For more information, see [What is Network Watcher?
+### <a name="waf"></a>Web Application Firewall
-### <a name="azuremonitor"></a>Azure Monitor
+[Azure Web Application Firewall](../../web-application-firewall/overview.md) (WAF) provides protection to your web applications from common web exploits and vulnerabilities such as SQL injection and cross-site scripting. Azure WAF provides out-of-the-box protection from OWASP top 10 vulnerabilities via managed rules. Additionally, customers can configure custom rules, which are customer-managed rules that provide extra protection based on source IP range and request attributes such as headers, cookies, form data fields, or query string parameters.
-[Azure Monitor](../../azure-monitor/overview.md?toc=%2fazure%2fnetworking%2ftoc.json) maximizes the availability and performance of your applications by delivering a comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments. It helps you understand how your applications are performing and proactively identifies issues affecting them and the resources they depend on. For more information, see [Azure Monitor Overview
+Customers can choose to deploy [Azure WAF with Application Gateway](../../web-application-firewall/ag/ag-overview.md), which provides regional protection to entities in public and private address space. Customers can also choose to deploy [Azure WAF with Front Door](../../web-application-firewall/afds/afds-overview.md) which provides protection at the network edge to public endpoints.
-### <a name="expressroutemonitor"></a>ExpressRoute Monitor
-To learn about how to view ExpressRoute circuit metrics, resource logs and alerts, see [ExpressRoute monitoring, metrics, and alerts](../../expressroute/expressroute-monitoring-metrics-alerts.md?toc=%2fazure%2fnetworking%2ftoc.json).
+### <a name="ddosprotection"></a>DDoS Protection
+
+[Azure DDoS Protection](../../ddos-protection/manage-ddos-protection.md) provides countermeasures against the most sophisticated DDoS threats. The service provides enhanced DDoS mitigation capabilities for your application and resources deployed in your virtual networks. Additionally, customers using Azure DDoS Protection have access to DDoS Rapid Response support to engage DDoS experts during an active attack.
-### <a name="insights"></a>Network Insights
+Azure DDoS Protection consists of two tiers:
-Azure Monitor for Networks [(Network Insights)](../../network-watcher/network-insights-overview.md?toc=%2fazure%2fnetworking%2ftoc.json).
- provides a comprehensive view of health and metrics for all deployed network resources, without requiring any configuration.
+- [DDoS Network Protection](../../ddos-protection/ddos-protection-overview.md#ddos-network-protection), combined with application design best practices, provides enhanced DDoS mitigation features to defend against DDoS attacks. It's automatically tuned to help protect your specific Azure resources in a virtual network.
+- [DDoS IP Protection](../../ddos-protection/ddos-protection-overview.md#ddos-ip-protection) is a pay-per-protected IP model. DDoS IP Protection contains the same core engineering features as DDoS Network Protection, but differs in the following value-added
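A minimal sketch (hypothetical names) of creating a DDoS Network Protection plan and enabling it on a virtual network:

```azurecli
# Hypothetical names; create a DDoS protection plan and enable it on an existing virtual network.
az network ddos-protection create --resource-group my-rg --name my-ddos-plan

az network vnet update --resource-group my-rg --name my-vnet \
  --ddos-protection true --ddos-protection-plan my-ddos-plan
```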
+ ## Next steps
notification-hubs Notification Hubs Gcm To Fcm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-gcm-to-fcm.md
Azure Notification Hubs is working on a solution that reduces the number of time
### How can Xamarin customers migrate to FCM v1?
-Xamarin is now deprecated. Xamarin customers should migrate to .NET MAUI, but MAUI is not currently supported by Azure Notification Hubs. MAUI apps can use the native Android Notification Hub SDK or [REST API](firebase-migration-rest.md#step-2-manage-registration-and-installation). It's recommended that Xamarin customers move away from Notification Hubs if they need FCM v1 sends.
+Xamarin is now deprecated and Xamarin customers should migrate to .NET Multi-platform App UI (.NET MAUI). While specific Azure Notification Hub SDKs aren't provided for .NET for Android, .NET for iOS, and .NET MAUI, the [.NET SDK](https://www.nuget.org/packages/Microsoft.Azure.NotificationHubs) can be used by apps built with .NET, including .NET MAUI. For more information, including sending push notifications to a .NET MAUI app via FCM v1, see [Send push notifications to .NET MAUI apps using Azure Notification Hubs via a backend service](/dotnet/maui/data-cloud/push-notifications).
## Next steps
openshift Concepts Egress Lockdown https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/concepts-egress-lockdown.md
keywords: egress lockdown, aro cluster, aro, networking, azure, openshift, red hat-+ Last updated 02/28/2022 #Customer intent: I need to understand how egress lockdown provides access to URLs and endpoints that a Red Hat OpenShift cluster needs to function efficiently.
openshift Concepts Ovn Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/concepts-ovn-kubernetes.md
Last updated 04/17/2023
-topic: how-to
+topic: overview
keywords: azure, openshift, aro, red hat, azure CLI, azure portal, ovn, ovn-kubernetes, CNI, Container Network Interface #Customer intent: I need to configure OVN-Kubernetes network provider for Azure Red Hat OpenShift clusters.
openshift Dns Forwarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/dns-forwarding.md
description: Configure DNS Forwarding for Azure Red Hat OpenShift 4
-+ Last updated 07/14/2024 # Configure DNS forwarding on an Azure Red Hat OpenShift 4 Cluster
openshift Howto Create A Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-create-a-backup.md
Title: Create an Azure Red Hat OpenShift 4 cluster application backup using Velero description: Learn how to create a backup of your Azure Red Hat OpenShift cluster applications using Velero -+ Last updated 06/22/2020
openshift Howto Create A Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-create-a-restore.md
Title: Create an Azure Red Hat OpenShift 4 cluster application restore using Velero description: Learn how to create a restore of your Azure Red Hat OpenShift cluster applications using Velero -+ Last updated 06/22/2020
openshift Howto Create A Storageclass https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-create-a-storageclass.md
Title: Create an Azure Files StorageClass on Azure Red Hat OpenShift 4 description: Learn how to create an Azure Files StorageClass on Azure Red Hat OpenShift -+ Last updated 08/28/2023
openshift Howto Create Private Cluster 4X https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-create-private-cluster-4x.md
Title: Create an Azure Red Hat OpenShift 4 private cluster description: Learn how to create an Azure Red Hat OpenShift private cluster running OpenShift 4 -+ Last updated 07/15/2024
openshift Howto Custom Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-custom-dns.md
description: Discover how to add a custom DNS resolver on all of your nodes in A
-+ Last updated 06/02/2021 #Customer intent: As an operator or developer, I need a custom DNS configured for an Azure Red Hat OpenShift cluster
openshift Howto Enable Fips Openshift https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-enable-fips-openshift.md
Title: Enable FIPS on an Azure Red Hat OpenShift cluster description: Learn how to enable FIPS on an Azure Red Hat OpenShift cluster. -+ Last updated 5/5/2022
openshift Howto Encrypt Data Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-encrypt-data-disks.md
- Title: Encrypt persistent volume claims with a customer-managed key (CMK) on Azure Red Hat OpenShift (ARO)
-description: Bring your own key (BYOK) / Customer-managed key (CMK) deploy instructions for Azure Red Hat OpenShift
- Previously updated : 02/24/2021----
-keywords: encryption, byok, aro, cmk, openshift, red hat
-# Customer intent: My security policies dictate that data at rest must be encrypted. Beyond this, the key used to encrypt data must be able to be changed at-will by a person authorized to do so.
--
-# Encrypt persistent volume claims with a customer-managed key (CMK) on Azure Red Hat OpenShift (ARO) (preview)
-
-Azure Storage uses server-side encryption (SSE) to automatically [encrypt](../storage/common/storage-service-encryption.md) your data when it is persisted to the cloud. By default, data is encrypted with Microsoft platform-managed keys. For additional control over encryption keys, you can supply your own customer-managed keys to encrypt data in your Azure Red Hat OpenShift clusters.
-
-> [!NOTE]
-> At this stage, support exists only for encrypting ARO persistent volumes with customer-managed keys. This feature is not presently available for master or worker node operating system disks.
-
-> [!IMPORTANT]
-> ARO preview features are available on a self-service, opt-in basis. Previews are provided "as is" and "as available," and they are excluded from the service-level agreements and limited warranty. ARO previews are partially covered by customer support on a best-effort basis. As such, these features are not meant for production use.
-
-## Before you begin
-This article assumes that:
-
-* You have a pre-existing ARO cluster at OpenShift version 4.4 (or greater).
-
-* You have the **oc** OpenShift command-line tool, base64 (part of coreutils) and the **az** Azure CLI installed.
-
-* You are logged in to your ARO cluster using **oc** as a global cluster-admin user (kubeadmin).
-
-* You are logged in to the Azure CLI using **az** with an account authorized to grant "Contributor" access in the same subscription as the ARO cluster.
-
-## Limitations
-
-* Availability for customer-managed key encryption is region-specific. To see the status for a specific Azure region, check [Azure regions][supported-regions].
-* If you wish to use Ultra Disks, you must first enable them on your subscription before getting started.
-
-## Declare Cluster & Encryption Variables
-You should configure the following variables to whatever is appropriate for the ARO cluster in which you wish to enable customer-managed encryption keys:
-```
-aroCluster="mycluster" # The name of the ARO cluster that you wish to enable CMK on. This may be obtained from **az aro list -o table**
-buildRG="mycluster-rg" # The name of the resource group used when you initially built the ARO cluster. This may be obtained from **az aro list -o table**
-desName="aro-des" # Your Azure Disk Encryption Set name. This must be unique in your subscription.
-vaultName="aro-keyvault-1" # Your Azure Key Vault name. This must be unique in your subscription.
-vaultKeyName="myCustomAROKey" # The name of the key to be used within your Azure Key Vault. This is the name of the key, not the actual value of the key that you will rotate.
-```
-
-## Obtain your subscription ID
-Your Azure subscription ID is used multiple times in the configuration of CMK. Obtain it and store it as a variable:
-```azurecli-interactive
-# Obtain your Azure Subscription ID and store it in a variable
-subId="$(az account list -o tsv | grep True | awk '{print $3}')"
-```
-
-## Create an Azure Key Vault instance
-An Azure Key Vault instance must be used to store your keys. Create a new Key Vault with purge protection enabled. Then, create a new key within the vault to store your own custom key:
-
-```azurecli-interactive
-# Create an Azure Key Vault resource in a supported Azure region
-az keyvault create -n $vaultName -g $buildRG --enable-purge-protection true -o table
-
-# Create the actual key within the Azure Key Vault
-az keyvault key create --vault-name $vaultName --name $vaultKeyName --protection software -o jsonc
-```
-
-## Create an Azure disk encryption set
-
-The Azure Disk Encryption Set is used as the reference point for disks in ARO. It is connected to the Azure Key Vault we created in the previous step and will pull customer-managed keys from that location.
-
-```azurecli-interactive
-# Retrieve the Key Vault Id and store it in a variable
-keyVaultId="$(az keyvault show --name $vaultName --query [id] -o tsv)"
-
-# Retrieve the Key Vault key URL and store it in a variable
-keyVaultKeyUrl="$(az keyvault key show --vault-name $vaultName --name $vaultKeyName --query [key.kid] -o tsv)"
-
-# Create an Azure disk encryption set
-az disk-encryption-set create -n $desName -g $buildRG --source-vault $keyVaultId --key-url $keyVaultKeyUrl -o table
-```
-
-## Grant the Disk Encryption Set access to Key Vault
-Use the disk encryption set we created in the prior steps and grant the disk encryption set access to Azure Key Vault:
-
-```azurecli-interactive
-# First, find the disk encryption set's Azure Application ID value.
-desIdentity="$(az disk-encryption-set show -n $desName -g $buildRG --query [identity.principalId] -o tsv)"
-
-# Next, update the Key Vault security policy settings to allow access to the disk encryption set.
-az keyvault set-policy -n $vaultName -g $buildRG --object-id $desIdentity --key-permissions wrapkey unwrapkey get -o table
-
-# Now, ensure the Disk Encryption Set can read the contents of the Azure Key Vault.
-az role assignment create --assignee $desIdentity --role Reader --scope $keyVaultId -o jsonc
-```
-
-### Obtain other IDs required for role assignments
-We need to allow the ARO cluster to use the disk encryption set to encrypt the persistent volume claims (PVCs) in the ARO cluster. To do this, we will create a new Managed Service Identity (MSI). We will also set other permissions for the ARO MSI and for the Disk Encryption Set.
-
-```azurecli-interactive
-# First, get the Azure Application ID of the service principal used in the ARO cluster.
-aroSPAppId="$(az aro show -n $aroCluster -g $buildRG -o tsv --query servicePrincipalProfile.clientId)"
-
-# Next, get the object ID of the service principal used in the ARO cluster.
-aroSPObjId="$(az ad sp show --id $aroSPAppId -o tsv --query [objectId])"
-
-# Set the name of the ARO Managed Service Identity.
-msiName="$aroCluster-msi"
-
-# Create the Managed Service Identity (MSI) required for disk encryption.
-az identity create -g $buildRG -n $msiName -o jsonc
-
-# Get the ARO Managed Service Identity Azure Application ID.
-aroMSIAppId="$(az identity show -n $msiName -g $buildRG -o tsv --query [clientId])"
-
-# Get the resource ID for the disk encryption set and the Key Vault resource group.
-buildRGResourceId="$(az group show -n $buildRG -o tsv --query [id])"
-```
-
-### Implement other role assignments required for CMK encryption
-Apply the required role assignments using the variables obtained in the previous step:
-
-```azurecli-interactive
-# Ensure the Azure Disk Encryption Set can read the contents of the Azure Key Vault.
-az role assignment create --assignee $desIdentity --role Reader --scope $keyVaultId -o jsonc
-
-# Assign the MSI AppID 'Reader' permission over the disk encryption set & Key Vault resource group.
-az role assignment create --assignee $aroMSIAppId --role Reader --scope $buildRGResourceId -o jsonc
-
-# Assign the ARO Service Principal 'Contributor' permission over the disk encryption set & Key Vault Resource Group.
-az role assignment create --assignee $aroSPObjId --role Contributor --scope $buildRGResourceId -o jsonc
-```
-
-## Create a k8s Storage Class for encrypted Premium & Ultra disks
-Generate storage classes to be used for CMK for Premium_LRS and UltraSSD_LRS disks:
-
-```azurecli-interactive
-# Premium Disks
-cat > managed-premium-encrypted-cmk.yaml<< EOF
-kind: StorageClass
-apiVersion: storage.k8s.io/v1
-metadata:
- name: managed-premium-encrypted-cmk
-provisioner: kubernetes.io/azure-disk
-parameters:
- skuname: Premium_LRS
- kind: Managed
- diskEncryptionSetID: "/subscriptions/$subId/resourceGroups/$buildRG/providers/Microsoft.Compute/diskEncryptionSets/$desName"
- resourceGroup: $buildRG
-reclaimPolicy: Delete
-allowVolumeExpansion: true
-volumeBindingMode: WaitForFirstConsumer
-EOF
-
-# Ultra Disks
-cat > managed-ultra-encrypted-cmk.yaml<< EOF
-kind: StorageClass
-apiVersion: storage.k8s.io/v1
-metadata:
- name: managed-ultra-encrypted-cmk
-provisioner: kubernetes.io/azure-disk
-parameters:
- skuname: UltraSSD_LRS
- kind: Managed
- diskEncryptionSetID: "/subscriptions/$subId/resourceGroups/$buildRG/providers/Microsoft.Compute/diskEncryptionSets/$desName"
- resourceGroup: $buildRG
- cachingmode: None
- diskIopsReadWrite: "2000" # minimum value: 2 IOPS/GiB
- diskMbpsReadWrite: "320" # minimum value: 0.032/GiB
-reclaimPolicy: Delete
-allowVolumeExpansion: true
-volumeBindingMode: WaitForFirstConsumer
-EOF
-```
-
-Next, run this deployment in your ARO cluster to apply the storage class configuration:
-
-```azurecli-interactive
-# Update cluster with the new storage classes
-oc apply -f managed-premium-encrypted-cmk.yaml
-oc apply -f managed-ultra-encrypted-cmk.yaml
-```
-
-## Test encryption with customer-managed keys (optional)
-To check if your cluster is using a customer-managed key for PVC encryption, we will create a persistent volume claim using the new storage class. The code snippet below creates a pod and mounts a persistent volume claim using Premium disks.
-```
-# Create a pod which uses a persistent volume claim referencing the new storage class
-cat > test-pvc.yaml<< EOF
-apiVersion: v1
-kind: PersistentVolumeClaim
-metadata:
- name: mypod-with-cmk-encryption-pvc
-spec:
- accessModes:
- - ReadWriteOnce
- storageClassName: managed-premium-encrypted-cmk
- resources:
- requests:
- storage: 1Gi
-
-kind: Pod
-apiVersion: v1
-metadata:
- name: mypod-with-cmk-encryption
-spec:
- containers:
- - name: mypod-with-cmk-encryption
- image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- volumeMounts:
- - mountPath: "/mnt/azure"
- name: volume
- volumes:
- - name: volume
- persistentVolumeClaim:
- claimName: mypod-with-cmk-encryption-pvc
-EOF
-```
-### Apply the test pod configuration file (optional)
-Execute the commands below to apply the test Pod configuration and return the UID of the new persistent volume claim. The UID will be used to verify the disk is encrypted using CMK.
-```azurecli-interactive
-# Apply the test pod configuration file and set the PVC UID as a variable to query in Azure later.
-pvcUid="$(oc apply -f test-pvc.yaml -o jsonpath='{.items[0].metadata.uid}')"
-
-# Determine the full Azure Disk name.
-pvName="$(oc get pv pvc-$pvcUid -o jsonpath='{.spec.azureDisk.diskName}')"
-```
-> [!NOTE]
-> On occasion there may be a slight delay when applying role assignments within Microsoft Entra ID. Depending upon the speed that these instructions are executed, the command to "Determine the full Azure Disk name" may not succeed. If this occurs, review the output of **oc describe pvc mypod-with-cmk-encryption-pvc** to ensure that the disk was successfully provisioned. If the role assignment propagation has not completed you may need to *delete* and re-*apply* the Pod & PVC YAML.
-
-### Verify PVC disk is configured with "EncryptionAtRestWithCustomerKey" (Optional)
-The Pod should create a persistent volume claim that references the CMK storage class. Running the following command will validate that the PVC has been deployed as expected:
-```azurecli-interactive
-# Describe the OpenShift cluster-wide persistent volume claims
-oc describe pvc
-
-# Verify with Azure that the disk is encrypted with a customer-managed key
-az disk show -n $pvName -g $buildRG -o json --query [encryption]
-```
-
-<!-- LINKS - external -->
-
-<!-- LINKS - internal -->
-[az-extension-add]: /cli/azure/extension#az_extension_add
-[az-extension-update]: /cli/azure/extension#az_extension_update
-[best-practices-security]: ../aks/operator-best-practices-cluster-security.md
-[byok-azure-portal]: ../storage/common/customer-managed-keys-configure-key-vault.md
-[customer-managed-keys]: ../virtual-machines/disk-encryption.md#customer-managed-keys
-[key-vault-generate]: ../key-vault/general/manage-with-cli2.md
-[supported-regions]: ../virtual-machines/disk-encryption.md#supported-regions
openshift Howto Multiple Ips https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-multiple-ips.md
description: Discover how to configure multiple IP addresses for ARO cluster loa
-+ Last updated 03/05/2024 #Customer intent: As an ARO SRE, I need to configure multiple outbound IP addresses per ARO cluster load balancers
openshift Howto Tag Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-tag-resources.md
Title: Tag ARO resources using Azure Policy description: Learn how to tag ARO resources in a cluster's resource group using Azure Policy -+ Last updated 08/30/2023
openshift Howto Update Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-update-certificates.md
Title: Update ARO cluster certificates description: Learn how to manually update Azure Red Hat OpenShift cluster certificates -+ Last updated 10/05/2022
openshift Howto Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-upgrade.md
Title: Upgrade an Azure Red Hat OpenShift cluster description: Learn how to upgrade an Azure Red Hat OpenShift cluster running OpenShift 4 -+ Last updated 6/12/2023
openshift Openshift Service Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/openshift-service-definitions.md
Title: Azure Red Hat OpenShift service definition description: Azure Red Hat OpenShift service definition -+ Last updated 04/15/2024
openshift Responsibility Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/responsibility-matrix.md
Title: Azure Red Hat OpenShift Responsibility Assignment Matrix description: Learn about the ownership of responsibilities for the operation of an Azure Red Hat OpenShift cluster -+ Last updated 4/17/2024
operator-nexus Howto Baremetal Bmm Ssh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-baremetal-bmm-ssh.md
# Manage emergency access to a bare metal machine using the `az networkcloud cluster baremetalmachinekeyset` > [!CAUTION]
-> Please note this process is used in emergency situations when all other troubleshooting options using Azure are exhausted. SSH access to these bare metal machines is restricted to users managed via this method from the specified jump host list.
+> Please note this process is used in emergency situations when all other troubleshooting options using Azure have been exhausted. Any write or edit actions executed on the BMM nodes require a ['reimage'](./howto-baremetal-functions.md) to restore Microsoft support to the impacted BMM nodes.
+> Please note that SSH access to these bare metal machines is restricted to users managed via this method from the specified jump host list.
There are rare situations where a user needs to investigate and resolve issues with a bare metal machine and all other ways via Azure are exhausted. Azure Operator Nexus provides the `az networkcloud cluster baremetalmachinekeyset` command so users can manage SSH access to these bare metal machines. On keyset creation, users are validated against Microsoft Entra ID for proper authorization by cross-referencing the User Principal Name provided for a user against the supplied Microsoft Entra Group ID `--azure-group-id <Entra Group ID>`.
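As a rough illustration only, a keyset might be created and reviewed along the following lines. Apart from `--azure-group-id`, which the article cites, the flag names and values here are assumptions; verify them against the `az networkcloud cluster baremetalmachinekeyset` reference for your CLI extension version.

```azurecli-interactive
# Hypothetical sketch: create a keyset that grants temporary SSH access.
# Only --azure-group-id comes from this article; the remaining flags and
# values are illustrative assumptions to confirm against the command reference.
az networkcloud cluster baremetalmachinekeyset create \
  --name "emergency-keyset" \
  --cluster-name "<cluster-name>" \
  --resource-group "<cluster-resource-group>" \
  --azure-group-id "<Entra Group ID>" \
  --expiration "2024-08-31T00:00:00Z" \
  --privilege-level "Standard" \
  --jump-hosts-allowed "192.0.2.10" \
  --user-list '[{"description":"on-call engineer","azureUserName":"oncall","sshPublicKey":{"keyData":"ssh-rsa AAAA..."}}]'

# List existing keysets on the cluster.
az networkcloud cluster baremetalmachinekeyset list \
  --cluster-name "<cluster-name>" \
  --resource-group "<cluster-resource-group>" -o table
```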
operator-nexus Howto Baremetal Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-baremetal-functions.md
Previously updated : 04/30/2024 Last updated : 07/19/2024
az networkcloud baremetalmachine uncordon \
## Reimage a BMM
-You can restore the runtime version on a BMM by executing the `reimage` command. This process **redeploys** the runtime image on the target BMM and executes the steps to rejoin the cluster with the same identifiers. This action doesn't affect the tenant workload files on this BMM.
+You can restore the runtime version on a BMM by executing the `reimage` command. This process **redeploys** the runtime image on the target BMM and executes the steps to rejoin the cluster with the same identifiers. This action doesn't impact the tenant workload files on this BMM. If a write or edit action was performed on the node via BMM access, this `reimage` action is required to restore Microsoft support; those changes are lost, restoring the node to its expected state.
As a best practice, make sure the BMM's workloads are drained using the [`cordon`](#make-a-bmm-unschedulable-cordon) command, with `evacuate "True"`, before executing the `reimage` command.
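As a minimal sketch of that sequence (resource names are placeholders; confirm flags against the `az networkcloud baremetalmachine` reference):

```azurecli-interactive
# Drain tenant workloads off the BMM before reimaging.
az networkcloud baremetalmachine cordon \
  --evacuate "True" \
  --name "<bare-metal-machine-name>" \
  --resource-group "<managed-resource-group>"

# Redeploy the runtime image; the node rejoins the cluster with the same identifiers.
az networkcloud baremetalmachine reimage \
  --name "<bare-metal-machine-name>" \
  --resource-group "<managed-resource-group>"
```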
operator-nexus Howto Platform Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-platform-prerequisites.md
Interface: net1, via: LLDP, RID: 1, Time: 0 day, 20:28:36
- Installation Address: - FIC/Rack/Grid Location: 4. Data provided to the operator and shared with storage array technician, which will be common to all installations:
- - Purity Code Level: 6.5.1
+ - Purity Code Level: Refer to [supported Purity versions](./reference-near-edge-storage-supported-versions.md)
- Safe Mode: Disabled - Array Time zone: UTC - DNS (Domain Name System) Server IP Address: 172.27.255.201
operator-nexus Reference Near Edge Storage Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/reference-near-edge-storage-supported-versions.md
Title: Supported Storage Software Versions in Azure Operator Nexus
-description: Learn the storage appliance software versions supported by Azure Operator Nexus versions
+description: Learn the storage appliance software versions supported by Azure Operator Nexus versions.
Last updated 05/23/2024
This document provides an overview of the storage appliance software versions supported by Azure Operator Nexus. The document also covers the version support lifecycle and end of life for each version of Storage Appliance Software.
-Minor version releases include new features and improvements. Patch releases are made available frequently and are intended for critical bug fixes within a minor version. Patch releases include fixes for security vulnerabilities or major bugs.
- Azure Nexus supports Pure x70r3 and x70r4, each deployed with a version of the Purity Operating System (PurityOS) that is compatible with the Azure Nexus version.
-PurityOS uses the standard [Semantic Versioning](https://semver.org/) versioning scheme for each version:
+PurityOS uses the standard [Semantic Versioning](https://semver.org/) scheme for each version:
```bash [major].[minor].[patch]
Examples:
Each number in the version indicates general compatibility with the previous version: * **Major version numbers** change when breaking changes to the API might be introduced
-* **Minor version numbers** change when functionality updates are made that are backwards compatible to the other minor releases.
-* **Patch version numbers** change when backwards-compatible bug fixes are made.
+* **Minor version numbers** change when functionality updates are made that are backwards compatible with other minor releases. These versions include new features and improvements.
+* **Patch version numbers** change when backwards-compatible bug fixes are made. Patch releases are made available frequently and are intended for critical bug fixes within a minor version, such as fixes for security vulnerabilities or major bugs.
+
+## Version support guidelines
+- All changes to version support and any version-specific upgrade instructions will be communicated in release notes.
+- Nexus will only support Long Term Support (LTS) storage versions. Purity LTS versions have an odd-numbered minor version, such as 6.1.x or 6.5.x.
+- Nexus will support up to two LTS versions at any time.
+- Support shall be provided for all patch releases documented in Nexus public documentation, which means that Nexus will handle and resolve issue tickets where the storage appliance is running a supported release version. These tickets may require a fix to Nexus software or be referred to the storage vendor support team, depending on the specific details. If a fix requires inclusion of a new Pure patch release, it will be appropriately tested and documented.
+- Each Pure LTS release listed as supported is tested equally with each new Nexus release to ensure comprehensive compatibility.
++
+## Release process
+1. **End of support:**
+ - Nexus will announce end of support for the oldest supported LTS version via release notes once the timeline for the new LTS version is available.
+ - Nexus will stop supporting the oldest supported LTS version shortly before adding support for a new LTS version (that is, before the next LTS version is ready for testing in labs).
+2. **Introduction:** Nexus typically declares support for a new LTS release once the first patch release is available, to benefit from any critical fixes. Release cadence:
+ - By default, the introduction of any new release support (LTS/patch) will be combined with Nexus runtime release.
+ - Introduction of a new LTS release may, in rare cases, require a specific upgrade ordering and a timeline.
+ - Depending on severity of Common Vulnerabilities & Exposures (CVE) fixes or blocker issues, a Purity version may be verified and introduced outside of a runtime release.
+
+## Supported Storage Software Versions (Purity)
+
+| PurityOS version | Support added in | End of support | Remarks |
+|-||-||
+| 6.1.x | Year 2021 | Jul 2024 | End of support in Nexus 2406.2 |
+| 6.5.1 | Nexus 2403.x | Dec 2025* | |
+| 6.5.4 | Nexus 2404.x | Dec 2025* | |
+| 6.5.6 | Nexus 2406.2 | Dec 2025* | Aligned with Nexus runtime release |
-We strongly recommend staying up to date with the latest available patches. For example, if your production cluster is on **`6.5.1`**, and **`6.5.4`** is the latest available patch version available for the *6.5* series. You should upgrade to **`6.5.4`** as soon as possible to ensure your cluster is fully patched and supported.
+> [!IMPORTANT]
+> \* At most, two LTS versions are supported at a time. The dates are tentative and assume that another LTS version will be released by this timeframe, at which point this version is deprecated per our support guidelines.
-## Supported Storage Software Versions
+## Supported Pure HW Controller versions
-| PurityOS version | Nexus GA | End of support |
-|-||-|
-| 6.1.x | Year 2021 | Jul 2024 |
-| 6.5.1 | Nexus 2403.x | Dec 2025 |
-| 6.5.4 | Nexus 2404.x | Dec 2025 |
+| Pure HW Controller version | Support added in |
+|-|-|
+| R3 | Year 2021 |
+| R4 | Nexus 2404.x |
## FAQ
-### How does Microsoft notify me of new Kubernetes versions?
+### How does Microsoft notify me of a new supported Purity version?
-This document is updated periodically with planned dates of the new Storage Software versions supported.
+This document is updated periodically with the planned dates for newly supported Storage software versions and for retiring versions. All new versions and end-of-support announcements are also communicated in release notes.
-### What happens when a version reaches end of support?
+### What happens when a version reaches the end of support?
-When a version reaches end of support, it will no longer receive patches or updates. We recommend upgrading to a supported version as soon as possible.
+Only the documented versions receive appropriate support. When a version reaches the end of support, it will no longer receive patches or updates. We recommend upgrading to a supported version as soon as possible.
### What happens if I don't upgrade my storage appliance software?
-If you don't upgrade your storage appliance software, you continue to receive support for the software version you're running until the end of the support period. After that, you'll no longer receive support for your storage appliance. You need to upgrade your cluster to a supported version to continue receiving support.
+If you don't upgrade your storage appliance software, you continue to receive support for the software version you're running until the end of the support period. After that, you'll no longer receive support for your storage appliance. You need to upgrade your storage appliance to a supported version to continue receiving support.
### What does 'Outside of Support' mean? 'Outside of Support' means that: * The version you're running is outside of the supported versions list.
-* You're asked to upgrade the storage appliance software to a supported version when requesting support.
+Per the guidance, any support tickets reported with unsupported versions won't be triaged until the customer upgrades the storage appliance software to a supported version.
operator-nexus Reference Supported Software Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/reference-supported-software-versions.md
Title: Supported software versions in Azure Operator Nexus description: Learn about supported software versions in Azure Operator Nexus. Previously updated : 05/06/2024 Last updated : 07/18/2024
# Supported Kubernetes versions in Azure Operator Nexus
-This document provides the list of software versioning supported in Release 2404.2 of Azure Operator Nexus.
+This document provides the list of software versioning supported in Release 2407.1 of Azure Operator Nexus.
## Supported software versions
This document provides the list of software versioning supported in Release 2404
| | MD5 checksum: 53899348f586d95effb8ab097837d32d | | | | **4.31.2FX-NX** | 3.0.0 | | | MD5 Checksum: e5ee34d50149749c177bbeef3d10e363 | |
-| **Instance Cluster AKS** | 1.28.3 | |
-| **Azure Linux** | 2.0.20240301 | |
-| **Purity** | 6.5.1, 6.5.4 | |
+| **Instance Cluster AKS** | 1.29.4 | |
+| **Azure Linux** | 2.0.20240425 | |
+| **Purity** | 6.1.x, 6.5.1, 6.5.4, 6.5.6 | |
### Supported K8S versions Versioning schema used for the Operator Nexus Kubernetes service, including the supported Kubernetes versions, are listed at [Supported Kubernetes versions in Azure Operator Nexus Kubernetes service](./reference-nexus-kubernetes-cluster-supported-versions.md).
sap Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/get-started.md
Previously updated : 07/18/2024 Last updated : 07/19/2024
In the SAP workload documentation space, you can find the following areas:
## Change Log
+- July 19, 2024: Change in [Setting up Pacemaker on RHEL in Azure](./high-availability-guide-rhel-pacemaker.md) to add a statement about clusters spanning virtual networks (VNets)/subnets
- July 18, 2024: Add note about metadata heavy workload to Azure Premium Files in [Azure Storage types for SAP workload](./planning-guide-storage.md) - June 26, 2024: Adapt [Azure Storage types for SAP workload](./planning-guide-storage.md) to latest features, like snapshot capabilities for Premium SSD v2 and Ultra disk. Adapt ANF to support of mix of NFS and block storage between /hana/data and /hana/log - June 26, 2024: Fix wrong memory stated for some VMs in [SAP HANA Azure virtual machine Premium SSD storage configurations](./hana-vm-premium-ssd-v1.md) and [SAP HANA Azure virtual machine Premium SSD v2 storage configurations](./hana-vm-premium-ssd-v2.md)
sap High Availability Guide Rhel Pacemaker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-pacemaker.md
vm-windows Previously updated : 10/09/2023 Last updated : 07/19/2024
Read the following SAP Notes and papers first:
> > The only supported fencing mechanism for Pacemaker RHEL clusters on Azure is an Azure fence agent.
+> [!IMPORTANT]
+> Pacemaker clusters that span multiple virtual networks (VNets)/subnets aren't covered by standard support policies.
+ The following items are prefixed with: - **[A]**: Applicable to all nodes
sap Rise Integration Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/rise-integration-security.md
Single sign-On (SSO) is configured for many SAP environments. With SAP workloads
SSO against Active Directory (AD) of your Windows domain for ECS/RISE managed SAP environment, with SAP SSO Secure Login Client requires AD integration for end user devices. With SAP RISE, any Windows systems are not integrated with the customer's active directory domain. The domain integration isn't necessary for SSO with AD/Kerberos as the domain security token is read on the client device and exchanged securely with SAP system. Contact SAP if you require any changes to integrate AD based SSO or using third party products other than SAP SSO Secure Login Client, as some configuration on RISE managed systems might be required.
+## Copilot for Security with SAP RISE
+
+[Copilot for Security](/copilot/security/microsoft-security-copilot) is a generative AI security product that empowers security and IT professionals to respond to cyber threats, process signals, and assess risk exposure at the speed and scale of AI. It has its own [portal](https://securitycopilot.microsoft.com/) and embedded experiences in Microsoft Defender XDR, Microsoft Sentinel, and Intune.
+
+It can be used with any data source that Defender XDR and Sentinel support, including SAP RISE/ECS. The following image shows the standalone experience.
+
+ This image shows an example of the Microsoft Copilot for Security experience using a prompt to investigate an SAP incident.
+
+In addition, the Copilot for Security experience is embedded in the Defender XDR portal. Along with an AI-generated summary, recommendations and remediation steps, such as a password reset for SAP, are provided out of the box. Learn more about automatic SAP attack disruption [here](../../sentinel/sap/deployment-attack-disrupt.md).
+
+ This image shows an example of Microsoft Copilot for Security analyzing an incident detected on SAP RISE through Defender XDR. Data ingestion is done through the Microsoft Sentinel solution for SAP applications.
+ ## Microsoft Sentinel with SAP RISE
-The [SAP RISE certified](https://www.sap.com/dmc/exp/2013_09_adpd/enEN/#/solutions?id=s:33db1376-91ae-4f36-a435-aafa892a88d8) Microsoft Sentinel solution for SAP applications allows you to monitor, detect, and respond to suspicious activities. Microsoft Sentinel guards your critical data against sophisticated cyberattacks for SAP systems hosted on Azure, other clouds, or on-premises infrastructure.
+The [SAP RISE certified](https://www.sap.com/dmc/exp/2013_09_adpd/enEN/#/solutions?id=s:33db1376-91ae-4f36-a435-aafa892a88d8) Microsoft Sentinel solution for SAP applications allows you to monitor, detect, and respond to suspicious activities. Microsoft Sentinel guards your critical data against sophisticated cyberattacks for SAP systems hosted on Azure, other clouds, or on-premises infrastructure. [Microsoft Sentinel Solution for SAP BTP](../../sentinel/sap/sap-btp-solution-overview.md) expands that coverage to SAP Business Technology Platform (BTP).
The solution allows you to gain visibility to user activities on SAP RISE/ECS and the SAP business logic layers and apply Sentinel's built-in content. - Use a single console to monitor all your enterprise estate including SAP instances in SAP RISE/ECS on Azure and other clouds, SAP Azure native and on-premises estate - Detect and automatically respond to threats: detect suspicious activity including privilege escalation, unauthorized changes, sensitive transactions, data exfiltration and more with out-of-the-box detection capabilities - Correlate SAP activity with other signals: more accurately detect SAP threats by cross-correlating across endpoints, Microsoft Entra data and more - Customize based on your needs - build your own detections to monitor sensitive transactions and other business risks-- Visualize the data with built-in workbooks
+- Visualize the data with [built-in workbooks](../../sentinel/sap/sap-audit-log-workbook.md)
- This diagram shows an example of Microsoft Sentinel connected through an intermediary VM or container to SAP managed SAP system. The intermediary VM or container runs in customer's own subscription with configured SAP data connector agent.
+ This diagram shows an example of Microsoft Sentinel connected through an intermediary VM or container to an SAP-managed SAP system. The intermediary VM or container runs in the customer's own subscription with a configured SAP data connector agent. Connection to SAP Business Technology Platform (BTP) uses SAP's public APIs for the Audit Log Management Service.
:::image-end::: For SAP RISE/ECS, the Microsoft Sentinel solution must be deployed in customer's Azure subscription. All parts of the Sentinel solution are managed by customer and not by SAP. Private network connectivity from customer's vnet is needed to reach the SAP landscapes managed by SAP RISE/ECS. Typically, this connection is over the established vnet peering or through alternatives described in this document.
To enable the solution, only an authorized RFC user is required and nothing need
- Authentication methods supported in SAP RISE: SAP username and password or X509/SNC certificates - Only RFC based connections are possible currently with SAP RISE/ECS environments
-Note for running Microsoft Sentinel in an SAP RISE/ECS environment:
-- The following log fields/source require an SAP transport change request: Client IP address information from SAP security audit log, DB table logs (preview), spool output log. Sentinel's built-in content (detections, workbooks and playbooks) provides extensive coverage and correlation without those log sources.-- SAP infrastructure and operating system logs aren't available to Sentinel in RISE, including VMs running SAP, SAPControl data sources, network resources placed within ECS. SAP monitors elements of the Azure infrastructure and operation system independently.
+> [!IMPORTANT]
+>
+> - Running Microsoft Sentinel in an SAP RISE/ECS environment requires importing an SAP transport change request for the following log fields/sources: client IP address information from the SAP security audit log, DB table logs (preview), and spool output log. Sentinel's built-in content (detections, workbooks, and playbooks) provides extensive coverage and correlation without those log sources.
+> - SAP infrastructure and operating system logs aren't available to Sentinel in RISE, due to the shared responsibility model.
+
+### Automatic response with Sentinel's SOAR capabilities
Use prebuilt playbooks for security, orchestration, automation and response capabilities (SOAR) to react to threats quickly. A popular first scenario is SAP user blocking with intervention option from Microsoft Teams. The integration pattern can be applied to any incident type and target service spanning towards SAP Business Technology Platform (BTP) or Microsoft Entra ID with regard to reducing the attack surface. For more information on Microsoft Sentinel and SOAR for SAP, see the blog series [From zero to hero security coverage with Microsoft Sentinel for your critical SAP security signals](https://blogs.sap.com/2023/05/22/from-zero-to-hero-security-coverage-with-microsoft-sentinel-for-your-critical-sap-security-signals-blog-series/). This image shows an SAP incident detected by Sentinel offering the option to block the suspicious user on the SAP ERP, SAP Business Technology Platform or Microsoft Entra ID. :::image-end:::
Check out the documentation:
- [Integrating Azure with SAP RISE overview](./rise-integration.md) - [Network connectivity options in Azure with SAP RISE](./rise-integration-network.md) - [Integrating Azure services with SAP RISE](./rise-integration-services.md)-- [Deploy Microsoft Sentinel solution for SAP® applicationsE](../../sentinel/sap/deployment-overview.md)
+- [Deploy Microsoft Sentinel solution for SAP® applications](../../sentinel/sap/deployment-overview.md)
+- [Deploy Microsoft Sentinel Solution for SAP® BTP](../../sentinel/sap/deploy-sap-btp-solution.md)
scheduler Migrate From Scheduler To Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/scheduler/migrate-from-scheduler-to-logic-apps.md
ms.suite: infrastructure-services -+ Previously updated : 02/15/2022 Last updated : 07/11/2024 # Migrate Azure Scheduler jobs to Azure Logic Apps > [!IMPORTANT]
+>
> [Azure Logic Apps](../logic-apps/logic-apps-overview.md) has replaced Azure Scheduler, which is fully > retired since January 31, 2022. Please migrate your Azure Scheduler jobs by recreating them as workflows > in Azure Logic Apps following the steps in this article. Azure Scheduler is no longer available in the Azure portal. > The [Azure Scheduler REST API](/rest/api/scheduler) and [Azure Scheduler PowerShell cmdlets](scheduler-powershell-reference.md) no longer work.
-This article shows how you can schedule one-time and recurring jobs by creating automated workflows with Azure Logic Apps, rather than with Azure Scheduler. When you create scheduled jobs with Azure Logic Apps, you get the following benefits:
+This guide shows how to schedule one-time and recurring jobs by creating automated workflows with Azure Logic Apps, rather than with Azure Scheduler. When you create scheduled jobs with Azure Logic Apps, you get the following benefits:
-* Build your job by using a visual designer and [ready-to-use connectors](/connectors/connector-reference/connector-reference-logicapps-connectors) from hundreds of services, such as Azure Blob Storage, Azure Service Bus, Office 365 Outlook, and SAP.
+* Build your job by using a visual designer and select from [1000+ ready-to-use connectors](/connectors/connector-reference/connector-reference-logicapps-connectors), such as Azure Blob Storage, Azure Service Bus, Office 365 Outlook, SAP, and more.
* Manage each scheduled workflow as a first-class Azure resource. You don't have to worry about the concept of a *job collection* because each logic app is an individual Azure resource.
This article shows how you can schedule one-time and recurring jobs by creating
* Set schedules that support time zones and automatically adjust to daylight savings time (DST).
-To learn more, see [What is Azure Logic Apps?](../logic-apps/logic-apps-overview.md) or try creating your first logic app workflow by following the [Quickstart: Create an example Consumption logic app workflow in multi-tenant Azure Logic Apps](../logic-apps/quickstart-create-example-consumption-workflow.md).
+For more information, see [What is Azure Logic Apps?](../logic-apps/logic-apps-overview.md) or try creating your first logic app workflow by following either of these quickstarts:
+
+* [Create an example Consumption logic app workflow in multitenant Azure Logic Apps](../logic-apps/quickstart-create-example-consumption-workflow.md)
+
+* [Create an example Standard logic app workflow in single-tenant Azure Logic Apps](../logic-apps/create-single-tenant-workflows-azure-portal.md)
## Prerequisites * An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* To trigger your logic app workflow by sending HTTP requests, use a tool such as the [Postman desktop app](https://www.getpostman.com/apps).
## Migrate by using a script Each Scheduler job is unique, so no one-size-fits-all tool exists for migrating Azure Scheduler jobs to Azure Logic Apps. However, you can [edit this script](https://github.com/Azure/logicapps/tree/master/scripts/scheduler-migration) to meet your needs.
-## Schedule one-time jobs
+## Schedule a one-time job
You can run multiple one-time jobs by creating just a single logic app workflow.
-1. In the [Azure portal](https://portal.azure.com), create a blank logic app workflow using the designer.
-
- For the basic steps, follow [Create an example Consumption logic app workflow](../logic-apps/quickstart-create-example-consumption-workflow.md).
+1. In the [Azure portal](https://portal.azure.com), create a logic app resource and blank workflow.
-1. In the designer search box, enter **when a http request** to find the **Request** trigger. From the **Triggers** list, select the trigger named **When a HTTP request is received**.
+1. [Follow these general steps to add the **Request** trigger named **When a HTTP request is received**](../logic-apps/create-workflow-with-trigger-or-action.md#add-trigger).
- ![Screenshot showing the Azure portal and the workflow designer with the "Request" trigger selected.](./media/migrate-from-scheduler-to-logic-apps/request-trigger.png)
-
-1. For the Request trigger, you can optionally provide a JSON schema, which helps the workflow designer understand the structure for the inputs included in the inbound call to the Request trigger and makes the outputs easier for you to select later in your workflow.
+1. In the **Request** trigger, you can optionally provide a JSON schema, which helps the workflow designer understand the structure for the inputs included in the inbound call to the **Request** trigger and makes the outputs easier for you to select later in your workflow.
In the **Request Body JSON Schema** box, enter the schema, for example:
You can run multiple one-time jobs by creating just a single logic app workflow.
If you don't have a schema, but you have a sample payload in JSON format, you can generate a schema from that payload.
- 1. In the Request trigger, select **Use sample payload to generate schema**.
+ 1. In the **Request** trigger, select **Use sample payload to generate schema**.
1. Under **Enter or paste a sample JSON payload**, provide your sample payload, and select **Done**, for example:
You can run multiple one-time jobs by creating just a single logic app workflow.
} ```
-1. Under the trigger, select **Next step**.
-
-1. In the designer search box, enter **delay until**. From the **Actions** list, select the action named **Delay until**.
+1. Under the trigger, [add the **Schedule** action named **Delay until**](../logic-apps/create-workflow-with-trigger-or-action.md#add-action).
- This action pauses your logic app workflow until a specified date and time, for example:
+ This action pauses workflow execution until a specified date and time, for example:
![Screenshot showing the "Delay until" action.](./media/migrate-from-scheduler-to-logic-apps/delay-until.png)
-1. Enter the timestamp for when you want to start the logic app's workflow.
+1. Enter the timestamp for when you want to start the workflow.
- When you click inside the **Timestamp** box, the dynamic content list appears so that you can optionally select an output from the trigger.
+ 1. Select inside the **Timestamp** box, and then select the dynamic content list option (lightning icon), which lets you select an output from the previous operation. In this example, that operation is the **Request** trigger.
![Screenshot showing the "Delay until" action details with the dynamic content list open and the "runAt" property selected.](./media/migrate-from-scheduler-to-logic-apps/delay-until-details.png)
-1. Add any other actions you want to run by selecting from [hundreds of ready-to-use connectors](/connectors/connector-reference/connector-reference-logicapps-connectors).
+1. Add any other actions you want to run by selecting from the [1000+ ready-to-use connectors](/connectors/connector-reference/connector-reference-logicapps-connectors).
- For example, you can include an HTTP action that sends a request to a URL or actions that work with Storage Queues, Service Bus queues, or Service Bus topics:
+ For example, you can include an **HTTP** action that sends a request to a URL or actions that work with Storage Queues, Service Bus queues, or Service Bus topics:
![Screenshot showing the "Delay until" action followed by an H T T P action with a POST method.](./media/migrate-from-scheduler-to-logic-apps/request-http-action.png)
-1. When you're done, save your logic app workflow.
-
- ![Screenshot showing the designer toolbar with "Save" selected.](./media/migrate-from-scheduler-to-logic-apps/save-logic-app.png)
+1. When you're done, on the designer toolbar, select **Save**.
- When you save your logic app workflow for the first time, the endpoint URL for your logic app workflow's Request trigger appears in the **HTTP POST URL** box. To trigger your logic app workflow and send inputs to your workflow for processing, send a request to the generated URL as the call destination, for example:
+ When you save your workflow for the first time, the endpoint URL for your workflow's **Request** trigger is generated and appears in the **HTTP POST URL** box, for example:
![Screenshot showing the generated Request trigger endpoint URL.](./media/migrate-from-scheduler-to-logic-apps/request-endpoint-url.png)
-1. Copy and save the endpoint URL so that you can later send a manual request to trigger your logic app workflow.
+ To manually trigger your workflow with the inputs that you want the workflow to process, you can send an HTTP request to the endpoint URL.
-## Start a one-time job
+1. Copy and save the endpoint URL so that you can test your workflow.
-To manually run or trigger a one-time job, send a call to the endpoint URL for your logic app's Request trigger. In this call, specify the input or payload to send, which you might have described earlier by specifying a schema.
+## Test your workflow
-For example, using the Postman app, you can create a POST request with the settings similar to this sample, and then select **Send** to make the request.
+To manually trigger your workflow, send an HTTP request to the endpoint URL in your workflow's **Request** trigger. With this request, include the input or payload to send, which you might have described earlier by specifying a schema. You can send this request by using any HTTP request tool, following that tool's instructions.
+
+For example, you can create and send an HTTP request that uses the method expected by the **Request** trigger, as described in the following table and shown in the sketch after it:
| Request method | URL | Body | Headers | |-|--|||
-| **POST** | <*endpoint-URL*> | **raw** <p>**JSON(application/json)** <p>In the **raw** box, enter the payload that you want to send in the request. <p>**Note**: This setting automatically configures the **Headers** values. | **Key**: Content-Type <br>**Value**: application/json |
-|||||
-
-![Screenshot showing the request to send for manually triggering your logic app workflow.](./media/migrate-from-scheduler-to-logic-apps/postman-send-post-request.png)
-
-After you send the call, the response from your logic app workflow appears under the **raw** box on the **Body** tab.
+| **POST** | <*endpoint-URL*> | **raw** <p>**JSON(application/json)** <br><br>In the **raw** box, enter the payload that you want to send in the request. **Note**: This setting automatically configures the **Headers** values. | **Key**: Content-Type <br>**Value**: application/json |
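As an illustration, a command-line tool such as curl can send that request. The endpoint URL and the body are placeholders; `runAt` follows the sample payload shown earlier, and you should adjust the body to whatever your trigger's schema expects.

```bash
# Illustrative only: trigger the workflow and show response headers (-i),
# which include the x-ms-workflow-run-id value for the new run.
curl -i -X POST "<endpoint-URL>" \
  -H "Content-Type: application/json" \
  -d '{"runAt": "2024-08-01T10:00:00Z"}'
```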
<a name="workflow-run-id"></a>-
-> [!IMPORTANT]
->
-> If you want to cancel the job later, select the **Headers** tab.
-> Find and copy the **x-ms-workflow-run-id** header value in the response.
->
-> ![Screenshot showing the response.](./media/migrate-from-scheduler-to-logic-apps/postman-response.png)
+<a name="cancel-one-time-job"></a>
## Cancel a one-time job
-In Azure Logic Apps, each one-time job executes as a single workflow run instance. To cancel a one-time job, you can use [Workflow Runs - Cancel](/rest/api/logic/workflowruns/cancel) in the Azure Logic Apps REST API. When you send a call to the trigger, provide the [workflow run ID](#workflow-run-id).
+In Azure Logic Apps, each one-time job executes as a single workflow run instance. To manually cancel a one-time job, find and copy the **x-ms-workflow-run-id** header value returned in the workflow's response, and then cancel that run by calling one of the following REST APIs with the workflow run ID, based on your logic app type (a sketch follows this list):
-## Schedule recurring jobs
+- Consumption workflows: [Workflow Runs - Cancel](/rest/api/logic/workflow-runs/cancel)
-1. In the [Azure portal](https://portal.azure.com), create a blank logic app workflow in the designer.
+- Standard workflows: [Workflow Runs - Cancel](/rest/api/appservice/workflow-runs/cancel)
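For a Consumption logic app, a sketch of the cancel call with the Azure CLI might look like the following. The resource names are placeholders, and the `api-version` value is an assumption; verify the exact request format against the Workflow Runs - Cancel reference.

```bash
# Hypothetical sketch: cancel a Consumption workflow run by its run ID
# (the x-ms-workflow-run-id value captured when the workflow was triggered).
az rest --method post \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Logic/workflows/<logic-app-name>/runs/<workflow-run-id>/cancel?api-version=2016-06-01"
```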
- For the basic steps, follow [Create an example Consumption logic app workflow in multi-tenant Azure Logic Apps](../logic-apps/quickstart-create-example-consumption-workflow.md).
+## Schedule recurring jobs
-1. In the designer search box, enter **recurrence**. From the **Triggers** list, select the trigger named **Recurrence**.
+1. In the [Azure portal](https://portal.azure.com), create a logic app resource and blank workflow.
- ![Screenshot showing the Azure portal and workflow designer with the "Recurrence" trigger selected.](./media/migrate-from-scheduler-to-logic-apps/recurrence-trigger.png)
+1. [Follow these general steps to add the **Schedule** trigger named **Recurrence**](../logic-apps/create-workflow-with-trigger-or-action.md#add-trigger).
1. If you want, set up a more advanced schedule.
- ![Screenshot showing the "Recurrence" trigger with an advanced schedule.](./media/migrate-from-scheduler-to-logic-apps/recurrence-advanced-schedule.png)
+ For more information about advanced scheduling options, see [Create and run recurring tasks and workflows with Azure Logic Apps](../connectors/connectors-native-recurrence.md).
- For more information about advanced scheduling options, review [Create and run recurring tasks and workflows with Azure Logic Apps](../connectors/connectors-native-recurrence.md).
+1. Add any other actions you want to run by selecting from the [1000+ ready-to-use connectors](/connectors/connector-reference/connector-reference-logicapps-connectors).
-1. Add other actions you want by selecting from [hundreds of ready-to-use connectors](/connectors/connector-reference/connector-reference-logicapps-connectors). Under the trigger, select **Next step**. Find and select the actions you want.
-
- For example, you can include an HTTP action that sends a request to a URL, or actions that work with Storage Queues, Service Bus queues, or Service Bus topics:
+ For example, you can include an **HTTP** action that sends a request to a URL or actions that work with Storage Queues, Service Bus queues, or Service Bus topics:
![Screenshot showing an H T T P action with a POST method.](./media/migrate-from-scheduler-to-logic-apps/recurrence-http-action.png)
-1. When you're done, save your logic app workflow.
-
- ![Screenshot showing the designer toolbar with the "Save" button selected.](./media/migrate-from-scheduler-to-logic-apps/save-logic-app.png)
+1. When you're done, on the designer toolbar, select **Save**.
## Advanced setup
The following sections describe other ways that you can customize your jobs.
### Retry policy
-To control the way that an action tries to rerun in your logic app workflow when intermittent failures happen, you can set the [retry policy](../logic-apps/logic-apps-exception-handling.md#retry-policies) in each action's settings, for example:
-
-1. Open the action's ellipses (**...**) menu, and select **Settings**.
-
- ![Screenshot showing an action's "Settings" selected.](./media/migrate-from-scheduler-to-logic-apps/action-settings.png)
-
-1. Select the retry policy that you want. For more information about each policy, review [Retry policies](../logic-apps/logic-apps-exception-handling.md#retry-policies).
-
- ![Screenshot showing the selected "Default" retry policy.](./media/migrate-from-scheduler-to-logic-apps/retry-policy.png)
+To control the way that an action tries to rerun in your workflow when intermittent failures happen, you can set the [retry policy](../logic-apps/logic-apps-exception-handling.md#retry-policies) in each action's settings.
## Handle exceptions and errors
-In Azure Scheduler, if the default action fails to run, you can run an alterative action that addresses the error condition. In Azure Logic Apps, you can also perform the same task.
-
-1. In the workflow designer, above the action that you want to handle, move your pointer over the arrow between steps, and select **Add a parallel branch**.
+In Azure Scheduler, if the default action fails to run, you can run an alternative action that addresses the error condition. In Azure Logic Apps, you can also perform the same task. For more information about exception handling in Azure Logic Apps, see [Handle errors and exceptions - RunAfter property](../logic-apps/logic-apps-exception-handling.md#control-run-after-behavior).
- ![Screenshot showing "Add a parallel branch" selected.](./media/migrate-from-scheduler-to-logic-apps/add-parallel-branch.png)
+1. In the designer, above the action that you want to handle, [add a parallel branch](../logic-apps/logic-apps-control-flow-branches.md).
1. Find and select the action you want to run instead as the alternative action.
- ![Screenshot showing the selected parallel action.](./media/migrate-from-scheduler-to-logic-apps/add-parallel-action.png)
-
-1. On the alternative action, open the ellipses (**...**) menu, and select **Configure run after**.
-
- ![Screenshot showing "Configure run after" selected.](./media/migrate-from-scheduler-to-logic-apps/configure-run-after.png)
+1. On the alternative action, find and select the **Configure run after** option.
1. Clear the box for the **is successful** property. Select the properties named **has failed**, **is skipped**, and **has timed out**.
- ![Screenshot showing the selected "run after" properties.](./media/migrate-from-scheduler-to-logic-apps/select-run-after-properties.png)
- 1. When you're finished, select **Done**.
-To learn more about exception handling, see [Handle errors and exceptions - RunAfter property](../logic-apps/logic-apps-exception-handling.md#control-run-after-behavior).
- ## FAQ <a name="retire-date"></a>
-**Q**: When is Azure Scheduler retiring? <br>
+**Q**: When did Azure Scheduler retire? <br>
**A**: Azure Scheduler fully retired on January 31, 2022. For general updates, see [Azure updates - Scheduler](https://azure.microsoft.com/updates/?product=scheduler). **Q**: What happens to my job collections and jobs after Azure Scheduler retires? <br> **A**: All Azure Scheduler job collections and jobs stop running and are deleted from the system. **Q**: Do I have to back up or perform any other tasks before migrating my Azure Scheduler jobs to Azure Logic Apps? <br>
-**A**: As a best practice, always back up your work. Check that the logic app workflows that you created are running as expected before deleting or disabling your Azure Scheduler jobs.
+**A**: As a best practice, always back up your work. Check that the workflows you created are running as expected before deleting or disabling your Azure Scheduler jobs.
-**Q**: What will happen to my scheduled Azure Web Jobs from Azure Scheduler? <br>
+**Q**: What happens to my scheduled Azure Web Jobs from Azure Scheduler? <br>
**A**: Web Jobs that use this way of [Scheduling Web Jobs](https://github.com/projectkudu/kudu/wiki/WebJobs#scheduling-a-triggered-webjob) aren't internally using Azure Scheduler: "For the schedule to work it requires the website to be configured as Always On and is not an Azure Scheduler but an internal implementation of a scheduler." The only affected Web Jobs are those that specifically use Azure Scheduler to run the Web Job using the Web Jobs API. You can trigger these WebJobs from a logic app workflow by using the **HTTP** action. **Q**: Is there a tool that can help me migrate my jobs from Azure Scheduler to Azure Logic Apps? <br>
If your Azure subscription has a paid support plan, you can create a technical s
| **Issue type** | **Technical** | | **Subscription** | <*your-Azure-subscription*> | | **Service** | Under **Monitoring & Management**, select **Scheduler**. If you can't find **Scheduler**, select **All services** first. |
- |||
1. Select the support option that you want. If you have a paid support plan, select **Next**. ## Next steps
-* [Create an example Consumption logic app workflow in multi-tenant Azure Logic Apps](../logic-apps/quickstart-create-example-consumption-workflow.md)
+* [Create an example Consumption logic app workflow in multitenant Azure Logic Apps](../logic-apps/quickstart-create-example-consumption-workflow.md)
search Search Api Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-api-migration.md
- ignite-2023 - build-2024 Previously updated : 06/24/2024 Last updated : 07/19/2024 # Upgrade to the latest REST API in Azure AI Search
Azure AI Search breaks backward compatibility as a last resort. Upgrade is neces
+ Your code persists API requests and tries to resend them to the new API version. For example, this might happen if your application persists continuation tokens returned from the Search API (for more information, look for `@search.nextPageParameters` in the [Search API Reference](/rest/api/searchservice/Search-Documents)).
+## How to upgrade
+
+In your application code that makes direct calls to the REST APIs, update the `api-version` query parameter on each request. For more information about structuring a REST call, see [Quickstart: using REST](search-get-started-rest.md#set-up-visual-studio-code).
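For example, a direct REST call carries the version in the query string. The service name, index name, key, and version value below are placeholders:

```bash
# Illustrative only: the api-version query parameter selects the REST API version.
curl -X GET "https://<search-service-name>.search.windows.net/indexes/<index-name>/docs?search=*&api-version=<target-api-version>" \
  -H "api-key: <query-or-admin-key>"
```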
+
+If you're using an Azure SDK, those packages target specific versions of the REST API. Package updates might coincide with a REST API update, but each SDK is on its own release schedule that ships independently of Azure AI Search REST API versions. Check the change log of your SDK package to determine whether a package release targets the latest REST API version.
+ ## Breaking change for client code that reads connection information Effective March 29, 2024 and applicable to all [supported REST APIs](/rest/api/searchservice/search-service-api-versions):
search Search Get Started Portal Import Vectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-portal-import-vectors.md
- build-2024 Previously updated : 06/17/2024 Last updated : 07/19/2024 # Quickstart: Vectorize text and images by using the Azure portal
Last updated 06/17/2024
> [!IMPORTANT] > The **Import and vectorize data** wizard is in public preview under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). By default, it targets the [2024-05-01-Preview REST API](/rest/api/searchservice/skillsets/create-or-update?view=rest-searchservice-2024-05-01-preview&preserve-view=true).
-This quickstart helps you get started with [integrated vectorization (preview)](vector-search-integrated-vectorization.md) by using the **Import and vectorize data** wizard in the Azure portal. This wizard calls a user-specified embedding model to vectorize content during indexing and for queries.
+This quickstart helps you get started with [integrated vectorization (preview)](vector-search-integrated-vectorization.md) by using the **Import and vectorize data** wizard in the Azure portal. This wizard chunks your content and calls a user-specified embedding model to vectorize content during indexing and for queries.
-## Preview limitations
+Key points about the wizard:
-+ Source data is either Azure Blob Storage or OneLake files and shortcuts, using the default parsing mode (one search document per blob or file).
-+ The index schema is nonconfigurable. Source fields include `content` (chunked and vectorized), `metadata_storage_name` for the title, and `metadata_storage_path` for the document key. This key is represented as `parent_id` in the index.
++ Source data is either Azure Blob Storage or OneLake files and shortcuts.++ Document parsing mode is the default (one search document per blob or file).++ Index schema is nonconfigurable. It provides vector and nonvector fields for chunked data. + Chunking is nonconfigurable. The effective settings are: ```json
This quickstart helps you get started with [integrated vectorization (preview)](
pageOverlapLength: 500 ```
-For fewer limitations or more data source options, try a code-base approach. For more information, see the [integrated vectorization sample](https://github.com/Azure/azure-search-vector-samples/blob/main/demo-python/code/integrated-vectorization/azure-search-integrated-vectorization-sample.ipynb).
- ## Prerequisites + An Azure subscription. [Create one for free](https://azure.microsoft.com/free/).
-+ For data, either [Azure Blob Storage](/azure/storage/common/storage-account-overview) or a [OneLake lakehouse](search-how-to-index-onelake-files.md).
-
- Azure Storage must be a standard performance (general-purpose v2) account. Access tiers can be hot, cool, and cold.
-
- Don't use Azure Data Lake Storage Gen2 (a storage account with a hierarchical namespace). This version of the wizard doesn't support Data Lake Storage Gen2.
++ [Azure AI Search service](search-create-service-portal.md) in the same region as Azure AI. We recommend the Basic tier or higher.
-+ For vectorization, an [Azure AI services multiservice account](/azure/ai-services/multi-service-resource) or [Azure OpenAI Service](https://aka.ms/oai/access) endpoint with deployments.
++ [Azure Blob Storage](/azure/storage/common/storage-account-overview) or a [OneLake lakehouse](search-how-to-index-onelake-files.md).
- For [multimodal with Azure AI Vision](/azure/ai-services/computer-vision/how-to/image-retrieval), create an Azure AI service in SwedenCentral, EastUS, NorthEurope, WestEurope, WestUS, SoutheastAsia, KoreaCentral, FranceCentral, AustraliaEast, WestUS2, SwitzerlandNorth, or JapanEast. [Check the documentation](/azure/ai-services/computer-vision/how-to/image-retrieval?tabs=csharp) for an updated list.
+ Azure Storage must be a standard performance (general-purpose v2) account. Access tiers can be hot, cool, and cold. Don't use Azure Data Lake Storage Gen2 (a storage account with a hierarchical namespace). This version of the wizard doesn't support Data Lake Storage Gen2.
- You can also use an [Azure AI Studio model catalog](/azure/ai-studio/what-is-ai-studio) (and hub and project) with model deployments.
++ An embedding model on a supported platform. [Deployment instructions](#set-up-embedding-models) are provided in this article.
-+ For indexing and queries, Azure AI Search. It must be in the same region as your Azure AI service. We recommend the Basic tier or higher.
+ | Provider | Supported models |
+ |||
+ | [Azure OpenAI Service](https://aka.ms/oai/access) | text-embedding-ada-002, text-embedding-3-large, or text-embedding-3-small. |
+ | [Azure AI Studio model catalog](/azure/ai-studio/what-is-ai-studio) | Azure, Cohere, and Facebook embedding models. |
+ | [Azure AI services multiservice account](/azure/ai-services/multi-service-resource) | [Azure AI Vision multimodal](/azure/ai-services/computer-vision/how-to/image-retrieval) for image and text vectorization. Azure AI Vision multimodal is available in selected regions: East US, West US, West US2, North Europe, West Europe, France Central, Sweden Central, Switzerland North, Southeast Asia, Korea Central, Australia East, or Japan East. [Check the documentation](/azure/ai-services/computer-vision/how-to/image-retrieval?tabs=csharp) for an updated list. |
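If you plan to use an Azure OpenAI embedding deployment, you can optionally confirm that it responds before running the wizard. Here's a minimal sketch using the `openai` Python package; the endpoint, key, API version, and deployment name are all placeholders.

```python
from openai import AzureOpenAI

# Placeholders: substitute your Azure OpenAI resource details and embedding deployment name.
client = AzureOpenAI(
    azure_endpoint="https://<your-openai-resource>.openai.azure.com",
    api_key="<your-api-key>",
    api_version="<api-version>",
)

response = client.embeddings.create(
    model="<your-embedding-deployment>",  # for example, a text-embedding-3-small deployment
    input="A short test sentence.",
)
print(len(response.data[0].embedding))  # dimensionality of the returned vector
```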
-+ Role assignments or API keys for connections to embedding models and data sources. This article provides instructions for role-based access control (RBAC).
+### Public endpoint requirements
All of the preceding resources must have public access enabled so that the portal nodes can access them. Otherwise, the wizard fails. After the wizard runs, you can enable firewalls and private endpoints on the integration components for security. For more information, see [Secure connections in the import wizards](search-import-data-portal.md#secure-connections). If private endpoints are already present and you can't disable them, the alternative option is to run the respective end-to-end flow from a script or program on a virtual machine. The virtual machine must be on the same virtual network as the private endpoint. [Here's a Python code sample](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-python/code/integrated-vectorization) for integrated vectorization. The same [GitHub repo](https://github.com/Azure/azure-search-vector-samples/tree/main) has samples in other programming languages.
-A free search service supports RBAC on connections to Azure AI Search, but it doesn't support managed identities on outbound connections to Azure Storage or Azure AI Vision. This level of support means you must use key-based authentication on connections between a free search service and other Azure services. For connections that are more secure:
+### Role-based access control requirements
-+ Use the Basic tier or higher.
-+ [Configure a managed identity](search-howto-managed-identities-data-sources.md) and role assignments to admit requests from Azure AI Search on other Azure services.
+We recommend role assignments for search service connections to other resources.
-> [!NOTE]
-> If you can't progress through the wizard because options aren't available (for example, you can't select a data source or an embedding model), revisit the role assignments. Error messages indicate that models or deployments don't exist, when in fact the real problem is that the search service doesn't have permission to access them.
+1. On Azure AI Search, [enable roles](search-security-enable-roles.md).
-## Check for space
+1. Configure your search service to [use a managed identity](search-howto-managed-identities-data-sources.md#create-a-system-managed-identity).
-If you're starting with the free service, you're limited to three indexes, three data sources, three skillsets, and three indexers. Make sure you have room for extra items before you begin. This quickstart creates one of each object.
+1. On your data source platform and embedding model provider, create role assignments that allow the search service to access data and models. [Prepare sample data](#prepare-sample-data) provides instructions for setting up roles.
-## Check for service identity
+A free search service supports RBAC on connections to Azure AI Search, but it doesn't support managed identities on outbound connections to Azure Storage or Azure AI Vision. This level of support means you must use key-based authentication on connections between a free search service and other Azure services.
-We recommend role assignments for search service connections to other resources.
+For more secure connections:
+++ Use the Basic tier or higher.++ [Configure a managed identity](search-howto-managed-identities-data-sources.md) and use roles for authorized access.
-1. On Azure AI Search, [enable RBAC](search-security-enable-roles.md).
+> [!NOTE]
+> If you can't progress through the wizard because options aren't available (for example, you can't select a data source or an embedding model), revisit the role assignments. Error messages indicate that models or deployments don't exist, when in fact the real cause is that the search service doesn't have permission to access them.
-1. Configure your search service to [use a system-assigned or user-assigned managed identity](search-howto-managed-identities-data-sources.md#create-a-system-managed-identity).
+### Check for space
-In the following sections, you can assign the search service's managed identity to roles in other services. The sections provide steps for role assignments where applicable.
+If you're starting with the free service, you're limited to three indexes, three data sources, three skillsets, and three indexers. The Basic tier limits you to 15 of each. Make sure you have room for extra items before you begin. This quickstart creates one of each object.
-## Check for semantic ranking
+### Check for semantic ranking
The wizard supports semantic ranking, but only on the Basic tier and higher, and only if semantic ranking is already [enabled on your search service](semantic-how-to-enable-disable.md). If you're using a billable tier, check whether semantic ranking is enabled.
service-bus-messaging Automate Update Messaging Units https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/automate-update-messaging-units.md
The previous section shows you how to add a default condition for the autoscale
> [!NOTE] > - The metrics you review to make decisions on autoscaling may be 5-10 minutes old. When you are dealing with spiky workloads, we recommend that you have shorter durations for scaling up and longer durations for scaling down (> 10 minutes) to ensure that there are enough messaging units to process spiky workloads. >
- > - If you see failures due to lack of capacity (no messaging units available), raise a support ticket with us.
+ > - If you see failures due to lack of capacity (no messaging units available), raise a support ticket with us. Capacity fulfillment is subject to the constraints of the environment and is handled on a best-effort basis.
## Run history Switch to the **Run history** tab on the **Scale** page to see a chart that plots number of messaging units as observed by the autoscale engine. If the chart is empty, it means either autoscale wasn't configured or configured but disabled, or is in a cool down period.
service-bus-messaging Service Bus Performance Improvements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-performance-improvements.md
The benchmarking sample doesn't use any advanced features, so the throughput you
#### Compute considerations
-Using certain Service Bus features require compute utilization that can decrease the expected throughput. Some of these features are -
+Service Bus operates several background processes that can affect compute utilization. These include, but are not limited to, timers, schedules, and metrics emission. Additionally, using certain Service Bus features requires compute utilization that can decrease the expected throughput. Some of these features are:
1. Sessions. 2. Fanning out to multiple subscriptions on a single topic.
Using certain Service Bus features require compute utilization that can decrease
7. Deduplication & look back time window. 8. Forward to (forwarding from one entity to another).
-If your application uses any of the above features and you aren't receiving the expected throughput, you can review the **CPU usage** metrics and consider scaling up your Service Bus Premium namespace.
-
-You can also utilize Azure Monitor to [automatically scale the Service Bus namespace](automate-update-messaging-units.md).
+If your application uses any of the above features and you aren't receiving the expected throughput, you can review the **CPU usage** metrics and consider scaling up your Service Bus Premium namespace. You can also utilize Azure Monitor to [automatically scale the Service Bus namespace](automate-update-messaging-units.md). We recommend increasing the number of messaging units (MUs) when CPU usage exceeds 70 percent to maintain optimal performance.
### Sharding across namespaces
Goal: Maximize the throughput of a single queue. The number of senders and recei
* To increase the overall send rate into the queue, use multiple message factories to create senders. For each sender, use asynchronous operations or multiple threads. * To increase the overall receive rate from the queue, use multiple message factories to create receivers.
-* Use asynchronous operations to take advantage of client-side batching.
-* Leave batched store access enabled. This access increases the overall rate at which messages can be written into the queue.
* Set the prefetch count to 20 times the maximum processing rates of all receivers of a factory. This count reduces the number of Service Bus client protocol transmissions. ### Multiple high-throughput queues
To obtain maximum throughput across multiple queues, use the settings outlined t
Goal: Minimize latency of a queue or topic. The number of senders and receivers is small. The throughput of the queue is small or moderate.
-* Disable client-side batching. The client immediately sends a message.
-* Disable batched store access. The service immediately writes the message to the store.
* If using a single client, set the prefetch count to 20 times the processing rate of the receiver. If multiple messages arrive at the queue at the same time, the Service Bus client protocol transmits them all at the same time. When the client receives the next message, that message is already in the local cache. The cache should be small. * If using multiple clients, set the prefetch count to 0. By setting the count, the second client can receive the second message while the first client is still processing the first message.
Service Bus enables up to 1,000 concurrent connections to a messaging entity. Th
To maximize throughput, follow these steps: * If each sender is in a different process, use only a single factory per process.
-* Use asynchronous operations to take advantage of client-side batching.
-* Leave batched store access enabled. This access increases the overall rate at which messages can be written into the queue or topic.
* Set the prefetch count to 20 times the maximum processing rates of all receivers of a factory. This count reduces the number of Service Bus client protocol transmissions. ### Queue with a large number of receivers
Service Bus enables up to 1,000 concurrent connections to an entity. If a queue
To maximize throughput, follow these guidelines: * If each receiver is in a different process, use only a single factory per process.
-* Receivers can use synchronous or asynchronous operations. Given the moderate receive rate of an individual receiver, client-side batching of a Complete request doesn't affect receiver throughput.
-* Leave batched store access enabled. This access reduces the overall load of the entity. It also increases the overall rate at which messages can be written into the queue or topic.
* Set the prefetch count to a small value (for example, PrefetchCount = 10). This count prevents receivers from being idle while other receivers have large numbers of messages cached. ### Topic with a few subscriptions
To maximize throughput, follow these guidelines:
* To increase the overall send rate into the topic, use multiple message factories to create senders. For each sender, use asynchronous operations or multiple threads. * To increase the overall receive rate from a subscription, use multiple message factories to create receivers. For each receiver, use asynchronous operations or multiple threads.
-* Use asynchronous operations to take advantage of client-side batching.
-* Leave batched store access enabled. This access increases the overall rate at which messages can be written into the topic.
* Set the prefetch count to 20 times the maximum processing rates of all receivers of a factory. This count reduces the number of Service Bus client protocol transmissions. ### Topic with a large number of subscriptions
Topics with a large number of subscriptions typically expose a low overall throu
To maximize throughput, try the following steps:
-* Use asynchronous operations to take advantage of client-side batching.
-* Leave batched store access enabled. This access increases the overall rate at which messages can be written into the topic.
* Set the prefetch count to 20 times the expected rate at which messages are received. This count reduces the number of Service Bus client protocol transmissions.
storage-mover Agent Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/agent-deploy.md
The agent is delivered with a default user account and password. Connect to the
## Bandwidth throttling
-Take time to consider the amount of bandwidth a new machine uses before you deploy it to your network. An Azure Storage Mover agent communicates with a source share using the local network, and the Azure Storage service on the wide area network (WAN) link. In both cases, the agent uses all available network bandwidth.
+Take time to consider the amount of bandwidth a new machine uses before you deploy it to your network. An Azure Storage Mover agent communicates with a source share using the local network, and the Azure Storage service on the wide area network (WAN) link. In both cases, the agent is designed to make full use of the network's bandwidth by default. However, you can now [set bandwidth management schedules](./bandwidth-management.md) for your Storage Mover agents.
-> [!IMPORTANT]
-> The current Azure Storage Mover agent does not support bandwidth throttling schedules.
-
-If bandwidth throttling is important to you, create a local virtual network with an internet connection and configure quality of service (QoS) settings. This approach allows you to expose the agent through the virtual network and to locally configure an unauthenticated network proxy server on the agent if needed.
+Alternatively, you can create a local virtual network with an internet connection and configure quality of service (QoS) settings. This approach allows you to expose the agent through the virtual network and to locally configure an unauthenticated network proxy server on the agent if needed.
## Decommissioning an agent
stream-analytics Data Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/data-protection.md
Previously updated : 03/13/2023 Last updated : 07/19/2024 # Data protection in Azure Stream Analytics
Azure Stream Analytics persists the following metadata and data in order to run:
## In-Region Data Residency
-Azure Stream Analytics stores customer data and other metadata described above. Customer data is stored by Azure Stream Analytics in a single region by default, so this service automatically satisfies in region data residency requirements including those specified in the [Trust Center](https://azuredatacentermap.azurewebsites.net/).
+Azure Stream Analytics stores customer data and other metadata described earlier. Azure Stream Analytics stores customer data in a single region by default, so this service automatically satisfies in-region data residency requirements, including the ones specified in the [Trust Center](https://azuredatacentermap.azurewebsites.net/).
Additionally, you can choose to store all data assets (customer data and other metadata) related to your stream analytics job in a single region by encrypting them in a storage account of your choice. ## Encrypt your data
-Stream Analytics automatically employs best-in-class encryption standards across its infrastructure to encrypt and secure your data. You can simply trust Stream Analytics to securely store all your data so that you don't have to worry about managing the infrastructure.
+Stream Analytics automatically employs best-in-class encryption standards across its infrastructure to encrypt and secure your data. You can trust Stream Analytics to securely store all your data so that you don't have to worry about managing the infrastructure.
If you want to use customer-managed keys to encrypt your data, you can use your own storage account (general purpose V1 or V2) to store any private data assets that are required by the Stream Analytics runtime. Your storage account can be encrypted as needed. None of your private data assets are stored permanently by the Stream Analytics infrastructure.
-This setting must be configured at the time of Stream Analytics job creation, and it can't be modified throughout the job's life cycle. Modification or deletion of storage that is being used by your Stream Analytics is not recommended. If you delete your storage account, you will permanently delete all private data assets, which will cause your job to fail.
+This setting must be configured at the time of Stream Analytics job creation, and it can't be modified throughout the job's life cycle. Modification or deletion of storage that is being used by your Stream Analytics isn't recommended. If you delete your storage account, you permanently delete all private data assets, and it causes your job to fail.
-Updating or rotating keys to your storage account is not possible using the Stream Analytics portal. You can update the keys using the REST APIs. You can also connect to your job storage account using managed identity authentication with allow trusted services.
+Updating or rotating keys to your storage account isn't possible using the Stream Analytics portal. You can update the keys using the REST APIs. You can also connect to your job storage account using managed identity authentication with allow trusted services.
-If the storage account you want to use is in an Azure Virtual Network, you must use managed identity authentication mode with **Allow trusted services**. For more information, visit: [Connect Stream Analytics jobs to resources in an Azure Virtual Network (VNet)](connect-job-to-vnet.md).
+If the storage account you want to use is in an Azure Virtual Network, you must use managed identity authentication mode with **Allow trusted services**. For more information, visit: [Connect Stream Analytics jobs to resources in an Azure virtual network](connect-job-to-vnet.md).
### Configure storage account for private data
Use the following steps to configure your storage account for private data asset
1. Select the check box that says *Secure all private data assets needed by this job in my Storage account*.
-1. Select a storage account from your subscription. Note that this setting cannot be modified throughout the life cycle of the job. You also cannot add this option once the job is created.
+1. Select a storage account from your subscription. This setting can't be modified throughout the life cycle of the job. You also can't add this option once the job is created.
1. To authenticate with a connection string, select **Connection string** from the Authentication mode dropdown. The storage account key is automatically populated from your subscription. ![Private data storage account settings](./media/data-protection/storage-account-create.png)
-1. To authenticate with Managed Identity, select **Managed Identity** from the Authentication mode dropdown. If you choose Managed Identity, you need to add your Stream Analytics job to the storage account's access control list with the *Storage Blob Data Contributor* role. If you do not give your job access, the job will not be able to perform any operations. For more information on how to grant access, see [Use Azure RBAC to assign a managed identity access to another resource](../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md#use-azure-rbac-to-assign-a-managed-identity-access-to-another-resource).
+1. To authenticate with Managed Identity, select **Managed Identity** from the Authentication mode dropdown. If you choose Managed Identity, you need to add your Stream Analytics job to the storage account's access control list with the *Storage Blob Data Contributor* role. If you don't give your job access, the job can't perform any operations. For more information on how to grant access, see [Assign an Azure role for access to blob data](../storage/blobs/assign-azure-role-data-access.md).
:::image type="content" source="media/data-protection/storage-account-create-msi.png" alt-text="Private data storage account settings with managed identity authentication":::
Any private data that is required to be persisted by Stream Analytics is stored
Connection details of your resources, which are used by your Stream Analytics job, are also stored. Encrypt your storage account to secure all of your data. ## Enable data residency
-You may use this feature to enforce any data residency requirements you may have by providing a storage account accordingly.
+You can use this feature to enforce any data residency requirements you have by providing a storage account accordingly.
## Next steps
update-manager Assessment Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/assessment-options.md
Update Manager provides you with the flexibility to assess the status of availab
## Periodic assessment
- Periodic assessment is an update setting on a machine that allows you to enable automatic periodic checking of updates by Update Manager. We recommend that you enable this property on your machines as it allows Update Manager to fetch latest updates for your machines every 24 hours and enables you to view the latest compliance status of your machines. You can enable this setting using update settings flow as detailed [here](manage-update-settings.md#configure-settings-on-a-single-vm) or enable it at scale by using [Policy](periodic-assessment-at-scale.md). Learn more on [Azure VM extensions](overview.md#vm-extensions).
+ Periodic assessment is an update setting on a machine that allows you to enable automatic periodic checking of updates by Update Manager. We recommend that you enable this property on your machines because it allows Update Manager to fetch the latest updates for your machines every 24 hours and enables you to view the latest compliance status of your machines. You can enable this setting by using the update settings flow as detailed [here](manage-update-settings.md#configure-settings-on-a-single-vm) or enable it at scale by using [Policy](periodic-assessment-at-scale.md). Learn more about [VM extensions](prerequisites.md#vm-extensions).
:::image type="content" source="media/updates-maintenance/periodic-assessment-inline.png" alt-text="Screenshot showing periodic assessment option." lightbox="media/updates-maintenance/periodic-assessment-expanded.png":::
update-manager Configure Wu Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/configure-wu-agent.md
The Windows update client on Windows servers can get their patches from either o
### Edit the registry
-If scheduled patching is configured on your machine using the Azure Update Manager, the Auto update on the client is disabled. To edit the registry and configure the setting, see [First party updates on Windows](support-matrix.md#first-party-updates-on-windows).
+If scheduled patching is configured on your machine using the Azure Update Manager, the Auto update on the client is disabled. To edit the registry and configure the setting, see [First party updates on Windows](support-matrix.md).
### Patching using group policy on Azure Update Manager
update-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/overview.md
Previously updated : 02/21/2024 Last updated : 07/14/2024 # About Azure Update Manager > [!Important]
-> Azure Log Analytics agent, also known as the Microsoft Monitoring Agent (MMA) will be [retired in August 2024](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). Azure Automation Update Management solution relies on this agent and may encounter issues once the agent is retired as it does not work with Azure Monitoring Agent (AMA). Therefore, if you are using the Azure Automation Update Management solution, we recommend that you move to Azure Update Manager for your software update needs. All the capabilities of Azure Automation Update management solution will be available on Azure Update Manager before the retirement date. Follow the [guidance](guidance-migration-automation-update-management-azure-update-manager.md) to move your machines and schedules from Automation Update Management to Azure Update Manager.
+> On 31 August 2024, both Azure Automation Update Management and the Log Analytics agent it uses [will be retired](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). Therefore, if you are using the Automation Update Management solution, we recommend that you move to Azure Update Manager for your software update needs. Follow the [guidance](guidance-migration-automation-update-management-azure-update-manager.md#migration-scripts) to move your machines and schedules from Automation Update Management to Azure Update Manager.
+> For more information, see the [FAQs on retirement](update-manager-faq.md#impact-of-log-analytics-agent-retirement). You can [sign up](https://developer.microsoft.com/reactor/?search=Azure+Update+Manager&page=1) for monthly live sessions on migration including Q&A sessions.
-Update Manager is a unified service to help manage and govern updates for all your machines. You can monitor Windows and Linux update compliance across your deployments in Azure, on-premises, and on other cloud platforms from a single dashboard. You can also use Update Manager to make real-time updates or schedule them within a defined maintenance window.
-You can use Update Manager in Azure to:
--- Oversee update compliance for your entire fleet of machines in Azure, on-premises, and in other cloud environments.-- Instantly deploy critical updates to help secure your machines.-- Use flexible patching options such as [automatic virtual machine (VM) guest patching](../virtual-machines/automatic-vm-guest-patching.md) in Azure, [hotpatching](../automanage/automanage-hotpatch.md), and customer-defined maintenance schedules.
+Update Manager is a unified service to help manage and govern updates for all your machines. You can monitor Windows and Linux update compliance across your machines in Azure and on-premises/on other cloud platforms (connected by [Azure Arc](https://learn.microsoft.com/azure/azure-arc/)) from a single pane of management. You can also use Update Manager to make real-time updates or schedule them within a defined maintenance window.
-We also offer other capabilities to help you manage updates for your Azure VMs that you should consider as part of your overall update management strategy. To learn more about the options that are available, see the Azure VM [update options](../virtual-machines/updates-maintenance-overview.md).
+You can use Update Manager in Azure to:
-Before you enable your machines for Update Manager, make sure that you understand the information in the following sections.
+- Instantly check for updates or [deploy security or critical updates](https://aka.ms/on-demand-patching) to help secure your machines.
+- Enable [periodic assessment](https://aka.ms/umc-periodic-assessment-policy) to check for updates every 24 hours.
+- Use flexible patching options such as:
+ - [Customer-defined maintenance schedules](https://aka.ms/umc-scheduled-patching) for both Azure and Arc-connected machines.
+ - [Automatic virtual machine (VM) guest patching](../virtual-machines/automatic-vm-guest-patching.md) and [hot patching](https://learn.microsoft.com/azure/automanage/automanage-hotpatch) for Azure VMs.
+- Build custom reporting dashboards for reporting update status and [configure alerts](https://aka.ms/aum-alerts) on certain conditions.
+- Oversee update compliance for your entire fleet of machines in Azure and on-premises/in other cloud environments connected by [Azure Arc](https://learn.microsoft.com/azure/azure-arc/) through a single pane. The different types of machines that can be managed are:
+ - [Hybrid machines](https://learn.microsoft.com/azure/azure-arc/servers/)
+ - [VMWare machines](https://learn.microsoft.com/azure/azure-arc/vmware-vsphere/)
+ - [SCVMM machines](https://learn.microsoft.com/azure/azure-arc/system-center-virtual-machine-manager/)
+ - [Azure Stack HCI VMs](https://learn.microsoft.com/azure-stack/hci/)
## Key benefits
-Update Manager has been redesigned and doesn't depend on Azure Automation or Azure Monitor Logs, as required by the [Azure Automation Update Management feature](../automation/update-management/overview.md). Update Manager offers many new features and provides enhanced functionality over the original version available with Azure Automation. Some of those benefits are listed here:
+Update Manager offers many new features and provides enhanced and native functionalities. Following are some of the benefits:
- Provides native experience with zero on-boarding.
- - Built as native functionality on Azure compute and the Azure Arc for Servers platform for ease of use.
- - No dependency on Log Analytics and Azure Automation.
- - Azure Policy support.
- - Global availability in all Azure compute and Azure Arc regions.
-- Works with Azure roles and identity.
- - Granular access control at the per-resource level instead of access control at the level of the Azure Automation account and Log Analytics workspace.
- - Update Manager now has Azure Resource Manager-based operations. It allows role-based access control and roles based on Azure Resource Manager in Azure.
-- Offers enhanced flexibility.
- - Ability to take immediate action either by installing updates immediately or scheduling them for a later date.
- - Check updates automatically or on demand.
- - Helps secure machines with new ways of patching, such as [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md) in Azure, [hot patching](../automanage/automanage-hotpatch.md), or custom maintenance schedules.
- - Sync patch cycles in relation to "patch Tuesday," the unofficial term for Microsoft's scheduled security fix release on every second Tuesday of each month.
-
-The following diagram illustrates how Update Manager assesses and applies updates to all Azure machines and Azure Arc-enabled servers for both Windows and Linux.
-
-![Diagram that shows the Update Manager workflow.](./media/overview/update-management-center-overview.png)
-
-To support management of your Azure VM or non-Azure machine, Update Manager relies on a new [Azure extension](../virtual-machines/extensions/overview.md) designed to provide all the functionality required to interact with the operating system to manage the assessment and application of updates. This extension is automatically installed when you initiate any Update Manager operations, such as **Check for updates**, **Install one-time update**, and **Periodic Assessment** on your machine. The extension supports deployment to Azure VMs or Azure Arc-enabled servers by using the extension framework. The Update Manager extension is installed and managed by using:
--- [Azure VM Windows agent](../virtual-machines/extensions/agent-windows.md) or the [Azure VM Linux agent](../virtual-machines/extensions/agent-linux.md) for Azure VMs.-- [Azure Arc-enabled servers agent](../azure-arc/servers/agent-overview.md) for non-Azure Linux and Windows machines or physical servers.-
- Update Manager manages the extension agent installation and configuration. Manual intervention isn't required as long as the Azure VM agent or Azure Arc-enabled server agent is functional. The Update Manager extension runs code locally on the machine to interact with the operating system, and it includes:
--- Retrieving the assessment information about status of system updates for it specified by the Windows Update client or Linux package manager.-- Initiating the download and installation of approved updates with the Windows Update client or Linux package manager.-
-All assessment information and update installation results are reported to Update Manager from the extension and is available for analysis with [Azure Resource Graph](../governance/resource-graph/overview.md). You can view up to the last seven days of assessment data, and up to the last 30 days of update installation results.
-
-The machines assigned to Update Manager report how up to date they are based on what source they're configured to synchronize with. You can configure [Windows Update Agent (WUA)](/windows/win32/wua_sdk/updating-the-windows-update-agent) on Windows machines to report to [Windows Server Update Services](/windows-server/administration/windows-server-update-services/get-started/windows-server-update-services-wsus) or Microsoft Update, which is by default. You can configure Linux machines to report to a local or public YUM or APT package repository. If the Windows Update Agent is configured to report to WSUS, depending on when WSUS last synchronized with Microsoft Update, the results in Update Manager might differ from what Microsoft Update shows. This behavior is the same for Linux machines that are configured to report to a local repository instead of a public package repository.
-
-> [!NOTE]
-> WSUS isn't available in Azure China operated by 21 Vianet.
-
-You can manage your Azure VMs or Azure Arc-enabled servers directly or at scale with Update Manager.
-
-## Prerequisites
-
-Along with the following prerequisites, see [Support matrix](support-matrix.md) for Update Manager.
-
-### Role
-
-Resource | Role
- |
-|Azure VM | [Azure Virtual Machine Contributor](../role-based-access-control/built-in-roles.md#virtual-machine-contributor) or Azure [Owner](../role-based-access-control/built-in-roles.md#owner)
-Azure Arc-enabled server | [Azure Connected Machine Resource Administrator](../azure-arc/servers/security-identity-authorization.md#identity-and-access-control)
-
-### Permissions
-
-You need the following permissions to create and manage update deployments. The table shows the permissions that are needed when you use Update Manager.
-
-Actions |Permission |Scope |
- | | |
-|Read Azure VM properties | Microsoft.Compute/virtualMachines/read ||
-|Update assessment on Azure VMs |Microsoft.Compute/virtualMachines/assessPatches/action ||
-|Read assessment data for Azure VMs | Microsoft.Compute/virtualMachines/patchAssessmentResults/latest </br> Microsoft.Compute/virtualMachines/patchAssessmentResults/latest/softwarePatches ||
-|Install update on Azure VMs |Microsoft.Compute/virtualMachines/installPatches/action ||
-|Read patch installation data for Azure VMs | Microsoft.Compute/virtualMachines/patchInstallationResults </br> Microsoft.Compute/virtualMachines/patchInstallationResults/softwarePatches ||
-|Read Azure Arc-enabled server properties | Microsoft.HybridCompute/machines/read||
-|Update assessment on Azure Arc-enabled server |Microsoft.HybridCompute/machines/assessPatches/action ||
-|Read assessment data for Azure Arc-enabled server | Microsoft.HybridCompute/machines/patchAssessmentResults </br> Microsoft.HybridCompute/machines/patchAssessmentResults/softwarePatches ||
-|Install update on Azure Arc-enabled server |Microsoft.HybridCompute/machines/installPatches/action ||
-|Read patch installation data for Azure Arc-enabled server | Microsoft.HybridCompute/machines/patchInstallationResults </br> Microsoft.HybridCompute/machines/patchInstallationResults/softwarePatches||
-|Register the subscription for the Microsoft.Maintenance resource provider| Microsoft.Maintenance/register/action | Subscription|
-|Create/modify maintenance configuration |Microsoft.Maintenance/maintenanceConfigurations/write |Subscription/resource group |
-|Create/modify configuration assignments |Microsoft.Maintenance/configurationAssignments/write |Subscription |
-|Read permission for Maintenance updates resource |Microsoft.Maintenance/updates/read |Machine |
-|Read permission for Maintenance apply updates resource |Microsoft.Maintenance/applyUpdates/read |Machine |
--
-### VM images
-
-For more information, see the [list of supported operating systems and VM images](support-matrix.md#supported-operating-systems).
-
- Azure Update Manager supports [specialized images](../virtual-machines/linux/imaging.md#specialized-images) including the VMs created by Azure Migrate, Azure Backup, and Azure Site Recovery.
-
-## VM extensions
-
-Azure VM extensions and Azure Arc-enabled VM extensions are available.
-
-#### [Azure VM extensions](#tab/azure-vms)
-
-| Operating system| Extension
-|-|-|
-|Windows | Microsoft.CPlat.Core.WindowsPatchExtension|
-|Linux | Microsoft.CPlat.Core.LinuxPatchExtension |
-
-#### [Azure Arc-enabled VM extensions](#tab/azure-arc-vms)
-
-| Operating system| Extension
-|-|-|
-|Windows | Microsoft.CPlat.Core.WindowsPatchExtension (Periodic assessment) <br> Microsoft.SoftwareUpdateManagement.WindowsOsUpdateExtension (On-demand operations and Schedule patching) |
-|Linux | Microsoft.SoftwareUpdateManagement.LinuxOsUpdateExtension (On-demand operations and Schedule patching) <br> Microsoft.CPlat.Core.LinuxPatchExtension (Periodic assessment) |
-
-To view the available extensions for a VM in the Azure portal:
-
-1. Go to the [Azure portal](https://portal.azure.com) and select a VM.
-1. On the VM home page, under **Settings**, select **Extensions + applications**.
-1. On the **Extensions** tab, you can view the available extensions.
--
-### Network planning
-
-To prepare your network to support Update Manager, you might need to configure some infrastructure components.
-
-For Windows machines, you must allow traffic to any endpoints required by the Windows Update agent. You can find an updated list of required endpoints in [Issues related to HTTP/Proxy](/windows/deployment/update/windows-update-troubleshooting#issues-related-to-httpproxy). If you have a local [WSUS](/windows-server/administration/windows-server-update-services/plan/plan-your-wsus-deployment) deployment, you must also allow traffic to the server specified in your [WSUS key](/windows/deployment/update/waas-wu-settings#configuring-automatic-updates-by-editing-the-registry).
-
-For Red Hat Linux machines, see [IPs for the RHUI content delivery servers](../virtual-machines/workloads/redhat/redhat-rhui.md#the-ips-for-the-rhui-content-delivery-servers) for required endpoints. For other Linux distributions, see your provider documentation.
+ - Built as native functionality on Azure virtual machines and Azure Arc for Servers platforms for ease of use.
+ - No dependency on Log Analytics and Azure Automation.
+ - Azure [Policy support](https://aka.ms/aum-policy-support).
+ - Availability in most [Azure virtual machines and Azure Arc regions](https://aka.ms/aum-supported-regions).
+- Works with Azure roles and identity.
+ - Granular access control at the per-resource level instead of access control at the level of the Azure Automation account and Log Analytics workspace.
+ - Update Manager has Azure Resource Manager-based operations. It allows [role-based access control](../role-based-access-control/overview.md) and roles based on Azure Resource Manager in Azure.
+ - Offers enhanced flexibility
+ - Take immediate action either by [installing updates immediately](https://aka.ms/on-demand-patching) or [scheduling them for a later date](https://aka.ms/umc-scheduled-patching).
+ - [Check updates automatically](https://aka.ms/aum-policy-support) or [on demand](https://aka.ms/on-demand-assessment).
+ - Secure machines with new ways of patching such as [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md) in Azure, [hot patching](https://learn.microsoft.com/azure/automanage/automanage-hotpatch) or [custom maintenance schedules](https://aka.ms/umc-scheduled-patching).
+ - Sync patch cycles in relation to **Patch Tuesday**, the unofficial term for Microsoft's scheduled security fix release on the second Tuesday of each month.
+- Reporting and alerting
+ - Build custom reporting dashboards through [Azure Workbooks](manage-workbooks.md) to monitor the update compliance of your infrastructure.
+ - [Configure alerts](https://aka.ms/aum-alerts) on updates/compliance to be notified or to automate action whenever something requires your attention.
+
## Next steps--- [View updates for a single machine](view-updates.md)-- [Deploy updates now (on-demand) for a single machine](deploy-updates.md)
+- [How Update Manager works](workflow-update-manager.md)
+- [Prerequisites of Update Manager](prerequisites.md)
+- [View updates for a single machine](view-updates.md).
+- [Deploy updates now (on-demand) for a single machine](deploy-updates.md).
+- [Enable periodic assessment at scale using policy](https://aka.ms/aum-policy-support).
- [Schedule recurring updates](scheduled-patching.md)-- [Manage update settings via the portal](manage-update-settings.md)-- [Manage multiple machines by using Update Manager](manage-multiple-machines.md)
+- [Manage update settings via the portal](manage-update-settings.md).
+- [Manage multiple machines by using Update Manager](manage-multiple-machines.md).
update-manager Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/prerequisites.md
+
+ Title: Prerequisites for Azure Update Manager
+description: This article explains the prerequisites for Azure Update Manager, VM extensions and network planning.
++++ Last updated : 07/14/2024+++
+# Prerequisites for Azure Update Manager
+
+This article summarizes the prerequisites, the extensions for Azure VM extensions and Azure Arc-enabled servers and details on how to prepare your network to support Update Manager.
+
+## Prerequisites
+
+Azure Update Manager is an out-of-the-box, zero-onboarding service. Before you start using this service, review the following prerequisites:
+
+### Arc-enabled servers
+Arc-enabled servers must be connected to Azure Arc to use Azure Update Manager. For more information, see [how to enable Arc on non-Azure machines](https://aka.ms/onboard-to-arc-aum-migration).
+
+### Support matrix
+Refer to the [support matrix](support-matrix.md) to learn about the updates, update sources, VM images, and Azure regions that Azure Update Manager supports.
+
+### Roles and permissions
+
+To manage machines from Azure Update Manager, see [roles and permissions](roles-permissions.md).
+
+### VM extensions
+
+Azure Update Manager requires the Azure VM extension or the Azure Arc-enabled VM extension to run on Azure machines and Arc-enabled machines, respectively. You don't need to install the extensions separately; they're pushed to the machine automatically the first time you trigger any Update Manager operation on it. For more information, see the [VM extensions](workflow-update-manager.md#update-manager-vm-extensions) that are pushed to the machines.
+
+### Network planning
+
+To prepare your network to support Update Manager, you might need to configure some infrastructure components. For more information, see the [network requirements for Arc-enabled servers](../azure-arc/servers/network-requirements.md).
+
+For Windows machines, you must allow traffic to any endpoints required by the Windows Update agent. You can find an updated list of required endpoints in [issues related to HTTP Proxy](https://learn.microsoft.com/troubleshoot/windows-client/installing-updates-features-roles/windows-update-issues-troubleshooting?toc=%2Fwindows%2Fdeployment%2Ftoc.json&bc=%2Fwindows%2Fdeployment%2Fbreadcrumb%2Ftoc.json#issues-related-to-httpproxy). If you have a local [WSUS](https://learn.microsoft.com/windows-server/administration/windows-server-update-services/plan/plan-your-wsus-deployment) deployment, you must allow traffic to the server specified in your [WSUS key](https://learn.microsoft.com/windows/deployment/update/waas-wu-settings#configuring-automatic-updates-by-editing-the-registry).
+
+For Red Hat Linux machines, see [IPs for the RHUI content delivery servers](../virtual-machines/workloads/redhat/redhat-rhui.md#the-ips-for-the-rhui-content-delivery-servers) for required endpoints. For other Linux distributions, see your provider documentation.
+
+### Configure Windows Update client
+
+Azure Update Manager relies on the [Windows Update client](https://learn.microsoft.com/windows/deployment/update/windows-update-overview) to download and install Windows updates. There are specific settings that are used by the Windows Update client when connecting to Windows Server Update Services (WSUS) or Windows Update. For more information, see [configure Windows Update client](configure-wu-agent.md).
+
+## Next steps
+
+- [View updates for a single machine](view-updates.md).
+- [Deploy updates now (on-demand) for a single machine](deploy-updates.md).
+- [Enable periodic assessment at scale using policy](https://aka.ms/aum-policy-support).
+- [Schedule recurring updates](scheduled-patching.md)
+- [Manage update settings via the portal](manage-update-settings.md).
+- [Manage multiple machines by using Update Manager](manage-multiple-machines.md).
update-manager Roles Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/roles-permissions.md
+
+ Title: Roles and permissions to manage Azure VM or Arc-enabled server in Azure Update Manager
+description: This article explains the roles and permissions required to manage Azure VMs or Arc-enabled servers in Azure Update Manager.
+++ Last updated : 07/19/2024++
+
+# Roles and permissions in Azure Update Manager
+
+To manage an Azure VM or an Azure Arc-enabled server using Azure Update Manager, you must have the appropriate roles assigned. You can either use predefined roles or create custom roles with the specific permissions you need. For more information, see the [permissions](#permissions).
+
+## Roles
+
+The built-in roles provide blanket permissions on a virtual machine, which include all Azure Update Manager permissions.
+
+| **Resource** | **Role** |
+|||
+| **Azure VM** | Azure Virtual Machine Contributor or Azure [Owner](../role-based-access-control/built-in-roles.md)|
+| **Azure Arc-enabled server** | [Azure Connected Machine Resource Administrator](../azure-arc/servers/security-overview.md)|
+
+## Permissions
+
+You need the following permissions to manage update operations. The following tables show the permissions that Update Manager needs. You can also create a custom role and assign only the permissions required for specific actions; a sketch of such a role follows the first table.
+
+### Read permissions for Update Manager to view Update Manager data
+
+| **Actions** | **Permission** | **Scope** |
+||||
+| **Read Azure VM properties** | Microsoft.Compute/virtualMachines/read | |
+| **Read assessment data for Azure VMs** | Microsoft.Compute/virtualMachines/patchAssessmentResults/read<br>Microsoft.Compute/virtualMachines/patchAssessmentResults/softwarePatches/read | |
+| **Read patch installation data for Azure VMs** | Microsoft.Compute/virtualMachines/patchInstallationResults/read<br>Microsoft.Compute/virtualMachines/patchInstallationResults/softwarePatches/read | |
+| **Read Azure Arc-enabled server properties** | Microsoft.HybridCompute/machines/read | |
+| **Read assessment data for Azure Arc-enabled server** | Microsoft.HybridCompute/machines/patchAssessmentResults/read<br>Microsoft.HybridCompute/machines/patchAssessmentResults/softwarePatches/read | |
+| **Read patch installation data for Azure Arc-enabled server** | Microsoft.HybridCompute/machines/patchInstallationResults/read<br>Microsoft.HybridCompute/machines/patchInstallationResults/softwarePatches/read | |
+| **Get the status of an asynchronous operation for an Azure virtual machine** | Microsoft.Compute/locations/operations/read | Machine subscription |
+| **Read the status of an update center operation on Arc machines** | Microsoft.HybridCompute/locations/updateCenterOperationResults/read | Machine subscription |
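As a sketch of the custom role approach mentioned earlier, the following role definition is limited to the read permissions in the preceding table. The role name, description, and assignable scope are placeholders.

```json
{
  "Name": "Update Manager Reader (example)",
  "IsCustom": true,
  "Description": "Example custom role that can only read Update Manager assessment and installation data.",
  "Actions": [
    "Microsoft.Compute/virtualMachines/read",
    "Microsoft.Compute/virtualMachines/patchAssessmentResults/read",
    "Microsoft.Compute/virtualMachines/patchAssessmentResults/softwarePatches/read",
    "Microsoft.Compute/virtualMachines/patchInstallationResults/read",
    "Microsoft.Compute/virtualMachines/patchInstallationResults/softwarePatches/read",
    "Microsoft.Compute/locations/operations/read",
    "Microsoft.HybridCompute/machines/read",
    "Microsoft.HybridCompute/machines/patchAssessmentResults/read",
    "Microsoft.HybridCompute/machines/patchAssessmentResults/softwarePatches/read",
    "Microsoft.HybridCompute/machines/patchInstallationResults/read",
    "Microsoft.HybridCompute/machines/patchInstallationResults/softwarePatches/read",
    "Microsoft.HybridCompute/locations/updateCenterOperationResults/read"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/<subscription-id>"
  ]
}
```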
+
+### Permissions to perform on-demand actions in Azure Update Manager
+
+The following permissions are required in addition to the read permissions documented above, on the individual machines on which on-demand operations are performed.
+
+| **Actions** | **Permission** | **Scope** |
+||||
+| **Trigger assessment on Azure VMs** | Microsoft.Compute/virtualMachines/assessPatches/action | |
+| **Install update on Azure VMs** | Microsoft.Compute/virtualMachines/installPatches/action | |
+| **Get the status of an asynchronous operation for an Azure virtual machine** | Microsoft.Compute/locations/operations/read | Machine subscription |
+| **Trigger assessment on Azure Arc-enabled server** | Microsoft.HybridCompute/machines/assessPatches/action | |
+| **Install update on Azure Arc-enabled server** | Microsoft.HybridCompute/machines/installPatches/action | |
+| **Read the status of an update center operation on Arc machines** | Microsoft.HybridCompute/locations/updateCenterOperationResults/read | Machine subscription |
+| **Update patch mode / assessment mode for Azure virtual machines** | Microsoft.Compute/virtualMachines/write | Machine |
+| **Update assessment mode for Arc machines** | Microsoft.HybridCompute/machines/write | Machine |
+
+## Scheduled patching (Maintenance configuration) related permissions
+
+The following permissions are required in addition to the permissions on the individual machines that are managed by the schedules.
+
+| **Actions** | **Permission** | **Scope** |
+||||
+| **Register the subscription for the Microsoft.Maintenance resource provider** | Microsoft.Maintenance/register/action | Subscription |
+| **Create/modify maintenance configuration** | Microsoft.Maintenance/maintenanceConfigurations/write | Subscription/resource group |
+| **Create/modify configuration assignments** | Microsoft.Maintenance/configurationAssignments/write | Subscription/Resource group / machine |
+| **Read permission for Maintenance updates resource** | Microsoft.Maintenance/updates/read | Machine |
+| **Read permission for Maintenance apply updates resource** | Microsoft.Maintenance/applyUpdates/read | Machine |
+| **Get list of update deployment** | Microsoft.Resources/deployments/read | Maintenance configuration and virtual machine subscription |
+| **Create or update an update deployment** | Microsoft.Resources/deployments/write | Maintenance configuration and virtual machine subscription |
+| **Get a list of update deployment operation statuses** | Microsoft.Resources/deployments/operation statuses | Maintenance configuration and virtual machine subscription |
+
+## Next steps
+- [Prerequisites of Update Manager](prerequisites.md).
+- [How Update Manager works](workflow-update-manager.md).
update-manager Scheduled Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/scheduled-patching.md
Update Manager uses a maintenance control schedule instead of creating its own s
## Prerequisites for scheduled patching
-1. See [Prerequisites for Update Manager](./overview.md#prerequisites).
+1. See [Prerequisites for Update Manager](prerequisites.md).
1. Patch orchestration of the Azure machines should be set to **Customer Managed Schedules**. For more information, see [Enable schedule patching on existing VMs](prerequsite-for-schedule-patching.md#enable-schedule-patching-on-azure-vms). For Azure Arc-enabled machines, it isn't a requirement. > [!NOTE]
- > If you set the patch mode to **Azure orchestrated** (`AutomaticByPlatform`) but do not enable the **BypassPlatformSafetyChecksOnUserSchedule** flag and do not attach a maintenance configuration to an Azure machine, it's treated as an [automatic guest patching](../virtual-machines/automatic-vm-guest-patching.md)-enabled machine. The Azure platform automatically installs updates according to its own schedule. [Learn more](./overview.md#prerequisites).
+ > If you set the patch mode to **Azure orchestrated** (`AutomaticByPlatform`) but do not enable the **BypassPlatformSafetyChecksOnUserSchedule** flag and do not attach a maintenance configuration to an Azure machine, it's treated as an [automatic guest patching](../virtual-machines/automatic-vm-guest-patching.md)-enabled machine. The Azure platform automatically installs updates according to its own schedule. [Learn more](prerequisites.md).
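
To check how an Azure VM is currently configured, you can inspect its patch settings with Azure PowerShell. This is a minimal sketch with placeholder names; it assumes a Windows VM whose returned object exposes `OSProfile.WindowsConfiguration.PatchSettings`.

```azurepowershell-interactive
# Show the patch orchestration settings currently applied to a Windows Azure VM
$vm = Get-AzVM -ResourceGroupName "rg-updates" -Name "vm-prod-01"
$vm.OSProfile.WindowsConfiguration.PatchSettings
# PatchMode, AssessmentMode, and (where present) AutomaticByPlatformSettings
# reflect how update orchestration is configured for the VM.
```
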
## Schedule patching in an availability set
update-manager Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/support-matrix.md
# Support matrix for Azure Update Manager

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and plan accordingly.
+> This article references CentOS, a Linux distribution that is at End Of Life (EOL) status. Azure Update Manager will soon cease to support it. Please consider your use and plan accordingly. For more information, see the [CentOS End-Of-Life guidance](../virtual-machines/workloads/centos/centos-end-of-life.md).
This article details the supported Windows and Linux operating systems and the system requirements for machines or servers managed by Azure Update Manager. The article includes the supported regions and the specific versions of the Windows Server and Linux operating systems running on Azure virtual machines (VMs) or machines managed by Azure Arc-enabled servers.
-## Supported update sources
-
-**Windows**: [Windows Update Agent (WUA)](/windows/win32/wua_sdk/updating-the-windows-update-agent) reports to Microsoft Update by default, but you can configure it to report to [Windows Server Update Services (WSUS)](/windows-server/administration/windows-server-update-services/get-started/windows-server-update-services-wsus). If you configure WUA to report to WSUS, based on the last synchronization from WSUS with Microsoft Update, the results in Update Manager might differ from what Microsoft Update shows.
-
-To specify sources for scanning and downloading updates, see [Specify intranet Microsoft Update service location](/windows/deployment/update/waas-wu-settings?branch=main#specify-intranet-microsoft-update-service-location). To restrict machines to the internal update service, see [Don't connect to any Windows Update internet locations](/windows-server/administration/windows-server-update-services/deploy/4-configure-group-policy-settings-for-automatic-updates?branch=main#do-not-connect-to-any-windows-update-internet-locations).
-
-**Linux**: You can configure Linux machines to report to a local or public YUM or APT package repository. The results shown in Update Manager depend on where the machines are configured to report.
-
-## Supported update types
-
-The following types of updates are supported.
-
-### Operating system updates
-
-Update Manager supports operating system updates for both Windows and Linux.
-
-Update Manager doesn't support driver updates.
-
-### Extended Security Updates (ESU) for Windows Server
-
-Using Azure Update Manager, you can deploy Extended Security Updates for your Azure Arc-enabled Windows Server 2012 / R2 machines. To enroll in Windows Server 2012 Extended Security Updates, follow the guidance on [How to get Extended Security Updates (ESU) for Windows Server 2012 and 2012 R2.](/windows-server/get-started/extended-security-updates-deploy#extended-security-updates-enabled-by-azure-arc)
-
-### First-party updates on Windows
-
-By default, the Windows Update client is configured to provide updates only for the Windows operating system. If you enable the **Give me updates for other Microsoft products when I update Windows** setting, you also receive updates for other Microsoft products. Updates include security patches for Microsoft SQL Server and other Microsoft software.
-
-Use one of the following options to perform the settings change at scale:
--- For servers configured to patch on a schedule from Update Manager (with virtual machine `PatchSettings` set to `AutomaticByPlatform = Azure-Orchestrated`), and for all Windows Servers running on an earlier operating system than Windows Server 2016, run the following PowerShell script on the server you want to change:-
- ```powershell
- $ServiceManager = (New-Object -com "Microsoft.Update.ServiceManager")
- $ServiceManager.Services
- $ServiceID = "7971f918-a847-4430-9279-4a52d1efe18d"
- $ServiceManager.AddService2($ServiceId,7,"")
- ```
--- For servers running Windows Server 2016 or later that aren't using Update Manager scheduled patching (with virtual machine `PatchSettings` set to `AutomaticByOS = Azure-Orchestrated`), you can use Group Policy to control this process by downloading and using the latest Group Policy [Administrative template files](/troubleshoot/windows-client/group-policy/create-and-manage-central-store).-
-> [!NOTE]
-> Run the following PowerShell script on the server to disable first-party updates:
->
-> ```powershell
-> $ServiceManager = (New-Object -com "Microsoft.Update.ServiceManager")
-> $ServiceManager.Services
-> $ServiceID = "7971f918-a847-4430-9279-4a52d1efe18d"
-> $ServiceManager.RemoveService($ServiceId)
-> ```
-
-### Third party updates
-
-**Windows**: Update Manager relies on the locally configured update repository to update supported Windows systems, either WSUS or Windows Update. Tools such as [System Center Updates Publisher](/mem/configmgr/sum/tools/updates-publisher) allow you to import and publish custom updates with WSUS. This scenario allows Update Manager to update machines that use Configuration Manager as their update repository with third party software. To learn how to configure Updates Publisher, see [Install Updates Publisher](/mem/configmgr/sum/tools/install-updates-publisher).
-
-**Linux**: If you include a specific third party software repository in the Linux package manager repository location, it's scanned when it performs software update operations. The package isn't available for assessment and installation if you remove it.
-
-Update Manager doesn't support managing the Configuration Manager client.
-
-## Supported regions
-
-Update Manager scales to all regions for both Azure VMs and Azure Arc-enabled servers. The following table lists the Azure public cloud where you can use Update Manager.
-
-#### [Azure Public cloud](#tab/public)
-
-### Azure VMs
-
-Azure Update Manager is available in all Azure public regions where compute virtual machines are available.
-
-### Azure Arc-enabled servers
--
-Azure Update Manager is currently supported in the following regions. It implies that VMs must be in the following regions.
-
-**Geography** | **Supported regions**
- |
-Africa | South Africa North
-Asia Pacific | East Asia </br> South East Asia
-Australia | Australia East </br> Australia Southeast
-Brazil | Brazil South
-Canada | Canada Central </br> Canada East
-Europe | North Europe </br> West Europe
-France | France Central
-Germany | Germany West Central
-India | Central India
-Japan | Japan East
-Korea | Korea Central
-Norway | Norway East
-Sweden | Sweden Central
-Switzerland | Switzerland North
-UAE | UAE North
-United Kingdom | UK South </br> UK West
-United States | Central US </br> East US </br> East US 2</br> North Central US </br> South Central US </br> West Central US </br> West US </br> West US 2 </br> West US 3
-
-#### [Azure for US Government](#tab/gov)
-
-**Geography** | **Supported regions** | **Details**
- | |
-United States | USGovVirginia </br> USGovArizona </br> USGovTexas | For both Azure VMs and Azure Arc-enabled servers </br> For both Azure VMs and Azure Arc-enabled servers </br> For Azure VMs only
-
-#### [Azure operated by 21Vianet](#tab/21via)
-
-**Geography** | **Supported regions** | **Details**
- | |
-China | ChinaEast </br> ChinaEast3 </br> ChinaNorth </br> ChinaNorth3 </br> ChinaEast2 </br> ChinaNorth2 | For Azure VMs only </br> For Azure VMs only </br> For Azure VMs only </br> For both Azure VMs and Azure Arc-enabled servers </br> For both Azure VMs and Azure Arc-enabled servers </br> For both Azure VMs and Azure Arc-enabled servers.
----

## Supported operating systems

>[!NOTE]
> - All operating systems are assumed to be x64. For this reason, x86 isn't supported for any operating system.
> - Update Manager doesn't support virtual machines created from CIS-hardened images.
-### Support for Azure Update Manager operations
-- [Periodic assessment, Schedule patching, On-demand assessment, and On-demand patching](#support-for-all-other-azure-update-manager-operations)
-- [Automatic VM guest patching](#support-for-automatic-vm-guest-patching)

### Support for automatic VM Guest patching
If [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching
- For marketplace images, see the list of [supported OS images](../virtual-machines/automatic-vm-guest-patching.md#supported-os-images).
- For VMs created from customized images, automatic VM guest patching doesn't work even if the Patch orchestration mode is set to `Azure Orchestrated/AutomaticByPlatform`. We recommend that you use scheduled patching to patch the machines by defining your own schedules, or install updates on-demand.
-### Support for all other Azure Update Manager operations
-
-Azure Update Manager supports the following operations:
--- [periodic assessment](assessment-options.md#periodic-assessment)-- [scheduled patching](prerequsite-for-schedule-patching.md)-- [on-demand assessment](assessment-options.md#check-for-updates-nowon-demand-assessment), and patching is described in the following sections:
+### Support for Check for Updates/One time Update/Periodic assessment and Scheduled patching
# [Azure VMs](#tab/azurevm-os)
The following table lists the workloads that aren't supported.
As Update Manager depends on your machine's OS package manager or update service, ensure that the Linux package manager or Windows Update client is enabled and can connect with an update source or repository. If you're running a Windows Server OS on your machine, see [Configure Windows Update settings](configure-wu-agent.md). +
+## Supported regions
+
+Update Manager scales to all regions for both Azure VMs and Azure Arc-enabled servers. The following table lists the Azure public cloud where you can use Update Manager.
+
+#### [Azure Public cloud](#tab/public)
+
+### Azure VMs
+
+Azure Update Manager is available in all Azure public regions where compute virtual machines are available.
+
+### Azure Arc-enabled servers
+
+Azure Update Manager is currently supported in the following regions. This means that your machines must be located in one of the following regions.
+
+**Geography** | **Supported regions**
+ |
+Africa | South Africa North
+Asia Pacific | East Asia </br> South East Asia
+Australia | Australia East </br> Australia Southeast
+Brazil | Brazil South
+Canada | Canada Central </br> Canada East
+Europe | North Europe </br> West Europe
+France | France Central
+Germany | Germany West Central
+India | Central India
+Japan | Japan East
+Korea | Korea Central
+Norway | Norway East
+Sweden | Sweden Central
+Switzerland | Switzerland North
+UAE | UAE North
+United Kingdom | UK South </br> UK West
+United States | Central US </br> East US </br> East US 2</br> North Central US </br> South Central US </br> West Central US </br> West US </br> West US 2 </br> West US 3
+
+#### [Azure for US Government](#tab/gov)
+
+**Geography** | **Supported regions** | **Details**
+ | |
+United States | USGovVirginia </br> USGovArizona </br> USGovTexas | For both Azure VMs and Azure Arc-enabled servers </br> For both Azure VMs and Azure Arc-enabled servers </br> For Azure VMs only
+
+#### [Azure operated by 21Vianet](#tab/21via)
+
+**Geography** | **Supported regions** | **Details**
+ | |
+China | ChinaEast </br> ChinaEast3 </br> ChinaNorth </br> ChinaNorth3 </br> ChinaEast2 </br> ChinaNorth2 | For Azure VMs only </br> For Azure VMs only </br> For Azure VMs only </br> For both Azure VMs and Azure Arc-enabled servers </br> For both Azure VMs and Azure Arc-enabled servers </br> For both Azure VMs and Azure Arc-enabled servers.
+++
+### Supported update sources
+For more information, see the supported [update sources](workflow-update-manager.md#update-source).
+
+### Supported update types
+The following types of updates are supported.
+
+#### Operating system updates
+Update Manager supports operating system updates for both Windows and Linux.
+
+Update Manager doesn't support driver updates.
+
+#### Extended Security Updates (ESU) for Windows Server
+
+Using Azure Update Manager, you can deploy Extended Security Updates (ESUs) for your Azure Arc-enabled Windows Server 2012 / R2 machines. ESUs are available by default for Azure virtual machines. To enroll in Windows Server 2012 Extended Security Updates on Arc-connected machines, follow the guidance in [How to get Extended Security Updates (ESU) for Windows Server 2012 and 2012 R2 via Azure Arc](https://learn.microsoft.com/windows-server/get-started/extended-security-updates-deploy#extended-security-updates-enabled-by-azure-arc).
++
+#### Microsoft application updates on Windows
+
+By default, the Windows Update client is configured to provide updates only for the Windows operating system.
+
+If you enable the **Give me updates for other Microsoft products when I update Windows** setting, you also receive updates for other Microsoft products. Updates include security patches for Microsoft SQL Server and other Microsoft software.
+
+Use one of the following options to perform the settings change at scale:
+
+- For all Windows Servers running on an earlier operating system than Windows Server 2016, run the following PowerShell script on the server you want to change:
+
+ ```azurepowershell-interactive
+
+ # Connect to the Windows Update Agent service manager and list the currently registered update services
+ $ServiceManager = (New-Object -com "Microsoft.Update.ServiceManager")
+ $ServiceManager.Services
+ # 7971f918-a847-4430-9279-4a52d1efe18d is the well-known service ID for Microsoft Update
+ $ServiceID = "7971f918-a847-4430-9279-4a52d1efe18d"
+ # Register Microsoft Update so the client also receives updates for other Microsoft products
+ $ServiceManager.AddService2($ServiceId,7,"")
+ ```
+
+- For servers running Windows Server 2016 or later, you can use Group Policy to control this process by downloading and using the latest Group Policy Administrative template files.
+
+> [!NOTE]
+> Run the following PowerShell script on the server to disable Microsoft application updates:
+
+ ```azurepowershell-interactive
+ # Connect to the Windows Update Agent service manager and list the currently registered update services
+ $ServiceManager = (New-Object -com "Microsoft.Update.ServiceManager")
+ $ServiceManager.Services
+ # Remove the Microsoft Update service registration so only Windows operating system updates are offered
+ $ServiceID = "7971f918-a847-4430-9279-4a52d1efe18d"
+ $ServiceManager.RemoveService($ServiceId)
+ ```
+
+#### Third party application updates
+
+#### [Windows](#tab/third-party-win)
+
+Update Manager relies on the locally configured update repository to update supported Windows systems, either WSUS or Windows Update. Tools such as [System Center Updates Publisher](https://learn.microsoft.com/mem/configmgr/sum/tools/updates-publisher) allow you to import and publish custom updates with WSUS. This scenario allows Update Manager to update machines that use Configuration Manager as their update repository with third party software. To learn how to configure Updates Publisher, see [Install Updates Publisher](https://learn.microsoft.com/mem/configmgr/sum/tools/install-updates-publisher).
+
+#### [Linux](#tab/third-party-lin)
+
+Third party application updates are supported in Azure Update Manager. If you include a specific third party software repository in the Linux package manager repository location, the repository is scanned when Update Manager performs software update operations. If you remove the repository, its packages are no longer available for assessment and installation.
+++
+As Update Manager depends on your machine's OS package manager or update service, ensure that the Linux package manager or Windows Update client is enabled and can connect with an update source or repository. If you're running a Windows Server OS on your machine, see [Configure Windows Update settings](configure-wu-agent.md).
++ ## Next steps - [View updates for a single machine](view-updates.md)
update-manager Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/troubleshoot.md
If you see an `HRESULT` error code, double-click the exception displayed in red
|Exception |Resolution or action |
|||
|`Exception from HRESULT: 0x……C` | Search the relevant error code in the [Windows Update error code list](https://support.microsoft.com/help/938205/windows-update-error-code-list) to find more information about the cause of the exception. |
-|`0x8024402C`</br>`0x8024401C`</br>`0x8024402F` | Indicates network connectivity problems. Make sure your machine has network connectivity to Update Management. For a list of required ports and addresses, see the [Network planning](overview.md#network-planning) section. |
+|`0x8024402C`</br>`0x8024401C`</br>`0x8024402F` | Indicates network connectivity problems. Make sure your machine has network connectivity to Update Management. For a list of required ports and addresses, see the [Network planning](prerequisites.md#network-planning) section. |
|`0x8024001E`| The update operation didn't finish because the service or system was shutting down.|
|`0x8024002E`| Windows Update service is disabled.|
|`0x8024402C` | If you're using a WSUS server, make sure the registry values for `WUServer` and `WUStatusServer` under the `HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate` registry key specify the correct WSUS server. |
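
For the WSUS case above, a quick way to confirm those values on the affected machine is a short local PowerShell check; the values exist only when a WSUS policy is applied.

```powershell
# Read the WSUS server settings used by the Windows Update client
Get-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate' -ErrorAction SilentlyContinue |
    Select-Object WUServer, WUStatusServer
```
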
update-manager Workflow Update Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/workflow-update-manager.md
+
+ Title: Azure Update Manager operations
+description: This article describes how Azure Update Manager works and how it manages system updates for your Windows and Linux machines in Azure.
++++ Last updated : 07/14/2024+++
+# How Update Manager works
+
+Update Manager assesses and applies updates to all Azure machines and Azure Arc-enabled servers for both Windows and Linux.
+
+![Diagram that shows the Update Manager workflow.](./media/overview/update-management-center-overview.png)
+
+## Update Manager VM extensions
+
+When an Azure Update Manager (AUM) operation is enabled or triggered on your Azure or Arc-enabled server, AUM installs an [Azure extension](../virtual-machines/extensions/overview.md) or [Arc-enabled servers extension](../azure-arc/servers/manage-vm-extensions.md), respectively, on your machine to manage the updates.
+
+The extension is installed automatically the first time you initiate any Update Manager operation on your machine, such as Check for updates, Install one-time update, or Periodic assessment, or the first time a scheduled update deployment runs on the machine.
+
+You don't have to install the extension explicitly; Azure Update Manager manages its lifecycle, including installation and configuration. The Update Manager extension is installed and managed by using the following agents, which are required for Update Manager to work on your machines:
+
+- [Azure VM Windows agent](../virtual-machines/extensions/agent-windows.md) or the [Azure VM Linux agent](../virtual-machines/extensions/agent-linux.md) for Azure VMs.
+- [Azure Arc-enabled servers agent](../azure-arc/servers/agent-overview.md)
+
+>[!NOTE]
+> Arc connectivity is a prerequisite for Update Manager on non-Azure machines, including Arc-enabled VMware and SCVMM machines.
+
+For Azure machines, a single extension is installed, whereas for Azure Arc-enabled machines, two extensions are installed. The following tables list the extensions that are installed:
+
+#### [Azure VM extensions](#tab/azure-vms)
+
+| Operating system| Extension
+|-|-|
+|Windows | Microsoft.CPlat.Core.WindowsPatchExtension|
+|Linux | Microsoft.CPlat.Core.LinuxPatchExtension |
+
+#### [Azure Arc-enabled VM extensions](#tab/azure-arc-vms)
+
+| Operating system| Extension
+|-|-|
+|Windows | Microsoft.CPlat.Core.WindowsPatchExtension (Periodic assessment) <br> Microsoft.SoftwareUpdateManagement.WindowsOsUpdateExtension (On-demand operations and Schedule patching) |
+|Linux | Microsoft.SoftwareUpdateManagement.LinuxOsUpdateExtension (On-demand operations and Schedule patching) <br> Microsoft.CPlat.Core.LinuxPatchExtension (Periodic assessment) |
+
+To view the available extensions for a VM in the Azure portal:
+
+1. Go to the [Azure portal](https://portal.azure.com) and select a VM.
+1. On the VM home page, under **Settings**, select **Extensions + applications**.
+1. On the **Extensions** tab, you can view the available extensions.
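
You can also list the installed extensions with Azure PowerShell. A minimal sketch for an Azure VM, using placeholder resource names:

```azurepowershell-interactive
# List the extensions installed on an Azure VM, including the Update Manager patch extension
Get-AzVMExtension -ResourceGroupName "rg-updates" -VMName "vm-prod-01" |
    Select-Object Name, Publisher, ProvisioningState
```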
++
+## Update source
+
+Azure Update Manager honors the update source settings on the machine and will fetch updates accordingly. AUM doesn't publish or provide updates.
+
+#### [Windows](#tab/update-win)
+
+If the [Windows Update Agent (WUA)](https://learn.microsoft.com/windows/win32/wua_sdk/updating-the-windows-update-agent) is configured to fetch updates from the Windows Update repository, the Microsoft Update repository, or [Windows Server Update Services](https://learn.microsoft.com/windows-server/administration/windows-server-update-services/get-started/windows-server-update-services-wsus) (WSUS), AUM honors these settings. For more information, see how to [configure the Windows Update client](https://learn.microsoft.com/windows-server/administration/windows-server-update-services/get-started/windows-server-update-services-wsus). By default, **the client is configured to fetch updates from the Windows Update repository**.
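
To see which update service the Windows Update Agent on a machine is registered with, you can query the agent's `Microsoft.Update.ServiceManager` COM object locally. A minimal sketch, run on the machine itself:

```powershell
# List the update services registered with the Windows Update Agent and flag the default one
(New-Object -ComObject "Microsoft.Update.ServiceManager").Services |
    Select-Object Name, ServiceID, IsDefaultAUService
```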
+
+#### [Linux](#tab/update-lin)
+
+If the package manager points to a public YUM, APT, or Zypper repository, or to a local repository, AUM honors the settings of the package manager.
+++
+AUM performs the following steps:
+
+- Retrieve the assessment information about the status of system updates, as reported by the Windows Update client or Linux package manager.
+- Initiate the download and installation of updates with the Windows Update client or Linux package manager.
+
+>[!Note]
+> 1. The machines will report their update status based on the source they are configured to synchronize with. If the Windows Update service is configured to report to WSUS, the results in Update Manager might differ from what Microsoft Update shows, depending on when WSUS last synchronized with Microsoft Update. This behavior is the same for Linux machines that are configured to report to a local repository instead of a public package repository.
+> 1. Update Manager only finds updates that the Windows Update service finds when you select the **Check for updates** button on the local Windows system. On Linux systems, only updates in the local repository are discovered.
+
+## Updates data stored in Azure Resource Graph
+
+The Update Manager extension pushes all pending updates information and update installation results to [Azure Resource Graph](https://learn.microsoft.com/azure/governance/resource-graph/overview), where the data is retained for the following time periods:
+
+|Data | Retention period in Azure Resource Graph |
+|||
+|Pending updates (ARG table name: patchassessmentresources) | 7 days|
+|Update installation results (ARG table name: patchinstallationresources)| 30 days|
+
+For more information, see [log structure of Azure Resource Graph](query-logs.md) and [sample queries](sample-query-logs.md).
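
For example, the pending-updates records can be queried with the `Az.ResourceGraph` PowerShell module. This minimal sketch uses only the table name documented above and standard Resource Graph columns:

```azurepowershell-interactive
# Query pending-update assessment records stored in Azure Resource Graph
Search-AzGraph -Query @"
patchassessmentresources
| project id, name, type, properties
| limit 10
"@
```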
+
+## How patches are installed in Azure Update Manager
+
+In Azure Update Manager, patches are installed in the following manner:
+
+1. It begins with a fresh assessment of the available updates on the VM.
+1. Update installation follows the assessment.
+ - In Windows, the selected updates that meet the customer's criteria are installed one by one,
+ - In Linux, they're installed in batches.
+1. During update installation, maintenance window utilization is checked at multiple steps. For Windows and Linux, 10 minutes and 15 minutes of the maintenance window, respectively, are reserved for reboot at any point. Before proceeding with the installation of the remaining updates, Update Manager checks that the expected reboot time plus the average update installation time (for the next update or next set of updates) doesn't exceed the maintenance window.
+In Windows, the average update installation time is 10 minutes for all types of updates except service pack updates, for which it's 15 minutes.
+1. Note that an ongoing update installation (once started based on the calculation above) isn't forcibly stopped even if it exceeds the maintenance window, to avoid landing the machine in a possibly undetermined state. However, it doesn't continue installing the remaining updates once the maintenance window has been exceeded, and a maintenance window exceeded error is thrown in such cases.
+1. Patching/update installation is marked as successful only if all selected updates are installed and all operations involved (including reboot and assessment) succeed. Otherwise, it's marked as Failed or Completed with warnings. For example:
+
+ |Scenario |Update installation status |
+ |||
+ |One of the selected updates fails to install.| Failed |
+ |Reboot doesn't happen for any reason & wait time for reboot times out. | Failed |
+ | Machine fails to start during a reboot. | Failed |
+ | Initial or final assessment failed| Failed |
+ | Reboot is required by the updates, but Never reboot option is selected. | Completed with warnings|
+ | ESM packages are skipped from patching on Ubuntu 18 or lower if an Ubuntu Pro license isn't present. | Completed with warnings|
+1. An assessment is conducted at the end. The reboot and the assessment at the end of the update installation might not occur in some cases, for example, if the maintenance window has already been exceeded or if the update installation fails.
+
+## Next steps
+
+- [Prerequisites of Update Manager](prerequisites.md)
+- [View updates for a single machine](view-updates.md).
+- [Deploy updates now (on-demand) for a single machine](deploy-updates.md).
+- [Enable periodic assessment at scale using policy](https://aka.ms/aum-policy-support).
+- [Schedule recurring updates](scheduled-patching.md)
+- [Manage update settings via the portal](manage-update-settings.md).
+- [Manage multiple machines by using Update Manager](manage-multiple-machines.md).
virtual-desktop Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new.md
For more information and instructions, see [Add and manage app attach and MSIX
Here's what changed in May 2024:
+### Windows multi-session 11 with Microsoft 365 Apps gallery images now pre-install new Microsoft Teams
+
+Windows multi-session 11 with Microsoft 365 Apps images in the Azure Marketplace now come with the new Microsoft Teams pre-installed (not Teams (Classic)). This applies to Windows Enterprise multi-session 11 23H2 and 22H2.
+ ### Configuring client device redirection for Windows App and the Remote Desktop app using Microsoft Intune is now in preview You can now use Microsoft Intune to configure client device redirection settings for Windows App and the Remote Desktop app in preview. IT admins can configure different redirection scenarios based on group membership and whether the device is managed by Intune or unmanaged. Additional capabilities include the ability to check and restrict access to Azure Virtual Desktop based on criteria such as OS version, allowed app (Windows App or the Remote Desktop app), allowed app version number, whether a threat is detected by Mobile Threat Defense (MTD), the device is jailbroken/rooted, and more.
virtual-machines Troubleshoot Maintenance Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/troubleshoot-maintenance-configurations.md
To create a dynamic scope, you must have the permission at the subscription leve
1. The subscription/resource group at which the dynamic scope is being created. 1. The maintenance configuration scope.
-For more information, see the [list of permissions list for various resources here](../update-manager/overview.md#permissions).
+For more information, see the [list of permissions list for various resources here](../update-manager/roles-permissions.md#permissions).
### An update is stuck and not progressing
virtual-network Virtual Network Encryption Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-encryption-overview.md
Previously updated : 05/06/2024- Last updated : 07/18/2024+ # Customer intent: As a network administrator, I want to learn about encryption in Azure Virtual Network so that I can secure my network traffic.
Azure Virtual Network encryption has the following limitations:
- **AllowUnencrypted** is the only supported enforcement at general availability. **DropUnencrypted** enforcement will be supported in the future. -- Virtual networks with encryption enabled do not support [Azure DNS Private Resolver](/azure/dns/dns-private-resolver-overview).
+- Virtual networks with encryption enabled don't support [Azure DNS Private Resolver](/azure/dns/dns-private-resolver-overview).
-## Next steps
+## Supported scenarios
-- For more information about Azure Virtual Networks, see [What is Azure Virtual Network?](/azure/virtual-network/virtual-networks-overview)
+Virtual network encryption is supported in the following scenarios:
+| Scenario | Support |
+| | |
+| VMs in the same virtual network (including virtual machine scale sets and their internal load balancer) | Supported on traffic between VMs from these [SKUs](#requirements). |
+| Virtual network peering | Supported on traffic between VMs across regional peering. |
+| Global virtual network peering | Supported on traffic between VMs across global peering. |
+| Azure Kubernetes Service (AKS) | - Supported on AKS using Azure CNI (regular or overlay mode), Kubenet, or BYOCNI: node and pod traffic is encrypted.<br> - Partially supported on AKS using Azure CNI Dynamic Pod IP Assignment (podSubnetId specified): node traffic is encrypted, but pod traffic isn't encrypted.<br> - Traffic to the AKS managed control plane egresses from the virtual network and thus isn't in scope for virtual network encryption. However, this traffic is always encrypted via TLS. |
+> [!NOTE]
+> Support for other services that currently don't support virtual network encryption is included in our future roadmap.
+
+## Related content
+
+- [Create a virtual network with encryption using the Azure portal](how-to-create-encryption-portal.md).
+- [Virtual network encryption frequently asked questions (FAQ)](virtual-network-encryption-faq.yml).
+- [What is Azure Virtual Network?](virtual-networks-overview.md)
vpn-gateway Gateway Change Active Active https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/gateway-change-active-active.md
+
+ Title: 'Change a gateway to active-active mode'
+
+description: Learn how to change a VPN gateway from active-standby to active-active.
+++ Last updated : 07/19/2024++++
+# Change a VPN gateway to active-active
+
+The steps in this article help you change active-standby VPN gateways to active-active. You can also change an active-active gateway to active-standby. For more information about active-active gateways, see [About active-active gateways](vpn-gateway-about-vpn-gateway-settings.md#active) and [About highly-available gateway connections](vpn-gateway-highlyavailable.md).
+
+## Change active-standby to active-active
+
+Use the following steps to convert an active-standby mode gateway to active-active mode. If your gateway was created using the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md), you can also upgrade the SKU on this page.
+
+1. Navigate to the page for your virtual network gateway.
+
+1. On the left menu, select **Configuration**.
+
+1. On the **Configuration** page, configure the following settings:
+
+ * Change the Active-active mode to **Enabled**.
+ * Click **Add new** to add another public IP address. If you already have an available IP address that you previously created and can dedicate to this resource, you can instead select it from the **SECOND PUBLIC IP ADDRESS** dropdown.
+
+ :::image type="content" source="./media/active-active-portal/active-active.png" alt-text="Screenshot shows the Configuration page with active-active mode enabled." lightbox="./media/active-active-portal/active-active.png":::
+
+1. On the **Choose public IP address** page, either specify an existing public IP address that meets the criteria, or select **+Create new** to create a new public IP address to use for the second VPN gateway instance. After you've specified the second public IP address, click **OK**.
+
+1. At the top of the **Configuration** page, click **Save**. This update can take about 30-45 minutes to complete.
+
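The same change can also be made with Azure PowerShell. The following is a minimal sketch with placeholder names; it creates a second public IP address, adds a second gateway IP configuration on the existing GatewaySubnet, and then enables active-active mode. Verify the cmdlet parameters against your Az.Network version.

```azurepowershell-interactive
# Placeholder names; adjust to your environment
$rg = "rg-vpn"
$gw = Get-AzVirtualNetworkGateway -Name "vpngw1" -ResourceGroupName $rg

# Create a second public IP address for the second gateway instance
$pip2 = New-AzPublicIpAddress -Name "vpngw1-pip2" -ResourceGroupName $rg `
    -Location $gw.Location -AllocationMethod Static -Sku Standard

# Add a second IP configuration that uses the existing GatewaySubnet
$vnet   = Get-AzVirtualNetwork -Name "vnet-hub" -ResourceGroupName $rg
$subnet = Get-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet
Add-AzVirtualNetworkGatewayIpConfig -VirtualNetworkGateway $gw -Name "gwipconfig2" `
    -SubnetId $subnet.Id -PublicIpAddressId $pip2.Id

# Enable active-active mode; this update can take 30-45 minutes to complete
Set-AzVirtualNetworkGateway -VirtualNetworkGateway $gw -EnableActiveActiveFeature
```
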
+> [!IMPORTANT]
+> If you have BGP sessions running, be aware that the Azure VPN Gateway BGP configuration will change and two newly assigned BGP IPs will be provisioned within the Gateway Subnet address range. The old Azure VPN Gateway BGP IP address will no longer exist. This will incur downtime and updating the BGP peers on the on-premises devices will be required. Once the gateway is finished provisioning, the new BGP IPs can be obtained and the on-premises device configuration will need to be updated accordingly. This applies to non APIPA BGP IPs. To understand how to configure BGP in Azure, see [How to configure BGP on Azure VPN Gateways](bgp-howto.md).
+>
+
+## Change active-active to active-standby
+
+Use the following steps to convert an active-active mode gateway to active-standby mode.
+
+1. Navigate to the page for your virtual network gateway.
+
+1. On the left menu, select **Configuration**.
+
+1. On the **Configuration** page, change the Active-active mode to **Disabled**.
+
+1. At the top of the **Configuration** page, click **Save**.
+
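As with enabling the feature, this change can be made with Azure PowerShell. A minimal sketch with placeholder names:

```azurepowershell-interactive
# Disable active-active mode on an existing gateway (placeholder names)
$gw = Get-AzVirtualNetworkGateway -Name "vpngw1" -ResourceGroupName "rg-vpn"
Set-AzVirtualNetworkGateway -VirtualNetworkGateway $gw -DisableActiveActiveFeature
```
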
+> [!IMPORTANT]
+> If you have BGP sessions running, be aware that the Azure VPN Gateway BGP configuration will change from two BGP IP addresses to a single BGP address. The platform generally assigns the last usable IP of the Gateway Subnet. This will incur downtime and updating the BGP peers on the on-premises devices will be required. This applies to non APIPA BGP IPs. To understand how to configure BGP in Azure, see [How to configure BGP on Azure VPN Gateways](bgp-howto.md).
+>
+
+## Next steps
+
+For more information about active-active gateways, see [About active-active gateways](vpn-gateway-about-vpn-gateway-settings.md#active).