Updates from: 04/05/2024 01:12:22
Service Microsoft Docs article Related commit history on GitHub Change details
advisor Advisor How To Calculate Total Cost Savings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-how-to-calculate-total-cost-savings.md
+
+ Title: Export cost savings in Azure Advisor
+ Last updated : 02/06/2024
+description: Export cost savings in Azure Advisor and calculate the aggregated potential yearly savings by using the cost savings amount for each recommendation.
++
+# Export cost savings
+
+To calculate aggregated potential yearly savings, follow these steps:
+
+1. Sign in to the [**Azure portal**](https://portal.azure.com).
+
+1. Search for and select [**Advisor**](https://aka.ms/azureadvisordashboard) from any page.\
+The Advisor **Overview** page opens.
+
+1. Export cost recommendations by navigating to the **Cost** tab on the left navigation menu and choosing **Download as CSV**.
+
+1. Use the cost savings amount for each recommendation to calculate aggregated potential yearly savings.
+
+ [![Screenshot of the Azure Advisor cost recommendations page that shows download option.](./media/advisor-how-to-calculate-total-cost-savings.png)](./media/advisor-how-to-calculate-total-cost-savings.png#lightbox)
+
+> [!NOTE]
+> Recommendations show savings individually and might overlap with the savings shown in other recommendations. For example, you can benefit from either savings plans for compute or reservations for virtual machines, but not from both.
+
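The exported CSV gives you one row per recommendation. Below is a minimal sketch of the aggregation step, assuming the export is saved as `cost-recommendations.csv` and the savings column is named `Potential Annual Cost Savings` (both names are illustrative; check the headers in your own export):

```python
# Minimal sketch: sum per-recommendation savings from the Advisor CSV export.
# File name and column name are assumptions - verify them against your export.
import csv

total = 0.0
with open("cost-recommendations.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        value = row.get("Potential Annual Cost Savings", "").strip()
        if value:
            total += float(value.replace(",", ""))

print(f"Aggregated potential yearly savings: {total:,.2f}")
```

Keep the preceding note in mind: because individual recommendations can overlap, treat the sum as an upper bound rather than an exact figure.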
advisor Advisor Reference Operational Excellence Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-operational-excellence-recommendations.md
Virtual Network flow log allows you to record IP traffic flowing in a virtual ne
Learn more about [Resource - UpgradeNSGToVnetFlowLog (Upgrade NSG flow logs to VNet flow logs)](https://aka.ms/vnetflowlogspreviewdocs).
+### Migrate Azure Front Door (classic) to Standard/Premium tier
+On 31 March 2027, Azure Front Door (classic) will be retired for the public cloud, and you'll need to migrate to Front Door Standard or Premium by that date.
+
+Beginning 1 April 2025, you'll no longer be able to create new Front Door (classic) resources via the Azure portal, Terraform, or any command line tools. However, you can continue to make modifications to existing resources until Front Door (classic) is fully retired.
+
+Azure Front Door Standard and Premium combine the capabilities of static and dynamic content delivery with turnkey security, enhanced DevOps experiences, simplified pricing, and better Azure integrations.
+
+Learn more about [Azure Front Door (classic) will be retired on 31 March 2027](https://azure.microsoft.com/updates/azure-front-door-classic-will-be-retired-on-31-march-2027/).
ai-services Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/manage-costs.md
Azure OpenAI fine-tuned models are charged based on three factors:
The hosting hours cost is important to be aware of since after a fine-tuned model is deployed, it continues to incur an hourly cost regardless of whether you're actively using it. Monitor fine-tuned model costs closely.
+> [!IMPORTANT]
+> After you deploy a customized model, if the deployment remains inactive for more than fifteen (15) days,
+> the deployment is deleted. The deployment of a customized model is _inactive_ if the model was deployed more than fifteen (15) days ago
+> and no completions or chat completions calls were made to it during a continuous 15-day period.
+>
+> The deletion of an inactive deployment doesn't delete or affect the underlying customized model,
+> and the customized model can be redeployed at any time.
+>
+> Each customized (fine-tuned) model that's deployed incurs an hourly hosting cost regardless of whether completions
+> or chat completions calls are being made to the model.
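To put the hosting charge in perspective, here's a back-of-the-envelope sketch. The hourly rate below is a placeholder, not a published price; substitute the rate from the Azure OpenAI pricing page for your region and model.

```python
# Illustrative only: estimate the monthly hosting cost of an idle fine-tuned deployment.
# hourly_hosting_rate is a placeholder - use the rate from the Azure OpenAI pricing page.
hourly_hosting_rate = 1.70          # assumed USD per hour
hours_per_month = 24 * 30

idle_monthly_cost = hourly_hosting_rate * hours_per_month
print(f"Estimated hosting cost for one idle month: ${idle_monthly_cost:,.2f}")  # $1,224.00
```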
### Other costs that might accrue with Azure OpenAI Service
ai-studio Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/architecture.md
description: Learn about the architecture of Azure AI Studio.
Previously updated : 02/06/2024 Last updated : 04/03/2024
For information on registering resource providers, see [Register an Azure resour
## Role-based access control and control plane proxy
-Azure AI Services and Azure OpenAI provide control plane endpoints for operations such as listing model deployments. These endpoints are secured using a separate Azure role-based access control (RBAC) configuration than the one used for Azure AI hub.
+Azure AI Services and Azure OpenAI provide control plane endpoints for operations such as listing model deployments. These endpoints are secured using a separate Azure role-based access control (Azure RBAC) configuration than the one used for Azure AI hub.
To reduce the complexity of Azure RBAC management, AI Studio provides a *control plane proxy* that allows you to perform operations on connected Azure AI Services and Azure OpenAI resources. Performing operations on these resources through the control plane proxy only requires Azure RBAC permissions on the AI hub. The Azure AI Studio service then performs the call to the Azure AI Services or Azure OpenAI control plane endpoint on your behalf. For more information, see [Role-based access control in Azure AI Studio](rbac-ai-studio.md).
+## Attribute-based access control
+
+Each AI hub you create has a default storage account. Each child AI project of the AI hub inherits the storage account of the AI hub. The storage account is used to store data and artifacts.
+
+To secure the shared storage account, Azure AI Studio uses both Azure RBAC and Azure attribute-based access control (Azure ABAC). Azure ABAC is a security model that defines access control based on attributes associated with the user, resource, and environment. Each AI project has:
+
+- A service principal that is assigned the Storage Blob Data Contributor role on the storage account.
+- A unique ID (workspace ID).
+- A set of containers in the storage account. Each container has a prefix that corresponds to the workspace ID value for the AI project.
+
+The role assignment for each AI project's service principal has a condition that only allows the service principal access to containers with the matching prefix value. This condition ensures that each AI project can only access its own containers.
+
+> [!NOTE]
+> For data encryption in the storage account, the scope is the entire storage account and not per-container. So all containers are encrypted using the same key (provided either by Microsoft or by the customer).
+
+For more information on Azure attribute-based access control, see [What is Azure attribute-based access control](/azure/role-based-access-control/conditions-overview).
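Conceptually, the condition behaves like the following prefix check. This is a simplified illustration in code, not the actual condition expression that AI Studio generates on the role assignment.

```python
# Simplified illustration of per-project container scoping in the shared storage account.
# Real enforcement happens through Azure ABAC conditions on the role assignment, not client code.
def project_can_access(container_name: str, workspace_id: str) -> bool:
    # An AI project may only access containers whose names carry its workspace ID prefix.
    return container_name.startswith(workspace_id)

workspace_id = "1a2b3c4d"  # hypothetical workspace ID
print(project_can_access(f"{workspace_id}-blobstore", workspace_id))  # True
print(project_can_access("5e6f7a8b-blobstore", workspace_id))         # False
```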
+ ## Encryption
-Azure AI Studio uses encryption to protect data at rest and in transit. By default, Microsoft-managed keys are used for encryption, however you can use your own encryption keys. For more information, see [Customer-managed keys](../../ai-services/encryption/cognitive-services-encryption-keys-portal.md?context=/azure/ai-studio/context/context).
+Azure AI Studio uses encryption to protect data at rest and in transit. By default, Microsoft-managed keys are used for encryption. However, you can use your own encryption keys. For more information, see [Customer-managed keys](../../ai-services/encryption/cognitive-services-encryption-keys-portal.md?context=/azure/ai-studio/context/context).
## Virtual network
ai-studio Vulnerability Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/vulnerability-management.md
description: Learn how Azure AI Studio manages vulnerabilities in images that th
Previously updated : 02/22/2024 Last updated : 4/4/2024
This article discusses these responsibilities and outlines the vulnerability man
## Microsoft-managed VM images
-Azure AI Studio manages host OS virtual machine (VM) images for compute instances and serverless compute clusters. The update frequency is monthly and includes the following details:
+Microsoft manages host OS virtual machine (VM) images for compute instances and serverless compute clusters. The update frequency is monthly and includes the following details:
* For each new VM image version, the latest updates are sourced from the original publisher of the OS. Using the latest updates helps ensure that you get all applicable OS-related patches. For Azure AI Studio, the publisher is Canonical for all the Ubuntu images.
* VM images are updated monthly.
-* In addition to patches that the original publisher applies, Azure AI Studio updates system packages when updates are available.
+* In addition to patches that the original publisher applies, Microsoft updates system packages when updates are available.
-* Azure AI Studio checks and validates any machine learning packages that might require an upgrade. In most circumstances, new VM images contain the latest package versions.
+* Microsoft checks and validates any machine learning packages that might require an upgrade. In most circumstances, new VM images contain the latest package versions.
-* All VM images are built on secure subscriptions that run vulnerability scanning regularly. Azure AI Studio flags any unaddressed vulnerabilities and fixes them within the next release.
+* All VM images are built on secure subscriptions that run vulnerability scanning regularly. Microsoft flags any unaddressed vulnerabilities and fixes them within the next release.
-* The frequency is a monthly interval for most images. For compute instances, the image release is aligned with the release cadence of the Azure AI Studio SDK that's preinstalled in the environment.
+* The frequency is a monthly interval for most images. For compute instances, the image release is aligned with the release cadence of the Azure AI SDK that's preinstalled in the environment.
-In addition to the regular release cadence, Azure AI Studio applies hotfixes if vulnerabilities surface. Microsoft rolls out hotfixes within 72 hours for serverless compute clusters and within a week for compute instances.
+In addition to the regular release cadence, Microsoft applies hotfixes if vulnerabilities surface. Microsoft rolls out hotfixes within 72 hours for serverless compute clusters and within a week for compute instances.
> [!NOTE]
> The host OS is not the OS version that you might specify for an environment when you're training or deploying a model. Environments run inside Docker. Docker runs on the host OS.

## Microsoft-managed container images
-[Base docker images](https://github.com/Azure/AzureML-Containers) that Azure AI Studio maintains get security patches frequently to address newly discovered vulnerabilities.
+[Base docker images](https://github.com/Azure/AzureML-Containers) that Microsoft maintains for Azure AI Studio get security patches frequently to address newly discovered vulnerabilities.
-Azure AI Studio releases updates for supported images every two weeks to address vulnerabilities. As a commitment, we aim to have no vulnerabilities older than 30 days in the latest version of supported images.
+Microsoft releases updates for supported images every two weeks to address vulnerabilities. As a commitment, we aim to have no vulnerabilities older than 30 days in the latest version of supported images.
Patched images are released under a new immutable tag and an updated `:latest` tag. Using the `:latest` tag or pinning to a particular image version might be a tradeoff between security and environment reproducibility for your machine learning job.
In Azure AI Studio, Docker images are used to provide a runtime environment for [prompt flow deployments](../how-to/flow-deploy.md). The images are built from a base image that Azure AI Studio provides.
-Although Azure AI Studio patches base images with each release, whether you use the latest image might be tradeoff between reproducibility and vulnerability management. It's your responsibility to choose the environment version that you use for your jobs or model deployments.
+Although Microsoft patches base images with each release, whether you use the latest image might be a tradeoff between reproducibility and vulnerability management. It's your responsibility to choose the environment version that you use for your jobs or model deployments.
By default, dependencies are layered on top of base images when you're building an image. After you install more dependencies on top of the Microsoft-provided images, vulnerability management becomes your responsibility.
-Associated with your AI hub resource is an Azure Container Registry instance that functions as a cache for container images. Any image that materializes is pushed to the container registry. The workspace uses it when deployment is triggered for the corresponding environment.
+Associated with your AI hub resource is an Azure Container Registry instance that functions as a cache for container images. Any image that materializes is pushed to the container registry. The AI hub uses it when deployment is triggered for the corresponding environment.
The AI hub doesn't delete any image from your container registry. You're responsible for evaluating the need for an image over time. To monitor and maintain environment hygiene, you can use [Microsoft Defender for Container Registry](/azure/defender-for-cloud/defender-for-container-registries-usage) to help scan your images for vulnerabilities. To automate your processes based on triggers from Microsoft Defender, see [Automate remediation responses](/azure/defender-for-cloud/workflow-automation).
Managed compute nodes in Azure AI Studio use Microsoft-managed OS VM images. When you provision a node, it pulls the latest updated VM image. This behavior applies to compute instance, serverless compute cluster, and managed inference compute options.
-Although OS VM images are regularly patched, Azure AI Studio doesn't actively scan compute nodes for vulnerabilities while they're in use. For an extra layer of protection, consider network isolation of your computes.
+Although OS VM images are regularly patched, Microsoft doesn't actively scan compute nodes for vulnerabilities while they're in use. For an extra layer of protection, consider network isolation of your computes.
Ensuring that your environment is up to date and that compute nodes use the latest OS version is a shared responsibility between you and Microsoft. Nodes that aren't idle can't be updated to the latest VM image. Considerations are slightly different for each compute type, as listed in the following sections.
ai-studio Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/configure-private-link.md
[!INCLUDE [Azure AI Studio preview](../includes/preview-ai-studio.md)]
-We have two network isolation aspects. One is the network isolation to access an Azure AI. Another is the network isolation of computing resources in your Azure AI and Azure AI projects such as Compute Instance, Serverless and Managed Online Endpoint. This document explains the former highlighted in the diagram. You can use private link to establish the private connection to your Azure AI and its default resources.
+There are two aspects of network isolation. One is network isolation for access to an Azure AI. The other is network isolation of the computing resources in your Azure AI and Azure AI projects, such as compute instances, serverless compute, and managed online endpoints. This article explains the former, which is highlighted in the diagram. You can use a private link to establish the private connection to your Azure AI and its default resources. This article is for Azure AI; for information on Azure AI Services, see the [Azure AI Services documentation](/azure/ai-services/cognitive-services-virtual-networks).
:::image type="content" source="../media/how-to/network/azure-ai-network-inbound.svg" alt-text="Diagram of Azure AI network isolation." lightbox="../media/how-to/network/azure-ai-network-inbound.png":::
ai-studio Deploy Models Cohere Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-cohere-command.md
+
+ Title: How to deploy Cohere Command models with Azure AI Studio
+
+description: Learn how to deploy Cohere Command models with Azure AI Studio.
+ Last updated : 04/02/2024
+# How to deploy Cohere Command models with Azure AI Studio
++
+In this article, you learn how to use Azure AI Studio to deploy the Cohere Command models as a service with pay-as-you-go billing.
+
+Cohere offers two Command models in [Azure AI Studio](https://ai.azure.com). These models are available with pay-as-you-go, token-based billing through Models as a Service.
+* Cohere Command R
+* Cohere Command R+
+
+You can browse the Cohere family of models in the [Model Catalog](model-catalog.md) by filtering on the Cohere collection.
+
+## Models
+
+
+### Cohere Command R
+Command R is a highly performant generative large language model, optimized for various use cases including reasoning, summarization, and question answering.
++
+*Model Architecture:* An auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model uses supervised fine-tuning (SFT) and preference training to align model behavior to human preferences for helpfulness and safety.
+
+*Languages covered:* The model is optimized to perform well in the following languages: English, French, Spanish, Italian, German, Brazilian Portuguese, Japanese, Korean, Simplified Chinese, and Arabic.
+
+Pre-training data additionally included the following 13 languages: Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, Persian.
+
+*Context length:* Command R supports a context length of 128K.
+
+*Input:* Models input text only.
+
+*Output:* Models generate text only.
+
+
+### Cohere Command R+
+Command R+ is a highly performant generative large language model, optimized for various use cases including reasoning, summarization, and question answering.
++
+*Model Architecture:* An auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model uses supervised fine-tuning (SFT) and preference training to align model behavior to human preferences for helpfulness and safety.
+
+*Languages covered:* The model is optimized to perform well in the following languages: English, French, Spanish, Italian, German, Brazilian Portuguese, Japanese, Korean, Simplified Chinese, and Arabic.
+
+Pre-training data additionally included the following 13 languages: Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, Persian.
+
+*Context length:* Command R+ supports a context length of 128K.
+
+*Input:* Models input text only.
+
+*Output:* Models generate text only.
++
+## Deploy with pay-as-you-go
+
+Certain models in the model catalog can be deployed as a service with pay-as-you-go, providing a way to consume them as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need. This deployment option doesn't require quota from your subscription.
+
+The previously mentioned Cohere models can be deployed as a service with pay-as-you-go, and are offered by Cohere through the Microsoft Azure Marketplace. Cohere can change or update the terms of use and pricing of these models.
+
+### Prerequisites
+
+- An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.
+- An [Azure AI hub resource](../how-to/create-azure-ai-resource.md).
+
+ > [!IMPORTANT]
+ > For Cohere family models, the pay-as-you-go model deployment offering is only available with AI hubs created in EastUS, EastUS2 or Sweden Central regions.
+
+- An [Azure AI project](../how-to/create-projects.md) in Azure AI Studio.
+- Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Studio. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the resource group. For more information on permissions, see [Role-based access control in Azure AI Studio](../concepts/rbac-ai-studio.md).
++
+### Create a new deployment
+
+To create a deployment:
+
+1. Sign in to [Azure AI Studio](https://ai.azure.com).
+1. Select **Model catalog** from the **Explore** tab and search for *Cohere*.
+
+ Alternatively, you can initiate a deployment by starting from your project in AI Studio. From the **Build** tab of your project, select **Deployments** > **+ Create**.
+
+1. In the model catalog, on the model's **Details** page, select **Deploy** and then **Pay-as-you-go**.
+
+ :::image type="content" source="../media/deploy-monitor/cohere-command/command-r-deploy-pay-as-you-go.png" alt-text="A screenshot showing how to deploy a model with the pay-as-you-go option." lightbox="../media/deploy-monitor/cohere-command/command-r-deploy-pay-as-you-go.png":::
+
+1. Select the project in which you want to deploy your model. To deploy the model, your project must be in the EastUS, EastUS2 or Sweden Central regions.
+1. In the deployment wizard, select the link to **Azure Marketplace Terms** to learn more about the terms of use.
+1. You can also select the **Marketplace offer details** tab to learn about pricing for the selected model.
+1. If this is your first time deploying the model in the project, you have to subscribe your project for the particular offering. This step requires that your account has the **Azure AI Developer role** permissions on the Resource Group, as listed in the prerequisites. Each project has its own subscription to the particular Azure Marketplace offering of the model, which allows you to control and monitor spending. Select **Subscribe and Deploy**. Currently you can have only one deployment for each model within a project.
+
+ :::image type="content" source="../media/deploy-monitor/cohere-command/command-r-marketplace-terms.png" alt-text="A screenshot showing the terms and conditions of a given model." lightbox="../media/deploy-monitor/cohere-command/command-r-marketplace-terms.png":::
+
+1. Once you subscribe the project for the particular Azure Marketplace offering, subsequent deployments of the _same_ offering in the _same_ project don't require subscribing again. If this scenario applies to you, there's a **Continue to deploy** option to select (Currently you can have only one deployment for each model within a project).
+
+ :::image type="content" source="../media/deploy-monitor/cohere-command/command-r-existing-subscription.png" alt-text="A screenshot showing a project that is already subscribed to the offering." lightbox="../media/deploy-monitor/cohere-command/command-r-existing-subscription.png":::
+
+1. Give the deployment a name. This name becomes part of the deployment API URL. This URL must be unique in each Azure region.
+
+ :::image type="content" source="../media/deploy-monitor/cohere-command/command-r-deployment-name.png" alt-text="A screenshot showing how to indicate the name of the deployment you want to create." lightbox="../media/deploy-monitor/cohere-command/command-r-deployment-name.png":::
+
+1. Select **Deploy**. Wait until the deployment is ready and you're redirected to the Deployments page.
+1. Select **Open in playground** to start interacting with the model.
+1. You can return to the Deployments page, select the deployment, and note the endpoint's **Target** URL and the Secret **Key**. For more information on using the APIs, see the [reference](#chat-api-reference-for-cohere-models-deployed-as-a-service) section.
+1. You can always find the endpoint's details, URL, and access keys by navigating to the **Build** tab and selecting **Deployments** from the Components section.
+
+To learn about billing for the Cohere models deployed with pay-as-you-go, see [Cost and quota considerations for Cohere models deployed as a service](#cost-and-quota-considerations-for-models-deployed-as-a-service).
+
+### Consume the Cohere models as a service
+
+These models can be consumed using the chat API.
+
+1. On the **Build** page, select **Deployments**.
+
+1. Find and select the deployment you created.
+
+1. Copy the **Target** URL and the **Key** value.
+
+1. Cohere exposes two routes for inference with the Command R and Command R+ models. `v1/chat/completions` adheres to the Azure AI Generative Messages API schema, and `v1/chat` supports Cohere's native API schema.
+
+For more information on using the APIs, see the [reference](#chat-api-reference-for-cohere-models-deployed-as-a-service) section.
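For a quick end-to-end test from code, the following sketch calls the `v1/chat/completions` route with the `requests` library. The endpoint and key placeholders stand in for the **Target** URL and **Key** values noted earlier; the prompt text is just an example.

```python
# Minimal sketch: call a Cohere Command deployment through the v1/chat/completions route.
# Replace the placeholders with the Target URL and Key from your deployment.
import requests

endpoint = "https://<DEPLOYMENT_URI>/v1/chat/completions"  # placeholder
api_key = "<YOUR_KEY>"                                     # placeholder

payload = {
    "messages": [
        {"role": "user", "content": "Summarize the benefits of semantic search in two sentences."}
    ],
    "temperature": 0.3,
    "max_tokens": 256,
}

response = requests.post(
    endpoint,
    headers={"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"},
    json=payload,
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```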
+
+## Chat API reference for Cohere models deployed as a service
+
+### v1/chat/completions
+
+#### Request
+```
+ POST /v1/chat/completions HTTP/1.1
+ Host: <DEPLOYMENT_URI>
+ Authorization: Bearer <TOKEN>
+ Content-type: application/json
+```
+
+#### v1/chat/completions request schema
+
+Cohere Command R and Command R+ accept the following parameters for a `v1/chat/completions` response inference call:
+
+| Property | Type | Default | Description |
+| --- | --- | --- | --- |
+| `messages` | `array` | `None` | Text input for the model to respond to. |
+| `max_tokens` | `integer` | `None` | The maximum number of tokens the model generates as part of the response. Note: Setting a low value might result in incomplete generations. If not specified, generates tokens until end of sequence. |
+| `stop` | `array of strings` | `None` | The generated text is cut at the end of the earliest occurrence of a stop sequence. The sequence is included in the text.|
+| `stream` | `boolean` | `False` | When `true`, the response is a JSON stream of events. The final event contains the complete response, and has an `event_type` of `"stream-end"`. Streaming is beneficial for user interfaces that render the contents of the response piece by piece, as it gets generated. |
+| `temperature` | `float` | `0.3` |Use a lower value to decrease randomness in the response. Randomness can be further maximized by increasing the value of the `p` parameter. Min value is 0, and max is 2. |
+| `top_p` | `float` |`0.75` |Use a lower value to ignore less probable options. Set to 0 or 1.0 to disable. If both p and k are enabled, p acts after k. min value of 0.01, max value of 0.99.|
+| `frequency_penalty` | `float` | `0` |Used to reduce repetitiveness of generated tokens. The higher the value, the stronger a penalty is applied to previously present tokens, proportional to how many times they have already appeared in the prompt or prior generation. Min value of 0.0, max value of 1.0.|
+| `presence_penalty` | `float` |`0` |Used to reduce repetitiveness of generated tokens. Similar to `frequency_penalty`, except that this penalty is applied equally to all tokens that have already appeared, regardless of their exact frequencies. Min value of 0.0, max value of 1.0.|
+| `seed` | `integer` |`None` |If specified, the backend makes a best effort to sample tokens deterministically, such that repeated requests with the same seed and parameters should return the same result. However, determinism can't be guaranteed.|
+| `tools` | `list[Tool]` | `None` | A list of available tools (functions) that the model might suggest invoking before producing a text response. |
+
+`response_format` and `tool_choice` aren't yet supported parameters for the Command R and Command R+ models.
+++
+A System or User Message supports the following properties:
+
+| Property | Type | Default | Description |
+| --- | --- | --- | --- |
+| `role` | `enum` | Required | `role=system` or `role=user`. |
+|`content` |`string` |Required |Text input for the model to respond to. |
++
+An Assistant Message supports the following properties:
+
+| Property | Type | Default | Description |
+| --- | --- | --- | --- |
+| `role` | `enum` | Required | `role=assistant`|
+|`content` |`string` |Required |The contents of the assistant message. |
+|`tool_calls` |`array` |None |The tool calls generated by the model, such as function calls. |
++
+A Tool Message supports the following properties:
+
+| Property | Type | Default | Description |
+| --- | --- | --- | --- |
+| `role` | `enum` | Required | `role=tool`|
+|`content` |`string` |Required |The contents of the tool message. |
+|`tool_call_id` |`string` |None |Tool call that this message is responding to. |
++
+#### v1/chat/completions response schema
+
+The response payload is a dictionary with the following fields:
+
+| Key | Type | Description |
+| --- | --- | --- |
+| `id` | `string` | A unique identifier for the completion. |
+| `choices` | `array` | The list of completion choices the model generated for the input messages. |
+| `created` | `integer` | The Unix timestamp (in seconds) of when the completion was created. |
+| `model` | `string` | The model_id used for completion. |
+| `object` | `string` | chat.completion. |
+| `usage` | `object` | Usage statistics for the completion request. |
+
+The `choices` object is a dictionary with the following fields:
+
+| Key | Type | Description |
+| --- | --- | --- |
+| `index` | `integer` | Choice index. |
+| `messages` or `delta` | `string` | Chat completion result in messages object. When streaming mode is used, delta key is used. |
+| `finish_reason` | `string` | The reason the model stopped generating tokens. |
+
+The `usage` object is a dictionary with the following fields:
+
+| Key | Type | Description |
+| --- | --- | --- |
+| `prompt_tokens` | `integer` | Number of tokens in the prompt. |
+| `completion_tokens` | `integer` | Number of tokens generated in the completion. |
+| `total_tokens` | `integer` | Total tokens. |
++
+#### Examples
+
+Request:
+
+```json
+ "messages": [
+ {
+ "role": "user",
+ "content": "What is the weather like in Boston?"
+ },
+ {
+ "role": "assistant",
+ "tool_calls": [
+ {
+ "id": "call_ceRrx0tP7bYPTClugKrOgvh4",
+ "type": "function",
+ "function": {
+ "name": "get_current_weather",
+ "arguments": "{\"location\":\"Boston\"}"
+ }
+ }
+ ]
+ },
+ {
+ "role": "tool",
+ "content": "{\"temperature\":30}",
+ "tool_call_id": "call_ceRrx0tP7bYPTClugKrOgvh4"
+ }
+ ]
+```
+
+Response:
+
+```json
+ {
+ "id": "df23b9f7-e6bd-493f-9437-443c65d428a1",
+ "choices": [
+ {
+ "index": 0,
+ "finish_reason": "stop",
+ "message": {
+ "role": "assistant",
+                    "content": "Right now, the weather in Boston is cool, with temperatures of around 30°F. Stay warm!"
+ }
+ }
+ ],
+ "created": 1711734274,
+ "model": "command-r",
+ "object": "chat.completion",
+ "usage": {
+ "prompt_tokens": 744,
+ "completion_tokens": 23,
+ "total_tokens": 767
+ }
+ }
+```
+
+### v1/chat
+#### Request
+
+```
+ POST /v1/chat HTTP/1.1
+ Host: <DEPLOYMENT_URI>
+ Authorization: Bearer <TOKEN>
+ Content-type: application/json
+```
+
+#### v1/chat request schema
+
+Cohere Command R and Command R+ accept the following parameters for a `v1/chat` response inference call:
+
+|Key |Type |Default |Description |
+|---|---|---|---|
+|`message` |`string` |Required |Text input for the model to respond to. |
+|`chat_history` |`array of messages` |`None` |A list of previous messages between the user and the model, meant to give the model conversational context for responding to the user's message. |
+|`documents` |`array` |`None` |A list of relevant documents that the model can cite to generate a more accurate reply. Each document is a string-string dictionary. Keys and values from each document are serialized to a string and passed to the model. The resulting generation includes citations that reference some of these documents. Some suggested keys are "text", "author", and "date". For better generation quality, it's recommended to keep the total word count of the strings in the dictionary to under 300 words. An `_excludes` field (array of strings) can be optionally supplied to omit some key-value pairs from being shown to the model. The omitted fields still show up in the citation object. The `_excludes` field isn't passed to the model. See the [Document Mode](https://docs.cohere.com/docs/retrieval-augmented-generation-rag#document-mode) guide in the Cohere docs. |
+|`search_queries_only` |`boolean` |`false` |When `true`, the response only contains a list of generated search queries, but no search takes place, and no reply from the model to the user's `message` is generated.|
+|`stream` |`boolean` |`false` |When `true`, the response is a JSON stream of events. The final event contains the complete response, and has an `event_type` of `"stream-end"`. Streaming is beneficial for user interfaces that render the contents of the response piece by piece, as it gets generated.|
+|`max_tokens` |`integer` |None |The maximum number of tokens the model generates as part of the response. Note: Setting a low value might result in incomplete generations. If not specified, generates tokens until end of sequence.|
+|`temperature` |`float` |`0.3` |Use a lower value to decrease randomness in the response. Randomness can be further maximized by increasing the value of the `p` parameter. Min value is 0, and max is 2. |
+|`p` |`float` |`0.75` |Use a lower value to ignore less probable options. Set to 0 or 1.0 to disable. If both p and k are enabled, p acts after k. min value of 0.01, max value of 0.99.|
+|`k` |`float` |`0` |Specify the number of token choices the model uses to generate the next token. If both p and k are enabled, p acts after k. Min value is 0, max value is 500.|
+|`prompt_truncation` |`enum string` |`OFF` |Accepts `AUTO_PRESERVE_ORDER`, `AUTO`, `OFF`. Dictates how the prompt is constructed. With `prompt_truncation` set to `AUTO_PRESERVE_ORDER`, some elements from `chat_history` and `documents` are dropped to construct a prompt that fits within the model's context length limit. During this process the order of the documents and chat history are preserved. With `prompt_truncation` set to "OFF", no elements are dropped.|
+|`stop_sequences` |`array of strings` |`None` |The generated text is cut at the end of the earliest occurrence of a stop sequence. The sequence is included in the text. |
+|`frequency_penalty` |`float` |`0` |Used to reduce repetitiveness of generated tokens. The higher the value, the stronger a penalty is applied to previously present tokens, proportional to how many times they have already appeared in the prompt or prior generation. Min value of 0.0, max value of 1.0.|
+|`presence_penalty` |`float` |`0` |Used to reduce repetitiveness of generated tokens. Similar to `frequency_penalty`, except that this penalty is applied equally to all tokens that have already appeared, regardless of their exact frequencies. Min value of 0.0, max value of 1.0.|
+|`seed` |`integer` |`None` |If specified, the backend makes a best effort to sample tokens deterministically, such that repeated requests with the same seed and parameters should return the same result. However, determinism can't be guaranteed.|
+|`return_prompt` |`boolean ` |`false ` |Returns the full prompt that was sent to the model when `true`. |
+|`tools` |`array of objects` |`None` |_Field is subject to changes._ A list of available tools (functions) that the model might suggest invoking before producing a text response. When `tools` is passed (without `tool_results`), the `text` field in the response is `""` and the `tool_calls` field in the response is populated with a list of tool calls that need to be made. If no calls need to be made, the `tool_calls` array is empty.|
+|`tool_results` |`array of objects` |`None` |_Field is subject to changes._ A list of results from invoking tools recommended by the model in the previous chat turn. Results are used to produce a text response and are referenced in citations. When using `tool_results`, `tools` must be passed as well. Each `tool_result` contains information about how it was invoked, and a list of outputs in the form of dictionaries. Cohere's unique fine-grained citation logic requires the output to be a list. In case the output is just one item, for example, `{"status": 200}`, still wrap it inside a list. |
+
+The `chat_history` object requires the following fields:
+
+|Key |Type |Description |
+|---|---|---|
+|`role` |`enum string` |Takes `USER`, `SYSTEM`, or `CHATBOT`. |
+|`message` |`string` |Text contents of the message. |
+
+The `documents` object has the following optional fields:
+
+|Key |Type |Default| Description |
+|---|---|---|---|
+|`id` |`string` |`None` |Can be supplied to identify the document in the citations. This field isn't passed to the model. |
+|`_excludes` |`array of strings` |`None`| Can be optionally supplied to omit some key-value pairs from being shown to the model. The omitted fields still show up in the citation object. The `_excludes` field isn't passed to the model. |
+
+#### v1/chat response schema
+
+Response fields are fully documented on [Cohere's Chat API reference](https://docs.cohere.com/reference/chat). The response object always contains:
+
+|Key |Type |Description |
+|---|---|---|
+|`response_id` |`string` |Unique identifier for chat completion. |
+|`generation_id` |`string` |Unique identifier for chat completion, used with Feedback endpoint on Cohere's platform. |
+|`text` |`string` |Model's response to chat message input. |
+|`finish_reason` |`enum string` |Why the generation was completed. Can be any of the following values: `COMPLETE`, `ERROR`, `ERROR_TOXIC`, `ERROR_LIMIT`, `USER_CANCEL` or `MAX_TOKENS` |
+|`token_count` |`integer` |Count of tokens used. |
+|`meta` |`string` |API usage data, including current version and billable tokens. |
+
+<br/>
+
+#### Documents
+If `documents` are specified in the request, there are two other fields in the response:
+
+|Key |Type |Description |
+|---|---|---|
+|`documents ` |`array of objects` |Lists the documents that were cited in the response. |
+|`citations` |`array of objects` |Specifies which part of the answer was found in a given document. |
+
+`citations` is an array of objects with the following required fields:
+
+|Key |Type |Description |
+|---|---|---|
+|`start` |`integer` |The index of text that the citation starts at, counting from zero. For example, a generation of `Hello, world!` with a citation on `world` would have a start value of `7`. This is because the citation starts at `w`, which is the seventh character. |
+|`end` |`integer` |The index of text that the citation ends after, counting from zero. For example, a generation of `Hello, world!` with a citation on `world` would have an end value of `11`. This is because the citation ends after `d`, which is the eleventh character. |
+|`text` |`string` |The text of the citation. For example, a generation of `Hello, world!` with a citation of `world` would have a text value of `world`. |
+|`document_ids` |`array of strings` |Identifiers of documents cited by this section of the generated reply. |
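Because `start` and `end` are character offsets into the generated `text`, you can recover each cited span with a simple slice. The sketch below uses values from the grounded-generation sample response later in this article and treats `end` as an exclusive offset, which is consistent with that sample.

```python
# Map citation offsets back to spans of the generated text.
# Values are taken from the grounded-generation sample response in this article.
text = "Emperor penguins are the tallest species of penguin and they live in Antarctica."
citations = [
    {"start": 0, "end": 16, "text": "Emperor penguins", "document_ids": ["doc_0"]},
    {"start": 69, "end": 80, "text": "Antarctica.", "document_ids": ["doc_1"]},
]

for citation in citations:
    span = text[citation["start"]:citation["end"]]
    print(f"{span!r} is supported by {', '.join(citation['document_ids'])}")
```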
+
+#### Tools
+If `tools` are specified and invoked by the model, there's another field in the response:
+
+|Key |Type |Description |
+|---|---|---|
+|`tool_calls ` |`array of objects` |Contains the tool calls generated by the model. Use it to invoke your tools. |
+
+`tool_calls` is an array of objects with the following fields:
+
+|Key |Type |Description |
+|---|---|---|
+|`name` |`string` |Name of the tool to call. |
+|`parameters` |`object` |The name and value of the parameters to use when invoking a tool. |
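A minimal sketch of the dispatch step: map each returned tool call to a local function, then package the outputs as `tool_results` for the follow-up request. The `personal_shopper` function and its output format are illustrative and mirror the tool-use example later in this article.

```python
# Dispatch the model's tool_calls to local functions and build tool_results for the next turn.
# personal_shopper and its output are illustrative, mirroring the tool-use example in this article.
def personal_shopper(item: str, quantity: int) -> dict:
    return {"response": "Sale completed"}

available_tools = {"personal_shopper": personal_shopper}

tool_calls = [
    {"name": "personal_shopper", "parameters": {"item": "Apples", "quantity": 4}},
]

tool_results = []
for call in tool_calls:
    output = available_tools[call["name"]](**call["parameters"])
    # Each tool_result wraps the original call plus a list of outputs.
    tool_results.append({"call": call, "outputs": [output]})

print(tool_results)
```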
+
+#### Search_queries_only
+If `search_queries_only=TRUE` is specified in the request, there are two other fields in the response:
+
+|Key |Type |Description |
+|---|---|---|
+|`is_search_required` |`boolean` |Indicates whether a search is required before the model can respond to the message. |
+|`search_queries` |`array of objects` |Object that contains a list of search queries. |
+
+`search_queries` is an array of objects with the following fields:
+
+|Key |Type |Description |
+|---|---|---|
+|`text` |`string` |The text of the search query. |
+|`generation_id` |`string` |Unique identifier for the generated search query. Useful for submitting feedback. |
+
+#### Examples
+
+##### Chat - Completions
+The following example is a sample request for chat completions from the Cohere Command model.
+
+Request:
+
+```json
+ {
+ "chat_history": [
+ {"role":"USER", "message": "What is an interesting new role in AI if I don't have an ML background"},
+ {"role":"CHATBOT", "message": "You could explore being a prompt engineer!"}
+ ],
+ "message": "What are some skills I should have"
+ }
+```
+
+Response:
+
+```json
+ {
+ "response_id": "09613f65-c603-41e6-94b3-a7484571ac30",
+ "text": "Writing skills are very important for prompt engineering. Some other key skills are:\n- Creativity\n- Awareness of biases\n- Knowledge of how NLP models work\n- Debugging skills\n\nYou can also have some fun with it and try to create some interesting, innovative prompts to train an AI model that can then be used to create various applications.",
+ "generation_id": "6d31a57f-4d94-4b05-874d-36d0d78c9549",
+ "finish_reason": "COMPLETE",
+ "token_count": {
+ "prompt_tokens": 99,
+ "response_tokens": 70,
+ "total_tokens": 169,
+ "billed_tokens": 151
+ },
+ "meta": {
+ "api_version": {
+ "version": "1"
+ },
+ "billed_units": {
+ "input_tokens": 81,
+ "output_tokens": 70
+ }
+ }
+ }
+```
+
+##### Chat - Grounded generation and RAG capabilities
+
+Command R and Command R+ are trained for RAG via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. We introduce that prompt template via the `documents` parameter. The document snippets should be chunks, rather than long documents, typically around 100-400 words per chunk. Document snippets consist of key-value pairs. The keys should be short descriptive strings. The values can be text or semi-structured.
+
+Request:
+
+```json
+ {
+ "message": "Where do the tallest penguins live?",
+ "documents": [
+ {
+ "title": "Tall penguins",
+ "snippet": "Emperor penguins are the tallest."
+ },
+ {
+ "title": "Penguin habitats",
+ "snippet": "Emperor penguins only live in Antarctica."
+ }
+ ]
+ }
+```
+
+Response:
+
+```json
+ {
+ "response_id": "d7e72d2e-06c0-469f-8072-a3aa6bd2e3b2",
+ "text": "Emperor penguins are the tallest species of penguin and they live in Antarctica.",
+ "generation_id": "b5685d8d-00b4-48f1-b32f-baebabb563d8",
+ "finish_reason": "COMPLETE",
+ "token_count": {
+ "prompt_tokens": 615,
+ "response_tokens": 15,
+ "total_tokens": 630,
+ "billed_tokens": 22
+ },
+ "meta": {
+ "api_version": {
+ "version": "1"
+ },
+ "billed_units": {
+ "input_tokens": 7,
+ "output_tokens": 15
+ }
+ },
+ "citations": [
+ {
+ "start": 0,
+ "end": 16,
+ "text": "Emperor penguins",
+ "document_ids": [
+ "doc_0"
+ ]
+ },
+ {
+ "start": 69,
+ "end": 80,
+ "text": "Antarctica.",
+ "document_ids": [
+ "doc_1"
+ ]
+ }
+ ],
+ "documents": [
+ {
+ "id": "doc_0",
+ "snippet": "Emperor penguins are the tallest.",
+ "title": "Tall penguins"
+ },
+ {
+ "id": "doc_1",
+ "snippet": "Emperor penguins only live in Antarctica.",
+ "title": "Penguin habitats"
+ }
+ ]
+ }
+```
+
+##### Chat - Tool Use
+
+If invoking tools or generating a response based on tool results, use the following parameters.
+
+Request:
+
+```json
+ {
+ "message":"I'd like 4 apples and a fish please",
+ "tools":[
+ {
+ "name":"personal_shopper",
+ "description":"Returns items and requested volumes to purchase",
+ "parameter_definitions":{
+ "item":{
+ "description":"the item requested to be purchased, in all caps eg. Bananas should be BANANAS",
+ "type": "str",
+ "required": true
+ },
+ "quantity":{
+ "description": "how many of the items should be purchased",
+ "type": "int",
+ "required": true
+ }
+ }
+ }
+ ],
+
+ "tool_results": [
+ {
+ "call": {
+ "name": "personal_shopper",
+ "parameters": {
+ "item": "Apples",
+ "quantity": 4
+ },
+ "generation_id": "cb3a6e8b-6448-4642-b3cd-b1cc08f7360d"
+ },
+ "outputs": [
+ {
+ "response": "Sale completed"
+ }
+ ]
+ },
+ {
+ "call": {
+ "name": "personal_shopper",
+ "parameters": {
+ "item": "Fish",
+ "quantity": 1
+ },
+ "generation_id": "cb3a6e8b-6448-4642-b3cd-b1cc08f7360d"
+ },
+ "outputs": [
+ {
+ "response": "Sale not completed"
+ }
+ ]
+ }
+ ]
+ }
+```
+
+Response:
+
+```json
+ {
+ "response_id": "fa634da2-ccd1-4b56-8308-058a35daa100",
+ "text": "I've completed the sale for 4 apples. \n\nHowever, there was an error regarding the fish; it appears that there is currently no stock.",
+ "generation_id": "f567e78c-9172-4cfa-beba-ee3c330f781a",
+ "chat_history": [
+ {
+ "message": "I'd like 4 apples and a fish please",
+ "response_id": "fa634da2-ccd1-4b56-8308-058a35daa100",
+ "generation_id": "a4c5da95-b370-47a4-9ad3-cbf304749c04",
+ "role": "User"
+ },
+ {
+ "message": "I've completed the sale for 4 apples. \n\nHowever, there was an error regarding the fish; it appears that there is currently no stock.",
+ "response_id": "fa634da2-ccd1-4b56-8308-058a35daa100",
+ "generation_id": "f567e78c-9172-4cfa-beba-ee3c330f781a",
+ "role": "Chatbot"
+ }
+ ],
+ "finish_reason": "COMPLETE",
+ "token_count": {
+ "prompt_tokens": 644,
+ "response_tokens": 31,
+ "total_tokens": 675,
+ "billed_tokens": 41
+ },
+ "meta": {
+ "api_version": {
+ "version": "1"
+ },
+ "billed_units": {
+ "input_tokens": 10,
+ "output_tokens": 31
+ }
+ },
+ "citations": [
+ {
+ "start": 5,
+ "end": 23,
+ "text": "completed the sale",
+ "document_ids": [
+ ""
+ ]
+ },
+ {
+ "start": 113,
+ "end": 132,
+ "text": "currently no stock.",
+ "document_ids": [
+ ""
+ ]
+ }
+ ],
+ "documents": [
+ {
+ "response": "Sale completed"
+ }
+ ]
+ }
+```
+
+Once you run your functions and receive the tool outputs, pass them back to the model in the `tool_results` field, as shown in the preceding request, so that the model can generate a response for the user that's grounded in those results.
+
+##### Chat - Search queries
+If you're building a RAG agent, you can also use Cohere's Chat API to get search queries from Command. Specify `search_queries_only=TRUE` in your request.
++
+Request:
+
+```json
+ {
+ "message": "Which lego set has the greatest number of pieces?",
+ "search_queries_only": true
+ }
+```
+
+Response:
+
+```json
+ {
+ "response_id": "5e795fe5-24b7-47b4-a8bc-b58a68c7c676",
+ "text": "",
+ "finish_reason": "COMPLETE",
+ "meta": {
+ "api_version": {
+ "version": "1"
+ }
+ },
+ "is_search_required": true,
+ "search_queries": [
+ {
+ "text": "lego set with most pieces",
+ "generation_id": "a086696b-ad8e-4d15-92e2-1c57a3526e1c"
+ }
+ ]
+ }
+```
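Putting the pieces together, a two-step RAG loop first asks Command for search queries only, runs them against your own retrieval system, and then sends a second request with the retrieved snippets as `documents`. In the sketch below, `run_my_search` is a stand-in for your retriever, and the endpoint and key are placeholders for your deployment's **Target** URL and **Key**.

```python
# Two-step RAG sketch: get search queries from the model, retrieve, then answer with documents.
import requests

ENDPOINT = "https://<DEPLOYMENT_URI>/v1/chat"  # placeholder Target URL
HEADERS = {"Authorization": "Bearer <YOUR_KEY>", "Content-Type": "application/json"}

def call_chat(payload: dict) -> dict:
    # Thin wrapper around the deployment's native v1/chat route.
    response = requests.post(ENDPOINT, headers=HEADERS, json=payload, timeout=60)
    response.raise_for_status()
    return response.json()

def run_my_search(query: str) -> list:
    # Stand-in for your own retrieval system; return chunks as key-value snippets.
    return [{"title": "Example source", "snippet": f"Example passage relevant to: {query}"}]

def build_rag_answer(user_message: str) -> dict:
    # Step 1: ask only for search queries.
    first_pass = call_chat({"message": user_message, "search_queries_only": True})

    documents = []
    if first_pass.get("is_search_required"):
        for query in first_pass.get("search_queries", []):
            documents.extend(run_my_search(query["text"]))

    # Step 2: ground the final answer in the retrieved snippets.
    return call_chat({"message": user_message, "documents": documents})

print(build_rag_answer("Which lego set has the greatest number of pieces?")["text"])
```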
+
+##### More inference examples
+
+| **Sample Type** | **Sample Notebook** |
+|-|-|
+| CLI using CURL and Python web requests - Command R | [command-r.ipynb](https://aka.ms/samples/cohere-command-r/webrequests)|
+| CLI using CURL and Python web requests - Command R+ | [command-r-plus.ipynb](https://aka.ms/samples/cohere-command-r-plus/webrequests)|
+| OpenAI SDK (experimental) | [openaisdk.ipynb](https://aka.ms/samples/cohere-command/openaisdk) |
+| LangChain | [langchain.ipynb](https://aka.ms/samples/cohere/langchain) |
+| Cohere SDK | [cohere-sdk.ipynb](https://aka.ms/samples/cohere-python-sdk) |
+
+## Cost and quotas
+
+### Cost and quota considerations for models deployed as a service
+
+Cohere models deployed as a service are offered by Cohere through the Azure Marketplace and integrated with Azure AI Studio for use. You can find the Azure Marketplace pricing when deploying the model.
+
+Each time a project subscribes to a given offer from the Azure Marketplace, a new resource is created to track the costs associated with its consumption. The same resource is used to track costs associated with inference; however, multiple meters are available to track each scenario independently.
+
+For more information on how to track costs, see [monitor costs for models offered throughout the Azure Marketplace](./costs-plan-manage.md#monitor-costs-for-models-offered-through-the-azure-marketplace).
+
+Quota is managed per deployment. Each deployment has a rate limit of 200,000 tokens per minute and 1,000 API requests per minute. However, we currently limit one deployment per model per project. Contact Microsoft Azure Support if the current rate limits aren't sufficient for your scenarios.
+
+## Content filtering
+
+Models deployed as a service with pay-as-you-go are protected by [Azure AI Content Safety](../../ai-services/content-safety/overview.md). With Azure AI content safety, both the prompt and completion pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions. Learn more about [content filtering here](../concepts/content-filtering.md).
+
+## Next steps
+
+- [What is Azure AI Studio?](../what-is-ai-studio.md)
+- [Azure AI FAQ article](../faq.yml)
ai-studio Deploy Models Cohere Embed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-cohere-embed.md
+
+ Title: How to deploy Cohere Embed models with Azure AI Studio
+
+description: Learn how to deploy Cohere Embed models with Azure AI Studio.
+ Last updated : 04/02/2024
+# How to deploy Cohere Embed models with Azure AI Studio
++
+In this article, you learn how to use Azure AI Studio to deploy the Cohere Embed models as a service with pay-as-you-go billing.
+
+Cohere offers two Embed models in [Azure AI Studio](https://ai.azure.com). These models are available with pay-as-you-go, token-based billing through Models as a Service.
+* Cohere Embed v3 - English
+* Cohere Embed v3 - Multilingual
+
+You can browse the Cohere family of models in the [Model Catalog](model-catalog.md) by filtering on the Cohere collection.
+
+## Models
+
+
+### Cohere Embed v3 - English
+Cohere Embed English is the market's leading text representation model used for semantic search, retrieval-augmented generation (RAG), classification, and clustering. Embed English has top performance on the HuggingFace MTEB benchmark and performs well in various domains such as finance, legal, and general-purpose corpora.
+
+* Embed English has 1,024 dimensions.
+* The model's context window is 512 tokens.
+
+### Cohere Embed v3 - Multilingual
+Cohere Embed Multilingual is the market's leading text representation model used for semantic search, retrieval-augmented generation (RAG), classification, and clustering. Embed Multilingual supports 100+ languages and can be used to search within a language (for example, search with a French query on French documents) and across languages (for example, search with an English query on Chinese documents). Embed multilingual has SOTA performance on multilingual benchmarks such as Miracl.
+
+* Embed Multilingual has 1,024 dimensions.
+* The model's context window is 512 tokens.
+
+## Deploy with pay-as-you-go
+
+Certain models in the model catalog can be deployed as a service with pay-as-you-go, providing a way to consume them as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need. This deployment option doesn't require quota from your subscription.
+
+The previously mentioned Cohere models can be deployed as a service with pay-as-you-go, and are offered by Cohere through the Microsoft Azure Marketplace. Cohere can change or update the terms of use and pricing of these models.
+
+### Prerequisites
+
+- An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.
+- An [Azure AI hub resource](../how-to/create-azure-ai-resource.md).
+
+ > [!IMPORTANT]
+ > For Cohere family models, the pay-as-you-go model deployment offering is only available with AI hubs created in EastUS, EastUS2 or Sweden Central regions.
+
+- An [Azure AI project](../how-to/create-projects.md) in Azure AI Studio.
+- Azure role-based access controls are used to grant access to operations in Azure AI Studio. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the resource group. For more information on permissions, see [Role-based access control in Azure AI Studio](../concepts/rbac-ai-studio.md).
++
+### Create a new deployment
+
+To create a deployment:
+
+1. Sign in to [Azure AI Studio](https://ai.azure.com).
+1. Select **Model catalog** from the **Explore** tab and search for *Cohere*.
+
+ Alternatively, you can initiate a deployment by starting from your project in AI Studio. From the **Build** tab of your project, select **Deployments** > **+ Create**.
+
+1. In the model catalog, on the model's **Details** page, select **Deploy** and then **Pay-as-you-go**.
+
+ :::image type="content" source="../media/deploy-monitor/cohere-embed/embed-english-deploy-pay-as-you-go.png" alt-text="A screenshot showing how to deploy a model with the pay-as-you-go option." lightbox="../media/deploy-monitor/cohere-embed/embed-english-deploy-pay-as-you-go.png":::
+
+1. Select the project in which you want to deploy your model. To deploy the model, your project must be in the EastUS, EastUS2 or Sweden Central regions.
+1. In the deployment wizard, select the link to **Azure Marketplace Terms** to learn more about the terms of use.
+1. You can also select the **Marketplace offer details** tab to learn about pricing for the selected model.
+1. If it is your first time deploying the model in the project, you have to subscribe your project for the particular offering. This step requires that your account has the **Azure AI Developer role** permissions on the Resource Group, as listed in the prerequisites. Each project has its own subscription to the particular Azure Marketplace offering of the model, which allows you to control and monitor spending. Select **Subscribe and Deploy**. Currently you can have only one deployment for each model within a project.
+
+ :::image type="content" source="../media/deploy-monitor/cohere-embed/embed-english-marketplace-terms.png" alt-text="A screenshot showing the terms and conditions of a given model." lightbox="../media/deploy-monitor/cohere-embed/embed-english-marketplace-terms.png":::
+
+1. Once you subscribe the project to the particular Azure Marketplace offering, subsequent deployments of the _same_ offering in the _same_ project don't require subscribing again. If this scenario applies to you, there's a **Continue to deploy** option to select (currently, you can have only one deployment for each model within a project).
+
+ :::image type="content" source="../media/deploy-monitor/cohere-embed/embed-english-existing-subscription.png" alt-text="A screenshot showing a project that is already subscribed to the offering." lightbox="../media/deploy-monitor/cohere-embed/embed-english-existing-subscription.png":::
+
+1. Give the deployment a name. This name becomes part of the deployment API URL. This URL must be unique in each Azure region.
+
+ :::image type="content" source="../media/deploy-monitor/cohere-embed/embed-english-deployment-name.png" alt-text="A screenshot showing how to indicate the name of the deployment you want to create." lightbox="../media/deploy-monitor/cohere-embed/embed-english-deployment-name.png":::
+
+1. Select **Deploy**. Wait until the deployment is ready and you're redirected to the Deployments page.
+1. Select **Open in playground** to start interacting with the model.
+1. You can return to the Deployments page, select the deployment, and note the endpoint's **Target** URL and the Secret **Key**. For more information on using the APIs, see the [reference](#embed-api-reference-for-cohere-embed-models-deployed-as-a-service) section.
+1. You can always find the endpoint's details, URL, and access keys by navigating to the **Build** tab and selecting **Deployments** from the Components section.
+
+To learn about billing for the Cohere models deployed with pay-as-you-go, see [Cost and quota considerations for Cohere models deployed as a service](#cost-and-quota-considerations-for-models-deployed-as-a-service).
+
+### Consume the Cohere Embed models as a service
+
+These models can be consumed using the embed API.
+
+1. On the **Build** page, select **Deployments**.
+
+1. Find and select the deployment you created.
+
+1. Copy the **Target** URL and the **Key** value.
+
+1. Cohere exposes two routes for inference with the Embed v3 - English and Embed v3 - Multilingual models. `v1/embeddings` adheres to the Azure AI Generative Messages API schema, and `v1/embed` supports Cohere's native API schema.
+
+    For more information on using the APIs, see the [reference](#embed-api-reference-for-cohere-embed-models-deployed-as-a-service) section. A minimal request sketch follows these steps.
+
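+A minimal request sketch, assuming Python with the `requests` package, is shown next. The environment variable names and endpoint format are placeholders for the **Target** URL and **Key** you copied earlier, not values defined by the service.
+
+```python
+# Minimal sketch: call the v1/embeddings route of a Cohere Embed deployment.
+# AZUREAI_ENDPOINT_URL and AZUREAI_ENDPOINT_KEY are placeholder names for the
+# deployment's Target URL and Key copied from the Deployments page.
+import os
+
+import requests
+
+endpoint = os.environ["AZUREAI_ENDPOINT_URL"].rstrip("/")
+api_key = os.environ["AZUREAI_ENDPOINT_KEY"]
+
+response = requests.post(
+    f"{endpoint}/v1/embeddings",
+    headers={
+        "Authorization": f"Bearer {api_key}",
+        "Content-Type": "application/json",
+    },
+    json={"input": ["The quick brown fox"]},
+    timeout=30,
+)
+response.raise_for_status()
+payload = response.json()
+print(len(payload["data"][0]["embedding"]))  # the model returns 1,024-dimension vectors
+```
+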
+## Embed API reference for Cohere Embed models deployed as a service
+
+### v1/embeddings
+#### Request
+
+```
+ POST /v1/embeddings HTTP/1.1
+ Host: <DEPLOYMENT_URI>
+ Authorization: Bearer <TOKEN>
+ Content-type: application/json
+```
+
+#### v1/embeddings request schema
+
+Cohere Embed v3 - English and Embed v3 - Multilingual accept the following parameters for a `v1/embeddings` API call:
+
+| Property | Type | Default | Description |
+| | | | |
+|`input` |`array of strings` |Required |An array of strings for the model to embed. Maximum number of texts per call is 96. We recommend reducing the length of each text to be under 512 tokens for optimal quality. |
+
+#### v1/embeddings response schema
+
+The response payload is a dictionary with the following fields:
+
+| Key | Type | Description |
+| | | |
+| `id` | `string` | A unique identifier for the request. |
+| `object` | `enum` | The object type, which is always `list`. |
+| `data` | `array` | A list of embedding objects, one for each input string. |
+| `model` | `string` | The model_id used for creating the embeddings. |
+| `usage` | `object` | Usage statistics for the request. |
+
+The `data` object is a dictionary with the following fields:
+
+| Key | Type | Description |
+| | | |
+| `index` | `integer` |The index of the embedding in the list of embeddings. |
+| `object` | `enum` | The object type, which is always `embedding`. |
+| `embedding` | `array` | The embedding vector, which is a list of floats. |
+
+The `usage` object is a dictionary with the following fields:
+
+| Key | Type | Description |
+| | | |
+| `prompt_tokens` | `integer` | Number of tokens in the prompt. |
+| `completion_tokens` | `integer` | Number of tokens generated in the completion. |
+| `total_tokens` | `integer` | Total number of tokens for the request. |
++
+### v1/embeddings examples
+
+Request:
+
+```json
+ {
+ "input": ["hi"]
+ }
+```
+
+Response:
+
+```json
+ {
+ "id": "87cb11c5-2316-4c88-af3c-4b2b77ed58f3",
+ "object": "list",
+ "data": [
+ {
+ "index": 0,
+ "object": "embedding",
+ "embedding": [
+ 1.1513672,
+ 1.7060547,
+ ...
+ ]
+ }
+ ],
+ "model": "tmp",
+ "usage": {
+ "prompt_tokens": 1,
+ "completion_tokens": 0,
+ "total_tokens": 1
+ }
+ }
+```
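+
+Because embeddings from the same model can be compared directly, a common next step in semantic search is to score a query vector against document vectors. The following is an illustrative helper only (the function name and variables aren't part of the API); the vectors come from the `embedding` fields in the `data` array shown above.
+
+```python
+# Illustrative helper: cosine similarity between two embedding vectors
+# (lists of floats), for example response["data"][0]["embedding"] and
+# response["data"][1]["embedding"].
+import math
+
+
+def cosine_similarity(a, b):
+    dot = sum(x * y for x, y in zip(a, b))
+    norm_a = math.sqrt(sum(x * x for x in a))
+    norm_b = math.sqrt(sum(x * x for x in b))
+    return dot / (norm_a * norm_b)
+```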
+
+### v1/embed
+#### Request
+
+```
+ POST /v1/embed HTTP/1.1
+ Host: <DEPLOYMENT_URI>
+ Authorization: Bearer <TOKEN>
+ Content-type: application/json
+```
+
+#### v1/embed request schema
+
+Cohere Embed v3 - English and Embed v3 - Multilingual accept the following parameters for a `v1/embed` API call:
+
+|Key |Type |Default |Description |
+|||||
+|`texts` |`array of strings` |Required |An array of strings for the model to embed. Maximum number of texts per call is 96. We recommend reducing the length of each text to be under 512 tokens for optimal quality. |
+|`input_type` |`enum string` |Required |Prepends special tokens to differentiate each type from one another. You shouldn't mix different types together, except when mixing types for search and retrieval. In this case, embed your corpus with the `search_document` type and embed queries with the `search_query` type. <br/> `search_document` – In search use-cases, use search_document when you encode documents for embeddings that you store in a vector database. <br/> `search_query` – Use search_query when querying your vector database to find relevant documents. <br/> `classification` – Use classification when using embeddings as an input to a text classifier. <br/> `clustering` – Use clustering to cluster the embeddings.|
+|`truncate` |`enum string` |`NONE` |`NONE` – Returns an error when the input exceeds the maximum input token length. <br/> `START` – Discards the start of the input. <br/> `END` – Discards the end of the input. |
+|`embedding_types` |`array of strings` |`float` |Specifies the types of embeddings you want to get back. Can be one or more of the following types: `float`, `int8`, `uint8`, `binary`, `ubinary`. |
+
+#### v1/embed response schema
+
+Cohere Embed v3 - English and Embed v3 - Multilingual include the following fields in the response:
+
+|Key |Type |Description |
+||||
+|`response_type` |`enum` |The response type. Returns `embeddings_floats` when `embedding_types` isn't specified, or returns `embeddings_by_type` when `embedding_types` is specified. |
+|`id` |`string` |An identifier for the response. |
+|`embeddings` |`array` or `array of objects` |An array of embeddings, where each embedding is an array of floats with 1,024 elements. The length of the embeddings array is the same as the length of the original texts array.|
+|`texts` |`array of strings` |The text entries for which embeddings were returned. |
+|`meta` |`object` |API usage data, including current version and billable tokens. |
+
+For more information, see [https://docs.cohere.com/reference/embed](https://docs.cohere.com/reference/embed).
+
+### v1/embed examples
+
+#### embeddings_floats response
+
+Request:
+
+```json
+ {
+ "input_type": "clustering",
+ "truncate": "START",
+ "texts":["hi", "hello"]
+ }
+```
+
+Response:
+
+```json
+ {
+ "id": "da7a104c-e504-4349-bcd4-4d69dfa02077",
+ "texts": [
+ "hi",
+ "hello"
+ ],
+ "embeddings": [
+ [
+ ...
+ ],
+ [
+ ...
+ ]
+ ],
+ "meta": {
+ "api_version": {
+ "version": "1"
+ },
+ "billed_units": {
+ "input_tokens": 2
+ }
+ },
+ "response_type": "embeddings_floats"
+ }
+```
+
+#### embeddings_by_type response
+
+Request:
+
+```json
+ {
+ "input_type": "clustering",
+ "embedding_types": ["int8", "binary"],
+ "truncate": "START",
+ "texts":["hi", "hello"]
+ }
+```
+
+Response:
+
+```json
+ {
+ "id": "b604881a-a5e1-4283-8c0d-acbd715bf144",
+ "texts": [
+ "hi",
+ "hello"
+ ],
+ "embeddings": {
+ "binary": [
+ [
+ ...
+ ],
+ [
+ ...
+ ]
+ ],
+ "int8": [
+ [
+ ...
+ ],
+ [
+ ...
+ ]
+ ]
+ },
+ "meta": {
+ "api_version": {
+ "version": "1"
+ },
+ "billed_units": {
+ "input_tokens": 2
+ }
+ },
+ "response_type": "embeddings_by_type"
+ }
+```
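+
+As a hedged sketch of how these fields fit together, the following Python snippet requests compressed embedding types from the Cohere-native `v1/embed` route and reads the `embeddings_by_type` response. The environment variable names are placeholders for your deployment's Target URL and Key.
+
+```python
+# Minimal sketch: request int8 embeddings from v1/embed and read the
+# embeddings_by_type response. Endpoint and key names are placeholders.
+import os
+
+import requests
+
+endpoint = os.environ["AZUREAI_ENDPOINT_URL"].rstrip("/")
+api_key = os.environ["AZUREAI_ENDPOINT_KEY"]
+
+body = {
+    "texts": ["hi", "hello"],
+    "input_type": "search_document",  # embedding corpus documents for a vector store
+    "embedding_types": ["int8"],      # smaller vectors than the default float type
+    "truncate": "END",
+}
+resp = requests.post(
+    f"{endpoint}/v1/embed",
+    headers={"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"},
+    json=body,
+    timeout=30,
+)
+resp.raise_for_status()
+result = resp.json()
+int8_vectors = result["embeddings"]["int8"]  # keyed by embedding type in embeddings_by_type responses
+print(len(int8_vectors), "vectors returned for", result["texts"])
+```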
+
+#### More inference examples
+
+| **Sample Type** | **Sample Notebook** |
+|-|-|
+| CLI using CURL and Python web requests | [cohere-embed.ipynb](https://aka.ms/samples/embed-v3/webrequests)|
+| OpenAI SDK (experimental) | [openaisdk.ipynb](https://aka.ms/samples/cohere-embed/openaisdk) |
+| LangChain | [langchain.ipynb](https://aka.ms/samples/cohere-embed/langchain) |
+| Cohere SDK | [cohere-sdk.ipynb](https://aka.ms/samples/cohere-embed/cohere-python-sdk) |
+
+## Cost and quotas
+
+### Cost and quota considerations for models deployed as a service
+
+Cohere models deployed as a service are offered by Cohere through the Azure Marketplace and integrated with Azure AI Studio for use. You can find the Azure Marketplace pricing when deploying the model.
+
+Each time a project subscribes to a given offer from the Azure Marketplace, a new resource is created to track the costs associated with its consumption. The same resource is used to track costs associated with inference; however, multiple meters are available to track each scenario independently.
+
+For more information on how to track costs, see [monitor costs for models offered through the Azure Marketplace](./costs-plan-manage.md#monitor-costs-for-models-offered-through-the-azure-marketplace).
+
+Quota is managed per deployment. Each deployment has a rate limit of 200,000 tokens per minute and 1,000 API requests per minute. However, we currently limit one deployment per model per project. Contact Microsoft Azure Support if the current rate limits aren't sufficient for your scenarios.
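+
+If your workload approaches these limits, a simple client-side backoff on throttled (HTTP 429) responses can smooth out bursts. The following is a hedged sketch, not part of the service; all names are illustrative.
+
+```python
+# Minimal sketch: retry a request with exponential backoff when the
+# per-deployment rate limit (HTTP 429) is hit.
+import time
+
+import requests
+
+
+def post_with_backoff(url, headers, body, max_retries=5):
+    delay = 1.0
+    for _ in range(max_retries):
+        resp = requests.post(url, headers=headers, json=body, timeout=30)
+        if resp.status_code != 429:
+            resp.raise_for_status()
+            return resp.json()
+        time.sleep(delay)  # wait before retrying a throttled request
+        delay *= 2         # back off: 1s, 2s, 4s, ...
+    raise RuntimeError("Request still throttled after retries")
+```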
+
+## Content filtering
+
+Models deployed as a service with pay-as-you-go are protected by [Azure AI Content Safety](../../ai-services/content-safety/overview.md). With Azure AI content safety, both the prompt and completion pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions. Learn more about [content filtering here](../concepts/content-filtering.md).
+
+## Next steps
+
+- [What is Azure AI Studio?](../what-is-ai-studio.md)
+- [Azure AI FAQ article](../faq.yml)
aks Deployment Safeguards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/deployment-safeguards.md
After deploying your Kubernetes manifest, if the cluster isn't compliant with de
**Warning** ```
-PS C:\Users\testUser\Code> kubectl apply -f pod.yml
+$ kubectl apply -f pod.yml
Warning: [azurepolicy-k8sazurev2containerenforceprob-0e8a839bcd103e7b96a8] Container <my-container> in your Pod <my-pod> has no <livenessProbe>. Required probes: ["readinessProbe", "livenessProbe"] Warning: [azurepolicy-k8sazurev2containerenforceprob-0e8a839bcd103e7b96a8] Container <my-container> in your Pod <my-pod> has no <readinessProbe>. Required probes: ["readinessProbe", "livenessProbe"] Warning: [azurepolicy-k8sazurev1restrictedlabels-67c4210cc58f28acdfdb] Label <{"kubernetes.azure.com"}> is reserved for AKS use only
pod/my-pod created
**Enforcement** ```
-PS C:\Users\testUser\Code> kubectl apply -f pod.yml
+$ kubectl apply -f pod.yml
Error from server (Forbidden): error when creating ".\pod.yml": admission webhook "validation.gatekeeper.sh" denied the request: [azurepolicy-k8sazurev2containerenforceprob-0e8a839bcd103e7b96a8] Container <my-container> in your Pod <my-pod> has no <livenessProbe>. Required probes: ["readinessProbe", "livenessProbe"] [azurepolicy-k8sazurev2containerenforceprob-0e8a839bcd103e7b96a8] Container <my-container> in your Pod <my-pod> has no <readinessProbe>. Required probes: ["readinessProbe", "livenessProbe"] [azurepolicy-k8sazurev2containerallowedimag-1ff6d14b2f8da22019d7] Container image my-image for container my-container has not been allowed.
aks Operator Best Practices Container Image Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-container-image-management.md
Title: Operator best practices - Container image management in Azure Kubernetes
description: Learn the cluster operator best practices for how to manage and secure container images in Azure Kubernetes Service (AKS). Last updated 06/27/2023+++ # Best practices for container image management and security in Azure Kubernetes Service (AKS)
This article focused on how to secure your containers. To implement some of thes
[acr-base-image-update]: ../container-registry/container-registry-tutorial-base-image-update.md [security-center-containers]: ../security-center/container-security.md [security-center-acr]: ../security-center/defender-for-container-registries-introduction.md+
aks Operator Best Practices Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-network.md
description: Learn the cluster operator best practices for virtual network resources and connectivity in Azure Kubernetes Service (AKS). Last updated 03/18/2024+++
This article focused on network connectivity and security. For more information
[aks-configure-kubenet-networking]: configure-kubenet.md [concepts-node-selectors]: concepts-clusters-workloads.md#node-selectors [nodepool-upgrade]: manage-node-pools.md#upgrade-a-single-node-pool+
aks Operator Best Practices Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-storage.md
description: Learn the cluster operator best practices for storage, data encrypt
Last updated 04/28/2023+++
This article focused on storage best practices in AKS. For more information abou
[managed-disks]: ../virtual-machines/managed-disks-overview.md [best-practices-multi-region]: operator-best-practices-multi-region.md [remove-state]: operator-best-practices-multi-region.md#remove-service-state-from-inside-containers+
aks Outbound Rules Control Egress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/outbound-rules-control-egress.md
If you want to restrict how pods communicate between themselves and East-West tr
[use-network-policies]: ./use-network-policies.md +
aks Passive Cold Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/passive-cold-solution.md
If you're considering a different solution, see the following articles:
- [Active passive disaster recovery solution overview for Azure Kubernetes Service (AKS)](./active-passive-solution.md) - [Active active high availability solution overview for Azure Kubernetes Service (AKS)](./active-active-solution.md)+
aks Planned Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/planned-maintenance.md
The following example output shows the maintenance window for *aksManagedAutoUpg
[az-aks-maintenanceconfiguration-list]: /cli/azure/aks/maintenanceconfiguration#az_aks_maintenanceconfiguration_list [az-aks-maintenanceconfiguration-show]: /cli/azure/aks/maintenanceconfiguration#az_aks_maintenanceconfiguration_show [az-aks-maintenanceconfiguration-delete]: /cli/azure/aks/maintenanceconfiguration#az_aks_maintenanceconfiguration_delete+
aks Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/policy-reference.md
Title: Built-in policy definitions for Azure Kubernetes Service description: Lists Azure Policy built-in policy definitions for Azure Kubernetes Service. These built-in policy definitions provide common approaches to managing your Azure resources. Last updated 02/06/2024+++
the link in the **Version** column to view the source on the
- See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy). - Review the [Azure Policy definition structure](../governance/policy/concepts/definition-structure.md). - Review [Understanding policy effects](../governance/policy/concepts/effects.md).+
aks Quickstart Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/quickstart-dapr.md
Now that both the Node.js and Python applications are deployed, you watch messag
[hello-world-gh]: https://github.com/dapr/quickstarts/tree/master/tutorials/hello-kubernetes [azure-portal-cache]: https://portal.azure.com/#create/Microsoft.Cache [dapr-component-secrets]: https://docs.dapr.io/operations/components/component-secrets/+
aks Quickstart Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/quickstart-event-grid.md
description: Use Azure Event Grid to subscribe to Azure Kubernetes Service event
Last updated 06/22/2023+++ # Quickstart: Subscribe to Azure Kubernetes Service (AKS) events with Azure Event Grid
To learn more about AKS, and walk through a complete code to deployment example,
[az-group-delete]: /cli/azure/group#az_group_delete [sp-delete]: kubernetes-service-principal.md#other-considerations [remove-azresourcegroup]: /powershell/module/az.resources/remove-azresourcegroup+
aks Quickstart Helm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/quickstart-helm.md
description: Use Helm with AKS and Azure Container Registry to package and run a
Last updated 01/25/2024+++ # Quickstart: Develop on Azure Kubernetes Service (AKS) with Helm
For more information about using Helm, see the [Helm documentation][helm-documen
[helm-install]: https://helm.sh/docs/intro/install/ [sp-delete]: kubernetes-service-principal.md#other-considerations [acr-helm]: ../container-registry/container-registry-helm-repos.md+
aks Quotas Skus Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/quotas-skus-regions.md
description: Learn about the default quotas, restricted node VM SKU sizes, and region availability of the Azure Kubernetes Service (AKS). Last updated 01/12/2024+++ # Quotas, virtual machine size restrictions, and region availability in Azure Kubernetes Service (AKS)
You can increase certain default limits and quotas. If your resource supports an
[nodepool-upgrade]: use-multiple-node-pools.md#upgrade-a-node-pool [b-series-vm]: ../virtual-machines/sizes-b-series-burstable.md +
aks Reduce Latency Ppg https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/reduce-latency-ppg.md
Title: Use proximity placement groups to reduce latency for Azure Kubernetes Ser
description: Learn how to use proximity placement groups to reduce latency for your Azure Kubernetes Service (AKS) cluster workloads. Last updated 06/19/2023+++ # Use proximity placement groups to reduce latency for Azure Kubernetes Service (AKS) clusters
Learn more about [proximity placement groups][proximity-placement-groups].
[az-group-create]: /cli/azure/group#az_group_create [az-group-delete]: /cli/azure/group#az_group_delete [az-ppg-create]: /cli/azure/ppg#az_ppg_create+
aks Release Tracker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/release-tracker.md
The bottom half of the tracker shows the SDP process. The table has two views: o
<!-- LINKS - external --> [aks-release]: https://github.com/Azure/AKS/releases [release-tracker-webpage]: https://releases.aks.azure.com/webpage/https://docsupdatetracker.net/index.html+
aks Resize Node Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/resize-node-pool.md
description: Learn how to resize node pools for a cluster in Azure Kubernetes Se
Last updated 02/08/2023+++ #Customer intent: As a cluster operator, I want to resize my node pools so that I can run more or larger workloads.
After resizing a node pool by cordoning and draining, learn more about [using mu
[specify-disruption-budget]: https://kubernetes.io/docs/tasks/run-application/configure-pdb/ [disruptions]: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/ [use-multiple-node-pools]: create-node-pools.md+
aks Scale Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/scale-cluster.md
Title: Manually scale nodes in an Azure Kubernetes Service (AKS) cluster
description: Learn how to manually scale the number of nodes in an Azure Kubernetes Service (AKS) cluster. Last updated 01/22/2024+++ # Manually scale the node count in an Azure Kubernetes Service (AKS) cluster
In this article, you manually scaled an AKS cluster to increase or decrease the
[az-aks-nodepool-scale]: /cli/azure/aks/nodepool#az_aks_nodepool_scale [update-azaksnodepool]: /powershell/module/az.aks/update-azaksnodepool [service-quotas]: ./quotas-skus-regions.md#service-quotas-and-limits+
aks Scale Down Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/scale-down-mode.md
az aks nodepool add --enable-cluster-autoscaler --min-count 1 --max-count 10 --m
[ephemeral-os]: concepts-storage.md#ephemeral-os-disk [state-billing-azure-vm]: ../virtual-machines/states-billing.md [spot-node-pool]: spot-node-pool.md+
aks Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Kubernetes Service (AKS) description: Lists Azure Policy Regulatory Compliance controls available for Azure Kubernetes Service (AKS). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Last updated 02/06/2024+++
You can assign the built-ins for a **security control** individually to help mak
- Learn more about [Azure Policy Regulatory Compliance](../governance/policy/concepts/regulatory-compliance.md). - See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).+
aks Servicemesh About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/servicemesh-about.md
Title: About service meshes description: Obtain an overview of service meshes, supported scenarios, selection criteria, and next steps to explore.-+ Last updated 04/18/2023
For more details on service mesh standardization efforts, see:
[osm-about]: ./open-service-mesh-about.md [istio-about]: ./istio-about.md [aks-support-policy]: support-policies.md+
aks Spot Node Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/spot-node-pool.md
Title: Add an Azure Spot node pool to an Azure Kubernetes Service (AKS) cluster
description: Learn how to add an Azure Spot node pool to an Azure Kubernetes Service (AKS) cluster. Last updated 03/29/2023+++ #Customer intent: As a cluster operator or developer, I want to learn how to add an Azure Spot node pool to an AKS Cluster.
In this article, you learned how to add a Spot node pool to an AKS cluster. For
[use-multiple-node-pools]: create-node-pools.md [vmss-spot]: ../virtual-machine-scale-sets/use-spot.md [upgrade-cluster]: upgrade-cluster.md+
aks Start Stop Nodepools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/start-stop-nodepools.md
This article assumes you have an existing AKS cluster. If you need an AKS cluste
[az-aks-nodepool-stop]: /cli/azure/aks/nodepool#az_aks_nodepool_stop [az-aks-nodepool-start]:/cli/azure/aks/nodepool#az_aks_nodepool_start [az-aks-nodepool-show]: /cli/azure/aks/nodepool#az_aks_nodepool_show+
aks Static Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/static-ip.md
For more control over the network traffic to your applications, use the applicat
[az-aks-show]: /cli/azure/aks#az-aks-show [az-aks-create]: /cli/azure/aks#az-aks-create [az-group-create]: /cli/azure/group#az-group-create+
aks Stop Cluster Upgrade Api Breaking Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/stop-cluster-upgrade-api-breaking-changes.md
Last updated 10/19/2023+++ # Stop Azure Kubernetes Service (AKS) cluster upgrades automatically on API breaking changes
This article showed you how to stop AKS cluster upgrades automatically on API br
<!-- LINKS - internal --> [az-aks-update]: /cli/azure/aks#az_aks_update [container-insights]:/azure/azure-monitor/containers/container-insights-log-query#resource-logs+
aks Support Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/support-policies.md
Title: Support policies for Azure Kubernetes Service (AKS)
description: Learn about Azure Kubernetes Service (AKS) support policies, shared responsibility, and features that are in preview (or alpha or beta). Last updated 08/28/2023+++ #Customer intent: As a cluster operator or developer, I want to understand what AKS components I need to manage, what components are managed by Microsoft (including security patches), and networking and preview features.
When the root cause of a technical support issue is due to one or more upstream
* The workaround and details about an upgrade or another persistence of the solution. * Rough timelines for the issue's inclusion, based on the upstream release cadence.
-[add-ons]: integrations.md#add-ons
+[add-ons]: integrations.md#add-ons
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md
For the past release history, see [Kubernetes history](https://github.com/kubern
| K8s version | Upstream release | AKS preview | AKS GA | End of life | Platform support | |--|-|--||-|--|
+| 1.25 | Aug 2022 | Oct 2022 | Dec 2022 | Jan 14, 2024 | Until 1.29 GA |
| 1.26 | Dec 2022 | Feb 2023 | Apr 2023 | Mar 2024 | Until 1.30 GA | | 1.27* | Apr 2023 | Jun 2023 | Jul 2023 | Jul 2024, LTS until Jul 2025 | Until 1.31 GA | | 1.28 | Aug 2023 | Sep 2023 | Nov 2023 | Nov 2024 | Until 1.32 GA| | 1.29 | Dec 2023 | Feb 2024 | Mar 2024 | | Until 1.33 GA |
-| 1.30 | Apr 2024 | May 2024 | Jun 2024 | | Until 1.34 GA |
*\* Indicates the version is designated for Long Term Support*
Note the following important changes before you upgrade to any of the available
|Kubernetes Version | AKS Managed Addons | AKS Components | OS components | Breaking Changes | Notes |--||-||-||
+| 1.25 | Azure policy 1.0.1<br>Metrics-Server 0.6.3<br>KEDA 2.9.3<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Application Gateway Ingress Controller (AGIC) 1.5.3<br>Image Cleaner v1.1.1<br>Azure Workload identity v1.0.0<br>MDC Defender 1.0.56<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.7.0<br>KMS 0.5.0| Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 18.04 Cgroups V1 <br>ContainerD 1.7<br>Azure Linux 2.0<br>Cgroups V1<br>ContainerD 1.6<br>| Ubuntu 22.04 by default with cgroupv2 and Overlay VPA 0.13.0 |CgroupsV2 - If you deploy Java applications with the JDK, prefer to use JDK 11.0.16 and later or JDK 15 and later, which fully support cgroup v2
| 1.26 | Azure policy 1.3.0<br>Metrics-Server 0.6.3<br>KEDA 2.10.1<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Application Gateway Ingress Controller (AGIC) 1.5.3<br>Image Cleaner v1.2.3<br>Azure Workload identity v1.0.0<br>MDC Defender 1.0.56<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.7.0<br>KMS 0.5.0<br>azurefile-csi-driver 1.26.10<br>| Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 22.04 Cgroups V2 <br>ContainerD 1.7<br>Azure Linux 2.0<br>Cgroups V1<br>ContainerD 1.6<br>|azurefile-csi-driver 1.26.10 |None | 1.27 | Azure policy 1.3.0<br>azuredisk-csi driver v1.28.5<br>azurefile-csi driver v1.28.7<br>blob-csi v1.22.4<br>csi-attacher v4.3.0<br>csi-resizer v1.8.0<br>csi-snapshotter v6.2.2<br>snapshot-controller v6.2.2<br>Metrics-Server 0.6.3<br>Keda 2.11.2<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Application Gateway Ingress Controller (AGIC) 1.7.2<br>Image Cleaner v1.2.3<br>Azure Workload identity v1.0.0<br>MDC Defender 1.0.56<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.7.0<br>azurefile-csi-driver 1.28.7<br>KMS 0.5.0<br>CSI Secret store driver 1.3.4-1<br>|Cilium 1.13.10-1<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 22.04 Cgroups V2 <br>ContainerD 1.7 for Linux and 1.6 for Windows<br>Azure Linux 2.0<br>Cgroups V1<br>ContainerD 1.6<br>|Keda 2.11.2<br>Cilium 1.13.10-1<br>azurefile-csi-driver 1.28.7<br>azuredisk-csi driver v1.28.5<br>blob-csi v1.22.4<br>csi-attacher v4.3.0<br>csi-resizer v1.8.0<br>csi-snapshotter v6.2.2<br>snapshot-controller v6.2.2|Because of Ubuntu 22.04 FIPS certification status, we'll switch AKS FIPS nodes from 18.04 to 20.04 from 1.27 onwards. | 1.28 | Azure policy 1.3.0<br>azurefile-csi-driver 1.29.2<br>csi-node-driver-registrar v2.9.0<br>csi-livenessprobe 2.11.0<br>azuredisk-csi-linux v1.29.2<br>azuredisk-csi-windows v1.29.2<br>csi-provisioner v3.6.2<br>csi-attacher v4.5.0<br>csi-resizer v1.9.3<br>csi-snapshotter v6.2.2<br>snapshot-controller v6.2.2<br>Metrics-Server 0.6.3<br>KEDA 2.11.2<br>Open Service Mesh 1.2.7<br>Core DNS V1.9.4<br>Overlay VPA 0.13.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Application Gateway Ingress Controller (AGIC) 1.7.2<br>Image Cleaner v1.2.3<br>Azure Workload identity v1.2.0<br>MDC Defender Security Publisher 1.0.68<br>CSI Secret store driver 1.3.4-1<br>MDC Defender Old File Cleaner 1.3.68<br>MDC Defender Pod Collector 1.0.78<br>MDC Defender Low Level Collector 1.3.81<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.8.1|Cilium 1.13.10-1<br>CNI v1.4.43.1 (Default)/v1.5.11 (Azure CNI Overlay)<br> Cluster Autoscaler 1.27.3<br>Tigera-Operator 1.28.13| OS Image Ubuntu 22.04 Cgroups V2 <br>ContainerD 1.7.5 for Linux and 1.7.1 for Windows<br>Azure Linux 2.0<br>Cgroups V1<br>ContainerD 1.6<br>|azurefile-csi-driver 1.29.2<br>csi-resizer v1.9.3<br>csi-attacher v4.4.2<br>csi-provisioner v4.4.2<br>blob-csi v1.23.2<br>azurefile-csi driver v1.29.2<br>azuredisk-csi driver v1.29.2<br>csi-livenessprobe v2.11.0<br>csi-node-driver-registrar v2.9.0|None
New Supported Version List
Platform support policy is a reduced support plan for certain unsupported Kubernetes versions. During platform support, customers only receive support from Microsoft for AKS/Azure platform related issues. Any issues related to Kubernetes functionality and components aren't supported.
-Platform support policy applies to clusters in an n-3 version (where n is the latest supported AKS GA minor version), before the cluster drops to n-4. For example, Kubernetes v1.26 is considered platform support when v1.29 is the latest GA version. However, during the v1.30 GA release, v1.26 will then auto-upgrade to v1.27. If you are a running an n-2 version, the moment it becomes n-3 it also becomes deprecated, and you enter into the platform support policy.
+Platform support policy applies to clusters in an n-3 version (where n is the latest supported AKS GA minor version), before the cluster drops to n-4. For example, Kubernetes v1.25 is considered platform support when v1.28 is the latest GA version. However, during the v1.29 GA release, v1.25 will then auto-upgrade to v1.26. If you are a running an n-2 version, the moment it becomes n-3 it also becomes deprecated, and you enter into the platform support policy.
AKS relies on the releases and patches from [Kubernetes](https://kubernetes.io/releases/), which is an Open Source project that only supports a sliding window of three minor versions. AKS can only guarantee [full support](#kubernetes-version-support-policy) while those versions are being serviced upstream. Since there's no more patches being produced upstream, AKS can either leave those versions unpatched or fork. Due to this limitation, platform support doesn't support anything from relying on Kubernetes upstream.
For information on how to upgrade your cluster, see:
[get-azaksversion]: /powershell/module/az.aks/get-azaksversion [aks-tracker]: release-tracker.md [fleet-multi-cluster-upgrade]: /azure/kubernetes-fleet/update-orchestration+
aks Trusted Access Feature https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/trusted-access-feature.md
az aks trustedaccess rolebinding delete --name <role binding name> --resource-gr
[az-provider-register]: /cli/azure/provider#az-provider-register [aks-azure-backup]: ../backup/azure-kubernetes-service-backup-overview.md [azure-cli-install]: /cli/azure/install-azure-cli+
aks Tutorial Kubernetes App Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-app-update.md
Title: Kubernetes on Azure tutorial - Update an application
description: In this Azure Kubernetes Service (AKS) tutorial, you learn how to update an existing application deployment to AKS with a new version of the application code. Last updated 05/23/2023+++ #Customer intent: As a developer, I want to learn how to update an existing application deployment in an Azure Kubernetes Service (AKS) cluster so that I can maintain the application lifecycle.
Advance to the next tutorial to learn how to upgrade an AKS cluster to a new ver
[azure-powershell-install]: /powershell/azure/install-az-ps [get-azcontainerregistry]: /powershell/module/az.containerregistry/get-azcontainerregistry [connect-azcontainerregistry]: /powershell/module/az.containerregistry/connect-azcontainerregistry+
aks Tutorial Kubernetes Deploy Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-deploy-cluster.md
Title: Kubernetes on Azure tutorial - Create an Azure Kubernetes Service (AKS) c
description: In this Azure Kubernetes Service (AKS) tutorial, you learn how to create an AKS cluster and use kubectl to connect to the Kubernetes main node. Last updated 02/14/2024+++ #Customer intent: As a developer or IT pro, I want to learn how to create an Azure Kubernetes Service (AKS) cluster so that I can deploy and run my own applications.
In the next tutorial, you learn how to deploy an application to your cluster.
[import-azakscredential]: /powershell/module/az.aks/import-azakscredential [aks-k8s-rbac]: azure-ad-rbac.md [azd-auth-login]: /azure/developer/azure-developer-cli/reference#azd-auth-login+
aks Tutorial Kubernetes Prepare Acr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-prepare-acr.md
Title: Kubernetes on Azure tutorial - Create an Azure Container Registry and bui
description: In this Azure Kubernetes Service (AKS) tutorial, you create an Azure Container Registry instance and upload sample application container images. Last updated 11/28/2023+++ #Customer intent: As a developer, I want to learn how to create and use a container registry so that I can deploy my own applications to Azure Kubernetes Service.
In the next tutorial, you learn how to deploy a Kubernetes cluster in Azure.
[new-azcontainerregistry]: /powershell/module/az.containerregistry/new-azcontainerregistry [get-azcontainerregistryrepository]: /powershell/module/az.containerregistry/get-azcontainerregistryrepository [acr-tasks]: ../container-registry/container-registry-tasks-overview.md
-[az-acr-build]: /cli/azure/acr#az_acr_build
+[az-acr-build]: /cli/azure/acr#az_acr_build
aks Tutorial Kubernetes Prepare App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-prepare-app.md
Title: Kubernetes on Azure tutorial - Prepare an application for Azure Kubernete
description: In this Azure Kubernetes Service (AKS) tutorial, you learn how to prepare and build a multi-container app with Docker Compose that you can then deploy to AKS. Last updated 02/15/2023+++ #Customer intent: As a developer, I want to learn how to build a container-based application so that I can deploy the app to Azure Kubernetes Service.
In the next tutorial, you learn how to create a cluster using the `azd` template
<!-- LINKS - internal --> [aks-tutorial-prepare-acr]: ./tutorial-kubernetes-prepare-acr.md [aks-tutorial-deploy-cluster]: ./tutorial-kubernetes-deploy-cluster.md
-[azd]: /azure/developer/azure-developer-cli/install-azd
+[azd]: /azure/developer/azure-developer-cli/install-azd
aks Tutorial Kubernetes Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-scale.md
Title: Kubernetes on Azure tutorial - Scale applications in Azure Kubernetes Ser
description: In this Azure Kubernetes Service (AKS) tutorial, you learn how to scale nodes and pods and implement horizontal pod autoscaling. Last updated 03/05/2023+++ #Customer intent: As a developer or IT pro, I want to learn how to scale my applications in an Azure Kubernetes Service (AKS) cluster so I can provide high availability or respond to customer demand and application load.
In the next tutorial, you learn how to upgrade Kubernetes in your AKS cluster.
[set-azakscluster]: /powershell/module/az.aks/set-azakscluster [aks-tutorial-upgrade-kubernetes]: ./tutorial-kubernetes-upgrade-cluster.md [keda-addon]: ./keda-about.md+
aks Tutorial Kubernetes Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-upgrade-cluster.md
Title: Kubernetes on Azure tutorial - Upgrade an Azure Kubernetes Service (AKS)
description: In this Azure Kubernetes Service (AKS) tutorial, you learn how to upgrade an existing AKS cluster to the latest available Kubernetes version. Last updated 11/02/2023+++ #Customer intent: As a developer or IT pro, I want to learn how to upgrade an Azure Kubernetes Service (AKS) cluster so that I can use the latest version of Kubernetes and features.
For more information on AKS, see the [AKS overview][aks-intro]. For guidance on
[auto-upgrade-node-image]: ./auto-upgrade-node-image.md [node-image-upgrade]: ./node-image-upgrade.md [az-aks-update]: /cli/azure/aks#az_aks_update+
aks Upgrade Aks Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade-aks-cluster.md
Last updated 01/26/2024+++ # Upgrade an Azure Kubernetes Service (AKS) cluster
For a detailed discussion of upgrade best practices and other considerations, se
<!-- LINKS - external --> [kubernetes-drain]: https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/+
aks Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade-cluster.md
description: Learn the different ways to upgrade an Azure Kubernetes Service (AK
Last updated 02/08/2024+++ # Upgrade options for Azure Kubernetes Service (AKS) clusters
This article listed different upgrade options for AKS clusters. For a detailed d
[planned-maintenance]: planned-maintenance.md [specific-nodepool]: node-image-upgrade.md#upgrade-a-specific-node-pool [upgrade-operators-guide]: /azure/architecture/operator-guides/aks/aks-upgrade-practices+
aks Upgrade Windows 2019 2022 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade-windows-2019-2022.md
Last updated 09/12/2023+++ # Upgrade the OS version for your Azure Kubernetes Service (AKS) Windows workloads
In this article, you learned how to upgrade the OS version for Windows workloads
<!-- LINKS - External --> [aks-release-notes]: https://github.com/Azure/AKS/releases+
aks Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade.md
For more information what cluster operations may trigger specific upgrade events
[ts-subnet-full]: /troubleshoot/azure/azure-kubernetes/error-code-subnetisfull-upgrade [node-security-patches]: ./concepts-vulnerability-management.md#worker-nodes [node-updates-kured]: ./node-updates-kured.md+
aks Use Azure Ad Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-azure-ad-pod-identity.md
Last updated 08/15/2023+++ # Use Microsoft Entra pod-managed identities in Azure Kubernetes Service (Preview)
For more information on managed identities, see [Managed identities for Azure re
<!-- LINKS - external --> [RFC 1123]: https://tools.ietf.org/html/rfc1123 [DNS subdomain name]: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names+
aks Use Azure Dedicated Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-azure-dedicated-hosts.md
description: Learn how to create an Azure Dedicated Hosts Group and associate it
Last updated 03/10/2023+++ # Add Azure Dedicated Host to an Azure Kubernetes Service (AKS) cluster
In this article, you learned how to create an AKS cluster with a Dedicated host,
[az-vm-host-group-create]: /cli/azure/vm/host/group#az_vm_host_group_create [determine-host-based-on-vm-utilization]: ../virtual-machines/dedicated-host-general-purpose-skus.md [host-utilization-evaluate]: ../virtual-machines/dedicated-hosts-how-to.md#check-the-status-of-the-host+
aks Use Azure Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-azure-linux.md
description: Learn how to use the Azure Linux container host on Azure Kubernetes
Last updated 02/27/2024+++ # Use the Azure Linux container host for Azure Kubernetes Service (AKS)
To learn more about Azure Linux, see the [Azure Linux documentation][azurelinuxd
[auto-upgrade-aks]: auto-upgrade-cluster.md [kured]: node-updates-kured.md [azurelinuxdocumentation]: ../azure-linux/intro-azure-linux.md+
aks Use Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-azure-policy.md
description: Learn how to use Azure Policy to secure your Azure Kubernetes Servi
Last updated 06/20/2023+++
For more information about how Azure Policy works, see the following articles:
[custom-policy-tutorial-assign]: ../governance/policy/concepts/policy-for-kubernetes.md#assign-a-policy-definition [azure-policy-samples]: ../governance/policy/samples/index.md [azure-policy-definition-structure]: ../governance/policy/concepts/definition-structure.md+
aks Use Byo Cni https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-byo-cni.md
Learn more about networking in AKS in the following articles:
[deploy-bicep-template]: ../azure-resource-manager/bicep/deploy-cli.md [az-group-create]: /cli/azure/group#az_group_create [deploy-arm-template]: ../azure-resource-manager/templates/deploy-cli.md+
aks Use Cvm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-cvm.md
Title: Use Confidential Virtual Machines (CVM) in Azure Kubernetes Service (AKS)
description: Learn how to create Confidential Virtual Machines (CVM) node pools with Azure Kubernetes Service (AKS) Last updated 08/14/2023+++ # Use Confidential Virtual Machines (CVM) in Azure Kubernetes Service (AKS) cluster
In this article, you learned how to add a node pool with CVM to an AKS cluster.
[az-aks-nodepool-add]: /cli/azure/aks/nodepool#az_aks_nodepool_add [az-aks-nodepool-show]: /cli/azure/aks/nodepool#az_aks_nodepool_show [az-aks-nodepool-delete]: /cli/azure/aks/nodepool#az_aks_nodepool_delete+
aks Use Labels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-labels.md
Learn more about Kubernetes labels in the [Kubernetes labels documentation][kube
[az-aks-nodepool-update]: /cli/azure/aks/nodepool#az-aks-nodepool-update [create-or-update-os-sku]: /rest/api/aks/agent-pools/create-or-update#ossku [install-azure-cli]: /cli/azure/install-azure-cli+
aks Use Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-managed-identity.md
- devx-track-azurecli - ignite-2023 Last updated 03/07/2024+++ # Use a managed identity in Azure Kubernetes Service (AKS)
Now you can create your AKS cluster with your existing identities. Make sure to
[az-aks-show]: /cli/azure/aks#az_aks_show [az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create [managed-identity-operator]: ../role-based-access-control/built-in-roles.md#managed-identity-operator+
aks Use Metrics Server Vertical Pod Autoscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-metrics-server-vertical-pod-autoscaler.md
Title: Configure Metrics Server VPA in Azure Kubernetes Service (AKS)
description: Learn how to vertically autoscale your Metrics Server pods on an Azure Kubernetes Service (AKS) cluster. Last updated 03/27/2023+++ # Configure Metrics Server VPA in Azure Kubernetes Service (AKS)
Metrics Server is a component in the core metrics pipeline. For more information
<! INTERNAL LINKS > [horizontal-pod-autoscaler]: concepts-scale.md#horizontal-pod-autoscaler+
aks Use Network Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-network-policies.md
description: Learn how to secure traffic that flows in and out of pods by using
Last updated 03/28/2024+++ # Secure traffic between pods by using network policies in AKS
To learn more about policies, see [Kubernetes network policies][kubernetes-netwo
[az-extension-add]: /cli/azure/extension#az_extension_add [az-extension-update]: /cli/azure/extension#az_extension_update [dsr]: ../load-balancer/load-balancer-multivip-overview.md#rule-type-2-backend-port-reuse-by-using-floating-ip+
aks Use Node Public Ips https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-node-public-ips.md
Containers:
[cordon-and-drain]: resize-node-pool.md#cordon-the-existing-nodes [internal-lb-different-subnet]: internal-lb.md#specify-a-different-subnet [drain-nodes]: resize-node-pool.md#drain-the-existing-nodes+
aks Use Pod Sandboxing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-pod-sandboxing.md
Last updated 06/07/2023+++ # Pod Sandboxing (preview) with Azure Kubernetes Service (AKS)
Learn more about [Azure Dedicated hosts][azure-dedicated-hosts] for nodes with y
[az-aks-update]: /cli/azure/aks#az-aks-update [azurelinux-cluster-config]: cluster-configuration.md#azure-linux-container-host-for-aks [register-the-katavmisolationpreview-feature-flag]: #register-the-katavmisolationpreview-feature-flag+
aks Use Premium V2 Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-premium-v2-disks.md
Title: Enable Premium SSD v2 Disk support on Azure Kubernetes Service (AKS)
description: Learn how to enable and configure Premium SSD v2 Disks in an Azure Kubernetes Service (AKS) cluster. Last updated 04/25/2023+++
az disk update --subscription subscriptionName --resource-group myResourceGroup
[operator-best-practices-storage]: operator-best-practices-storage.md [az-disk-update]: /cli/azure/disk#az-disk-update [manage-resources-azure-portal]: ../azure-resource-manager/management/manage-resources-portal.md#open-resources
-[aks-two-resource-groups]: faq.md#why-are-two-resource-groups-created-with-aks
+[aks-two-resource-groups]: faq.md#why-are-two-resource-groups-created-with-aks
aks Use Psa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-psa.md
description: Learn how to enable and use Pod Security Admission with Azure Kuber
Last updated 09/12/2023+++ # Use Pod Security Admission in Azure Kubernetes Service (AKS)
In this article, you learned how to enable Pod Security Admission an AKS cluster
<!-- LINKS - Internal --> [kubernetes-psa]: https://kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-namespace-labels/ [kubernetes-pss]: https://kubernetes.io/docs/concepts/security/pod-security-standards/+
aks Use System Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-system-pools.md
Title: Use system node pools in Azure Kubernetes Service (AKS)
description: Learn how to create and manage system node pools in Azure Kubernetes Service (AKS) Last updated 12/26/2023+++
In this article, you learned how to create and manage system node pools in an AK
[update-node-pool-mode]: use-system-pools.md#update-existing-cluster-system-and-user-node-pools [start-stop-nodepools]: ./start-stop-nodepools.md [node-affinity]: operator-best-practices-advanced-scheduler.md#node-affinity+
aks Use Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-tags.md
description: Learn how to use Azure provider tags to track resources in Azure Ku
Last updated 06/16/2023+++ # Use Azure tags in Azure Kubernetes Service (AKS)
Learn more about [using labels in an AKS cluster][use-labels-aks].
[az-aks-nodepool-add]: /cli/azure/aks/nodepool#az-aks-nodepool-add [az-aks-nodepool-update]: /cli/azure/aks/nodepool#az-aks-nodepool-update [az-aks-update]: /cli/azure/aks#az-aks-update+
aks Use Ultra Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-ultra-disks.md
Title: Enable Ultra Disk support on Azure Kubernetes Service (AKS)
description: Learn how to enable and configure Ultra Disks in an Azure Kubernetes Service (AKS) cluster Last updated 07/26/2023+++
Once the persistent volume claim has been created and the disk successfully prov
[azure-disk-volume]: azure-disk-csi.md [operator-best-practices-storage]: operator-best-practices-storage.md [az-aks-nodepool-add]: /cli/azure/aks/nodepool#az_aks_nodepool_add+
aks Use Wasi Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-wasi-node-pools.md
description: Learn how to create a WebAssembly System Interface (WASI) node pool
Last updated 05/17/2023+++ # Create WebAssembly System Interface (WASI) node pools in Azure Kubernetes Service (AKS) to run your WebAssembly (WASM) workload (preview)
az aks nodepool delete --name mywasipool -g myresourcegroup --cluster-name myaks
[install-azure-cli]: /cli/azure/install-azure-cli [use-multiple-node-pools]: use-multiple-node-pools.md [use-system-pool]: use-system-pools.md+
aks Use Windows Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-windows-gpu.md
Title: Use GPUs for Windows node pools on Azure Kubernetes Service (AKS)
description: Learn how to use Windows GPUs for high performance compute or graphics-intensive workloads on Azure Kubernetes Service (AKS). Last updated 03/18/2024+++ #Customer intent: As a cluster administrator or developer, I want to create an AKS cluster that can use high-performance GPU-based VMs for compute-intensive workloads using a Windows os.
After creating your cluster, confirm that GPUs are schedulable in Kubernetes.
[az-extension-add]: /cli/azure/extension#az-extension-add [az-extension-update]: /cli/azure/extension#az-extension-update [NVadsA10]: /azure/virtual-machines/nva10v5-series+
aks Vertical Pod Autoscaler Api Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/vertical-pod-autoscaler-api-reference.md
description: Learn about the Vertical Pod Autoscaler API reference for Azure Kub
Last updated 09/26/2023+++ # Vertical Pod Autoscaler API reference
See [Vertical Pod Autoscaler][vertical-pod-autoscaler] to understand how to impr
<!-- INTERNAL LINKS --> [vertical-pod-autoscaler]: vertical-pod-autoscaler.md+
aks Vertical Pod Autoscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/vertical-pod-autoscaler.md
description: Learn how to vertically autoscale your pod on an Azure Kubernetes S
Last updated 09/28/2023+++ # Vertical Pod Autoscaling in Azure Kubernetes Service (AKS)
This article showed you how to automatically scale resource utilization, such as
[az-feature-register]: /cli/azure/feature#az-feature-register [az-feature-show]: /cli/azure/feature#az-feature-show [horizontal-pod-autoscaler-overview]: concepts-scale.md#horizontal-pod-autoscaler+
aks Virtual Nodes Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/virtual-nodes-cli.md
description: Learn how to use Azure CLI to create an Azure Kubernetes Services (
Last updated 08/28/2023+++
Virtual nodes are often one component of a scaling solution in AKS. For more inf
[az-provider-register]: /cli/azure/provider#az_provider_register [virtual-nodes-aks]: virtual-nodes.md [virtual-nodes-networking-aci]: ../container-instances/container-instances-virtual-network-concepts.md+
aks Virtual Nodes Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/virtual-nodes-portal.md
Title: Create virtual nodes in Azure Kubernetes Service (AKS) using the Azure po
description: Learn how to use the Azure portal to create an Azure Kubernetes Services (AKS) cluster that uses virtual nodes to run pods. Last updated 05/09/2023+++
Virtual nodes are one component of a scaling solution in AKS. For more informati
[aks-basic-ingress]: ingress-basic.md [az-provider-list]: /cli/azure/provider#az_provider_list [az-provider-register]: /cli/azure/provider#az_provider_register+
aks Windows Aks Customer Stories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/windows-aks-customer-stories.md
description: Learn how customers are using Windows Containers on AKS. Last updated 11/29/2023+++ # Windows AKS customer stories
Microsoft's E+D group, responsible for supporting products such as Teams and Off
The transition enabled Microsoft 365 developers to focus more on innovation and iterating quickly, leveraging the benefits of AKS like security-optimized hosting, automated compliance checks, and centralized capacity management, thereby accelerating development while optimizing resource utilization and costs.
-For more information visit [Microsoft's E+D Windows AKS customer story](https://customers.microsoft.com/story/1536483517282553662-modernizing-microsoft-365-windows-containers-azure-kubernetes-service).
+For more information visit [Microsoft's E+D Windows AKS customer story](https://customers.microsoft.com/story/1536483517282553662-modernizing-microsoft-365-windows-containers-azure-kubernetes-service).
aks Windows Aks Migration Modernization Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/windows-aks-migration-modernization-solutions.md
description: Read end-to-end walkthroughs of assessing your applications to find containerization and PaaS blockers. Last updated 01/31/2024+++ # Migration and modernization solutions for Windows AKS
For more information on their Windows container migration guidance, case studies
UnifyCloud introduces its CloudAtlas platform, an end-to-end migration tool, for streamlining the modernization of .NET applications to Windows containers on Azure Kubernetes Services.
-To explore more about their guidance on Windows container migration, along with valuable migration learnings and insights for modernization plans, you can [explore their article](https://techcommunity.microsoft.com/t5/containers/unifycloud-modernizing-your-net-apps-to-windows-containers-on/ba-p/4037872).
+To explore more about their guidance on Windows container migration, along with valuable migration learnings and insights for modernization plans, you can [explore their article](https://techcommunity.microsoft.com/t5/containers/unifycloud-modernizing-your-net-apps-to-windows-containers-on/ba-p/4037872).
aks Windows Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/windows-faq.md
description: See the frequently asked questions when you run Windows Server node
Last updated 03/27/2024+++ #Customer intent: As a cluster operator, I want to see frequently asked questions when running Windows node pools and application workloads.
To get started with Windows Server containers in AKS, see [Create a node pool th
[resource-groups]: faq.md#why-are-two-resource-groups-created-with-aks [dsr]: ../load-balancer/load-balancer-multivip-overview.md#rule-type-2-backend-port-reuse-by-using-floating-ip [windows-server-password]: /windows/security/threat-protection/security-policy-settings/password-must-meet-complexity-requirements#reference+
aks Windows Vs Linux Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/windows-vs-linux-containers.md
For more information on Windows containers, see the [Windows Server containers F
[gen-2-vms]: cluster-configuration.md#generation-2-virtual-machines [custom-node-config]: custom-node-configuration.md [custom-kubelet-parameters]: custom-node-configuration.md#kubelet-custom-configuration+
aks Workload Identity Deploy Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-deploy-cluster.md
Last updated 02/22/2024+++ # Deploy and configure workload identity on an Azure Kubernetes Service (AKS) cluster
In this article, you deployed a Kubernetes cluster and configured it to use a wo
[az-identity-federated-credential-create]: /cli/azure/identity/federated-credential#az-identity-federated-credential-create [workload-identity-migration]: workload-identity-migrate-from-pod-identity.md [azure-identity-libraries]: ../active-directory/develop/reference-v2-libraries.md+
aks Workload Identity Migrate From Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-migrate-from-pod-identity.md
Last updated 07/31/2023+++ # Migrate from pod managed-identity to workload identity
This article showed you how to set up your pod to authenticate using a workload
<!-- EXTERNAL LINKS --> [kubectl-describe]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe [kubelet-logs]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs+
aks Workload Identity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-overview.md
Last updated 11/17/2023+++ # Use Microsoft Entra Workload ID with Azure Kubernetes Service (AKS)
The following table summarizes our migration or deployment recommendations for w
[aks-virtual-nodes]: virtual-nodes.md [unsupported-regions-user-assigned-managed-identities]: ../active-directory/workload-identities/workload-identity-federation-considerations.md#unsupported-regions-user-assigned-managed-identities [general-federated-identity-credential-considerations]: ../active-directory/workload-identities/workload-identity-federation-considerations.md#general-federated-identity-credential-considerations+
app-service Configure Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-custom-container.md
This guide provides key concepts and instructions for containerization of Window
::: zone pivot="container-linux"
-This guide provides key concepts and instructions for containerization of Linux apps in App Service. If you've never used Azure App Service, follow the [custom container quickstart](quickstart-custom-container.md) and [tutorial](tutorial-custom-container.md) first. There's also a [multi-container app quickstart](quickstart-multi-container.md) and [tutorial](tutorial-multi-container-app.md).
+This guide provides key concepts and instructions for containerization of Linux apps in App Service. If you've never used Azure App Service, follow the [custom container quickstart](quickstart-custom-container.md) and [tutorial](tutorial-custom-container.md) first. There's also a [multi-container app quickstart](quickstart-multi-container.md) and [tutorial](tutorial-multi-container-app.md). For sidecar containers (preview), see [Tutorial: Configure a sidecar container for custom container in Azure App Service (preview)](tutorial-custom-container-sidecar.md).
::: zone-end
To use an image from a private registry, such as Azure Container Registry, run t
az webapp config container set --name <app-name> --resource-group <group-name> --docker-custom-image-name <image-name> --docker-registry-server-url <private-repo-url> --docker-registry-server-user <username> --docker-registry-server-password <password> ```
-For *\<username>* and *\<password>*, supply the login credentials for your private registry account.
+For *\<username>* and *\<password>*, supply the sign-in credentials for your private registry account.
## Use managed identity to pull image from Azure Container Registry
-Use the following steps to configure your web app to pull from ACR using managed identity. The steps will use system-assigned managed identity, but you can use user-assigned managed identity as well.
+Use the following steps to configure your web app to pull from ACR using managed identity. The steps use a system-assigned managed identity, but you can use a user-assigned managed identity as well.
1. Enable [the system-assigned managed identity](./overview-managed-identity.md) for the web app by using the [`az webapp identity assign`](/cli/azure/webapp/identity#az-webapp-identity-assign) command: ```azurecli-interactive az webapp identity assign --resource-group <group-name> --name <app-name> --query principalId --output tsv ```
- Replace `<app-name>` with the name you used in the previous step. The output of the command (filtered by the --query and --output arguments) is the service principal ID of the assigned identity, which you use shortly.
+ Replace `<app-name>` with the name you used in the previous step. The output of the command (filtered by the `--query` and `--output` arguments) is the service principal ID of the assigned identity, which you use shortly.
1. Get the resource ID of your Azure Container Registry: ```azurecli-interactive az acr show --resource-group <group-name> --name <registry-name> --query id --output tsv ```
- Replace `<registry-name>` with the name of your registry. The output of the command (filtered by the --query and --output arguments) is the resource ID of the Azure Container Registry.
+ Replace `<registry-name>` with the name of your registry. The output of the command (filtered by the `--query` and `--output` arguments) is the resource ID of the Azure Container Registry.
1. Grant the managed identity permission to access the container registry: ```azurecli-interactive
Use the following steps to configure your web app to pull from ACR using managed
Replace the following values: - `<app-name>` with the name of your web app. >[!Tip]
- > If you are using PowerShell console to run the commands, you will need to escape the strings in the `--generic-configurations` argument in this and the next step. For example: `--generic-configurations '{\"acrUseManagedIdentityCreds\": true'`
-1. (Optional) If your app uses a [user-assigned managed identity](overview-managed-identity.md#add-a-user-assigned-identity), make sure this is configured on the web app and then set an additional `acrUserManagedIdentityID` property to specify its client ID:
+ > If you're using the PowerShell console to run the commands, you need to escape the strings in the `--generic-configurations` argument in this and the next step. For example: `--generic-configurations '{\"acrUseManagedIdentityCreds\": true}'`
+1. (Optional) If your app uses a [user-assigned managed identity](overview-managed-identity.md#add-a-user-assigned-identity), make sure this is configured on the web app and then set the `acrUserManagedIdentityID` property to specify its client ID:
```azurecli-interactive az identity show --resource-group <group-name> --name <identity-name> --query clientId --output tsv
Use the following steps to configure your web app to pull from ACR using managed
az webapp config set --resource-group <group-name> --name <app-name> --generic-configurations '{"acrUserManagedIdentityID": "<client-id>"}' ```
-You are all set, and the web app will now use managed identity to pull from Azure Container Registry.
+You're all set, and the web app now uses managed identity to pull from Azure Container Registry.
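As a consolidated sketch of the steps above (the placeholders are yours to fill in, and the `AcrPull` role assignment is the step that grants pull permission; this is an illustrative summary, not a substitute for the full procedure):

```azurecli-interactive
# Enable the system-assigned identity and capture its principal ID
principalId=$(az webapp identity assign --resource-group <group-name> --name <app-name> --query principalId --output tsv)

# Look up the registry's resource ID
registryId=$(az acr show --resource-group <group-name> --name <registry-name> --query id --output tsv)

# Grant the identity permission to pull from the registry
az role assignment create --assignee "$principalId" --scope "$registryId" --role AcrPull

# Configure the app to authenticate to the registry with the managed identity
az webapp config set --resource-group <group-name> --name <app-name> --generic-configurations '{"acrUseManagedIdentityCreds": true}'
```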
## Use an image from a network protected registry
-To connect and pull from a registry inside a virtual network or on-premises, your app will need to be connected to a virtual network using the virtual network integration feature. This is also needed for Azure Container Registry with private endpoint. When your network and DNS resolution is configured, you enable the routing of the image pull through the virtual network by configuring the `vnetImagePullEnabled` site setting:
+To connect and pull from a registry inside a virtual network or on-premises, your app must integrate with a virtual network. This is also needed for Azure Container Registry with private endpoint. When your network and DNS resolution is configured, you enable the routing of the image pull through the virtual network by configuring the `vnetImagePullEnabled` site setting:
```azurecli-interactive az resource update --resource-group <group-name> --name <app-name> --resource-type "Microsoft.Web/sites" --set properties.vnetImagePullEnabled [true|false]
If you change your Docker container settings to point to a new container, it mig
## How container images are stored
-The first time you run a custom Docker image in App Service, App Service does a `docker pull` and pulls all image layers. These layers are stored on disk, like if you were using Docker on-premises. Each time the app restarts, App Service does a `docker pull`, but only pulls layers that have changed. If there have been no changes, App Service uses existing layers on the local disk.
+The first time you run a custom Docker image in App Service, App Service does a `docker pull` and pulls all image layers. These layers are stored on disk, as if you were using Docker on-premises. Each time the app restarts, App Service does a `docker pull`, but only pulls layers that have changed. If there are no changes, App Service uses existing layers on the local disk.
-If the app changes compute instances for any reason, such as scaling up and down the pricing tiers, App Service must pull down all layers again. The same is true if you scale out to add additional instances. There are also rare cases where the app instances might change without a scale operation.
+If the app changes compute instances for any reason, such as scaling up and down the pricing tiers, App Service must pull down all layers again. The same is true if you scale out to add more instances. There are also rare cases where the app instances might change without a scale operation.
## Configure port number
This method works both for single-container apps or multi-container apps, where
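App Service assumes a custom container listens on port 80 or 8080; if your container listens on a different port, the `WEBSITES_PORT` app setting tells App Service which port to forward requests to. A minimal sketch, assuming the container listens on port 8000:

```azurecli-interactive
az webapp config appsettings set --resource-group <group-name> --name <app-name> --settings WEBSITES_PORT=8000
```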
You can use the *C:\home* directory in your custom container file system to persist files across restarts and share them across instances. The `C:\home` directory is provided to enable your custom container to access persistent storage.
-When persistent storage is disabled, then writes to the `C:\home` directory are not persisted across app restarts or across multiple instances. When persistent storage is enabled, all writes to the `C:\home` directory are persisted and can be accessed by all instances of a scaled-out app. Additionally, any contents inside the `C:\home` directory of the container are overwritten by any existing files already present on the persistent storage when the container starts.
+When persistent storage is disabled, then writes to the `C:\home` directory aren't persisted across app restarts or across multiple instances. When persistent storage is enabled, all writes to the `C:\home` directory are persisted and can be accessed by all instances of a scaled-out app. Additionally, any contents inside the `C:\home` directory of the container are overwritten by any existing files already present on the persistent storage when the container starts.
-The only exception is the `C:\home\LogFiles` directory, which is used to store the container and application logs. This folder will always persist upon app restarts if [application logging is enabled](troubleshoot-diagnostic-logs.md?#enable-application-logging-windows) with the **File System** option, independently of the persistent storage being enabled or disabled. In other words, enabling or disabling the persistent storage will not affect the application logging behavior.
+The only exception is the `C:\home\LogFiles` directory, which is used to store the container and application logs. This folder always persists upon app restarts if [application logging is enabled](troubleshoot-diagnostic-logs.md?#enable-application-logging-windows) with the **File System** option, independently of the persistent storage being enabled or disabled. In other words, enabling or disabling the persistent storage doesn't affect the application logging behavior.
By default, persistent storage is *disabled* on Windows custom containers. To enable it, set the `WEBSITES_ENABLE_APP_SERVICE_STORAGE` app setting value to `true` via the [Cloud Shell](https://shell.azure.com). In Bash:
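A minimal sketch of that Bash command with the Azure CLI (resource names are placeholders):

```azurecli-interactive
az webapp config appsettings set --resource-group <group-name> --name <app-name> --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=true
```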
Set-AzWebApp -ResourceGroupName <group-name> -Name <app-name> -AppSettings @{"WE
::: zone pivot="container-linux"
-You can use the */home* directory in your custom container file system to persist files across restarts and share them across instances. The `/home` directory is provided to enable your custom container to access persistent storage. Saving data within `/home` will contribute to the [storage space quota](../azure-resource-manager/management/azure-subscription-service-limits.md#app-service-limits) included with your App Service Plan.
+You can use the */home* directory in your custom container file system to persist files across restarts and share them across instances. The `/home` directory is provided to enable your custom container to access persistent storage. Saving data within `/home` contributes to the [storage space quota](../azure-resource-manager/management/azure-subscription-service-limits.md#app-service-limits) included with your App Service Plan.
-When persistent storage is disabled, then writes to the `/home` directory are not persisted across app restarts or across multiple instances. When persistent storage is enabled, all writes to the `/home` directory are persisted and can be accessed by all instances of a scaled-out app. Additionally, any contents inside the `/home` directory of the container are overwritten by any existing files already present on the persistent storage when the container starts.
+When persistent storage is disabled, then writes to the `/home` directory aren't persisted across app restarts or across multiple instances. When persistent storage is enabled, all writes to the `/home` directory are persisted and can be accessed by all instances of a scaled-out app. Additionally, any contents inside the `/home` directory of the container are overwritten by any existing files already present on the persistent storage when the container starts.
-The only exception is the `/home/LogFiles` directory, which is used to store the container and application logs. This folder will always persist upon app restarts if [application logging is enabled](troubleshoot-diagnostic-logs.md#enable-application-logging-linuxcontainer) with the **File System** option, independently of the persistent storage being enabled or disabled. In other words, enabling or disabling the persistent storage will not affect the application logging behavior.
+The only exception is the `/home/LogFiles` directory, which is used to store the container and application logs. This folder always persists upon app restarts if [application logging is enabled](troubleshoot-diagnostic-logs.md#enable-application-logging-linuxcontainer) with the **File System** option, independently of the persistent storage being enabled or disabled. In other words, enabling or disabling the persistent storage doesn't affect the application logging behavior.
-It is recommended to write data to `/home` or a [mounted Azure storage path](configure-connect-to-azure-storage.md?tabs=portal&pivots=container-linux). Data written outside these paths will not be persistent during restarts and will be saved to platform-managed host disk space separate from the App Service Plans file storage quota.
+It's recommended to write data to `/home` or a [mounted Azure storage path](configure-connect-to-azure-storage.md?tabs=portal&pivots=container-linux). Data written outside these paths isn't persistent during restarts and is saved to platform-managed host disk space separate from the App Service plan's file storage quota.
By default, persistent storage is *enabled* on Linux custom containers. To disable it, set the `WEBSITES_ENABLE_APP_SERVICE_STORAGE` app setting value to `false` via the [Cloud Shell](https://shell.azure.com). In Bash:
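A minimal sketch of that Bash command with the Azure CLI (resource names are placeholders):

```azurecli-interactive
az webapp config appsettings set --resource-group <group-name> --name <app-name> --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=false
```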
Set-AzWebApp -ResourceGroupName <group-name> -Name <app-name> -AppSettings @{"WE
App Service terminates TLS/SSL at the front ends. That means that TLS/SSL requests never get to your app. You don't need to, and shouldn't, implement any support for TLS/SSL in your app.
-The front ends are located inside Azure data centers. If you use TLS/SSL with your app, your traffic across the Internet will always be safely encrypted.
+The front ends are located inside Azure data centers. If you use TLS/SSL with your app, your traffic across the Internet is always safely encrypted.
::: zone pivot="container-windows"
The new keys at each restart might reset ASP.NET forms authentication and view s
## Connect to the container
-You can connect to your Windows container directly for diagnostic tasks by navigating to `https://<app-name>.scm.azurewebsites.net/` and choosing the SSH option. A direct SSH session with your container is established in which you can run commands inside your container
+You can connect to your Windows container directly for diagnostic tasks by navigating to `https://<app-name>.scm.azurewebsites.net/` and choosing the SSH option. A direct SSH session with your container is established in which you can run commands inside your container.
- It functions separately from the graphical browser above it, which only shows the files in your [shared storage](#use-persistent-shared-storage). - In a scaled-out app, the SSH session is connected to one of the container instances. You can select a different instance from the **Instance** dropdown in the top Kudu menu.
You can connect to your Windows container directly for diagnostic tasks by navig
## Access diagnostic logs
-App Service logs actions by the Docker host as well as activities from within the container. Logs from the Docker host (platform logs) are shipped by default, but application logs or web server logs from within the container need to be enabled manually. For more information, see [Enable application logging](troubleshoot-diagnostic-logs.md#enable-application-logging-linuxcontainer) and [Enable web server logging](troubleshoot-diagnostic-logs.md#enable-web-server-logging).
+App Service logs actions by the Docker host and activities from within the container. Logs from the Docker host (platform logs) are shipped by default, but application logs or web server logs from within the container need to be enabled manually. For more information, see [Enable application logging](troubleshoot-diagnostic-logs.md#enable-application-logging-linuxcontainer) and [Enable web server logging](troubleshoot-diagnostic-logs.md#enable-web-server-logging).
There are several ways to access Docker logs:
There are several ways to access Docker logs:
### In Azure portal
-Docker logs are displayed in the portal, in the **Container Settings** page of your app. The logs are truncated, but you can download all the logs clicking **Download**.
+Docker logs are displayed in the portal, in the **Container Settings** page of your app. The logs are truncated, but you can download all the logs by selecting **Download**.
### From Kudu
-Navigate to `https://<app-name>.scm.azurewebsites.net/DebugConsole` and click the **LogFiles** folder to see the individual log files. To download the entire **LogFiles** directory, click the **Download** icon to the left of the directory name. You can also access this folder using an FTP client.
+Navigate to `https://<app-name>.scm.azurewebsites.net/DebugConsole` and select the **LogFiles** folder to see the individual log files. To download the entire **LogFiles** directory, select the **Download** icon to the left of the directory name. You can also access this folder using an FTP client.
-In the SSH terminal, you can't access the `C:\home\LogFiles` folder by default because persistent shared storage is not enabled. To enable this behavior in the console terminal, [enable persistent shared storage](#use-persistent-shared-storage).
+In the SSH terminal, you can't access the `C:\home\LogFiles` folder by default because persistent shared storage isn't enabled. To enable this behavior in the console terminal, [enable persistent shared storage](#use-persistent-shared-storage).
If you try to download the Docker log that is currently in use by using an FTP client, you might get an error because of a file lock.
In PowerShell:
Set-AzWebApp -ResourceGroupName <group-name> -Name <app-name> -AppSettings @{"WEBSITE_MEMORY_LIMIT_MB"=2000} ```
-The value is defined in MB and must be less and equal to the total physical memory of the host. For example, in an App Service plan with 8 GB RAM, the cumulative total of `WEBSITE_MEMORY_LIMIT_MB` for all the apps must not exceed 8 GB. Information on how much memory is available for each pricing tier can be found in [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/windows/), in the **Premium v3 service plan** section.
+The value is defined in MB and must be less than or equal to the total physical memory of the host. For example, in an App Service plan with 8 GB RAM, the cumulative total of `WEBSITE_MEMORY_LIMIT_MB` for all the apps must not exceed 8 GB. Information on how much memory is available for each pricing tier can be found in [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/windows/), in the **Premium v3 service plan** section.
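For a Bash equivalent of the PowerShell example, a minimal sketch with the Azure CLI (the 2000 MB value mirrors the example above; resource names are placeholders):

```azurecli-interactive
az webapp config appsettings set --resource-group <group-name> --name <app-name> --settings WEBSITE_MEMORY_LIMIT_MB=2000
```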
## Customize the number of compute cores
The processors might be multicore or hyperthreading processors. Information on h
## Customize health ping behavior
-App Service considers a container to be successfully started when the container starts and responds to an HTTP ping. The health ping request contains the header `User-Agent= "App Service Hyper-V Container Availability Check"`. If the container starts but does not respond to a ping after a certain amount of time, App Service logs an event in the Docker log, saying that the container didn't start.
+App Service considers a container to be successfully started when the container starts and responds to an HTTP ping. The health ping request contains the header `User-Agent= "App Service Hyper-V Container Availability Check"`. If the container starts but doesn't respond to a ping after a certain amount of time, App Service logs an event in the Docker log, saying that the container didn't start.
If your application is resource-intensive, the container might not respond to the HTTP ping in time. To control the actions when HTTP pings fail, set the `CONTAINER_AVAILABILITY_CHECK_MODE` app setting. You can set it via the [Cloud Shell](https://shell.azure.com). In Bash:
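The setting is applied like any other app setting. A minimal sketch with the Azure CLI; the `ReportOnly` value shown here is an illustrative assumption, so confirm the supported values for this setting before relying on it:

```azurecli-interactive
az webapp config appsettings set --resource-group <group-name> --name <app-name> --settings CONTAINER_AVAILABILITY_CHECK_MODE=ReportOnly
```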
Secure Shell (SSH) is commonly used to execute administrative commands remotely
> - `Ciphers` must include at least one item in this list: `aes128-cbc,3des-cbc,aes256-cbc`. > - `MACs` must include at least one item in this list: `hmac-sha1,hmac-sha1-96`.
-2. Create an entrypoint script with the name `entrypoint.sh` (or change any existing entrypoint file) and add the command to start the SSH service, along with the application startup command. The following example demonstrates starting a Python application. Please replace the last command according to the project language/stack:
+2. Create an entrypoint script with the name `entrypoint.sh` (or change any existing entrypoint file) and add the command to start the SSH service, along with the application startup command. The following example demonstrates starting a Python application. Replace the last command according to the project language/stack:
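A minimal sketch of such an entrypoint script, assuming a Debian-based image and a Python app started with `python app.py`; replace the last line with your own startup command:

```bash
#!/bin/sh
set -e

# Start the SSH server that App Service uses for the in-browser SSH session
service ssh start

# Start the application; replace this line according to your language/stack
python app.py
```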
### [Debian](#tab/debian)
Secure Shell (SSH) is commonly used to execute administrative commands remotely
```
-3. Add to the Dockerfile the following instructions according to the base image distribution. The same will copy the new files, install OpenSSH server, set proper permissions and configure the custom entrypoint, and expose the ports required by the application and SSH server, respectively:
+3. Add to the Dockerfile the following instructions according to the base image distribution. These instructions copy the new files, install OpenSSH server, set proper permissions and configure the custom entrypoint, and expose the ports required by the application and SSH server, respectively:
### [Debian](#tab/debian)
Secure Shell (SSH) is commonly used to execute administrative commands remotely
> [!NOTE]
- > The root password must be exactly `Docker!` as it is used by App Service to let you access the SSH session with the container. This configuration doesn't allow external connections to the container. Port 2222 of the container is accessible only within the bridge network of a private virtual network and is not accessible to an attacker on the internet.
+ > The root password must be exactly `Docker!` as it's used by App Service to let you access the SSH session with the container. This configuration doesn't allow external connections to the container. Port 2222 of the container is accessible only within the bridge network of a private virtual network and isn't accessible to an attacker on the internet.
4. Rebuild and push the Docker image to the registry, and then test the Web App SSH feature on Azure portal.
-For further troubleshooting additional information is available at the Azure App Service OSS blog: [Enabling SSH on Linux Web App for Containers](https://azureossd.github.io/2022/04/27/2022-Enabling-SSH-on-Linux-Web-App-for-Containers/https://docsupdatetracker.net/index.html#troubleshooting)
+Further troubleshooting information is available at the Azure App Service OSS blog: [Enabling SSH on Linux Web App for Containers](https://azureossd.github.io/2022/04/27/2022-Enabling-SSH-on-Linux-Web-App-for-Containers/index.html#troubleshooting)
## Access diagnostic logs
wordpress:
### Preview limitations
-Multi-container is currently in preview. The following App Service platform features are not supported:
+Multi-container is currently in preview. The following App Service platform features aren't supported:
- Authentication / Authorization - Managed Identities - CORS-- Virtual network integration is not supported for Docker Compose scenarios
+- Virtual network integration isn't supported for Docker Compose scenarios
- Docker Compose on Azure App Service currently has a limit of 4,000 characters. ### Docker Compose options
The following lists show supported and unsupported Docker Compose configuration
::: zone-end
-Or, see additional resources:
+Or, see more resources:
- [Environment variables and app settings reference](reference-app-settings.md) - [Load certificate in Windows/Linux containers](configure-ssl-certificate-in-code.md#load-certificate-in-linuxwindows-containers)
app-service Deploy Ci Cd Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-ci-cd-custom-container.md
Once you authorize your Azure account with GitHub, **select** the **Organization
::: zone pivot="container-linux" ## 3. Configure registry settings
+> [!NOTE]
+> Sidecar containers (preview) will supersede multi-container (Docker Compose) apps in App Service. To get started, see [Tutorial: Configure a sidecar container for custom container in Azure App Service (preview)](tutorial-custom-container-sidecar.md).
+ To deploy a multi-container (Docker Compose) app, **select** **Docker Compose** in **Container Type**. If you don't see the **Container Type** dropdown, scroll back up to **Source** and **select** **Container Registry**.
app-service How To Side By Side Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-side-by-side-migrate.md
az rest --method get --uri "${ASE_ID}?api-version=2022-03-01" --query properties
If the step is in progress, you get a status of `Migrating`. After you get a status of `Ready`, run the following command to view your new outbound IPs. If you don't see the new IPs immediately, wait a few minutes and try again. ```azurecli
-az rest --method get --uri "${ASE_ID}/configurations/networking?api-version=2022-03-01 --query properties.windowsOutboundIpAddresses"
+az rest --method get --uri "${ASE_ID}/configurations/networking?api-version=2022-03-01" --query properties.windowsOutboundIpAddresses
``` ## 5. Update dependent resources with new outbound IPs
app-service Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/getting-started.md
zone_pivot_groups: app-service-getting-started-stacks
| **Monitor your app**|- [Troubleshoot with Azure Monitor](./tutorial-troubleshoot-monitor.md)<br>- [Log stream](./troubleshoot-diagnostic-logs.md#stream-logs)<br>- [Diagnose and solve tool](./overview-diagnostics.md)| | **Add domains & certificates** |- [Map a custom domain](./app-service-web-tutorial-custom-domain.md?tabs=root%2Cazurecli)<br>- [Add SSL certificate](./configure-ssl-certificate.md)| | **Connect to a database** | - [MySQL with PHP](./tutorial-php-mysql-app.md)|
-| **Custom containers** |- [Multi-container](./quickstart-multi-container.md)|
+| **Custom containers** |- [Multi-container](./quickstart-multi-container.md)<br>- [Sidecar containers](tutorial-custom-container-sidecar.md)|
| **Review best practices** | - [Scale your app]()<br>- [Deployment](./deploy-best-practices.md)<br>- [Security](/security/benchmark/azure/baselines/app-service-security-baseline?toc=/azure/app-service/toc.json)<br>- [Virtual Network](./configure-vnet-integration-enable.md)| ::: zone-end
app-service Overview Vnet Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-vnet-integration.md
Title: Integrate your app with an Azure virtual network
description: Integrate your app in Azure App Service with Azure virtual networks. Previously updated : 02/06/2024 Last updated : 04/05/2024
When virtual network integration is enabled, your app makes outbound calls throu
When all traffic routing is enabled, all outbound traffic is sent into your virtual network. If all traffic routing isn't enabled, only private traffic (RFC1918) and service endpoints configured on the integration subnet is sent into the virtual network. Outbound traffic to the internet is routed directly from the app.
-For Windows App Service plans, the virtual network integration feature supports two virtual interfaces per worker. Two virtual interfaces per worker mean two virtual network integrations per App Service plan. In other words, a Windows App Service plan can have virtual network integrations with up to two subnets/virtual networks. The apps in the same App Service plan can only use one of the virtual network integrations to a specific subnet, meaning an app can only have a single virtual network integration at a given time. Linux App Service plans support only one virtual network integration per plan.
+The virtual network integration feature supports two virtual interfaces per worker. Two virtual interfaces per worker mean two virtual network integrations per App Service plan. In other words, an App Service plan can have virtual network integrations with up to two subnets/virtual networks. The apps in the same App Service plan can only use one of the virtual network integrations to a specific subnet, meaning an app can only have a single virtual network integration at a given time.
## Subnet requirements
Virtual network integration depends on a dedicated subnet. When you create a sub
When you scale up/down in instance size, the amount of IP addresses used by the App Service plan is temporarily doubled while the scale operation completes. The new instances need to be fully operational before the existing instances are deprovisioned. The scale operation affects the real, available supported instances for a given subnet size. Platform upgrades need free IP addresses to ensure upgrades can happen without interruptions to outbound traffic. Finally, after scale up, down, or in operations complete, there might be a short period of time before IP addresses are released. In rare cases, this operation can be up to 12 hours.
-Because subnet size can't be changed after assignment, use a subnet that's large enough to accommodate whatever scale your app might reach. You should also reserve IP addresses for platform upgrades. To avoid any issues with subnet capacity, use a `/26` with 64 addresses. When you're creating subnets in Azure portal as part of integrating with the virtual network, a minimum size of /27 is required. If the subnet already exists before integrating through the portal, you can use a /28 subnet.
+Because subnet size can't be changed after assignment, use a subnet that's large enough to accommodate whatever scale your app might reach. You should also reserve IP addresses for platform upgrades. To avoid any issues with subnet capacity, use a `/26` with 64 addresses. When you're creating subnets in Azure portal as part of integrating with the virtual network, a minimum size of `/27` is required. If the subnet already exists before integrating through the portal, you can use a `/28` subnet.
-When you want your apps in your plan to reach a virtual network that apps in another plan already connect to, select a different subnet than the one being used by the pre-existing virtual network integration.
+With multi plan subnet join (MPSJ), you can join multiple App Service plans to the same subnet. All App Service plans must be in the same subscription, but the virtual network/subnet can be in a different subscription. Each instance from each App Service plan requires an IP address from the subnet, and to use MPSJ a minimum subnet size of `/26` is required. If you plan to join many plans, or plans with large scale, plan for larger subnet ranges.
+
+>[!NOTE]
+> Multi plan subnet join is currently in public preview. During preview the following known limitations should be observed:
+>
+> * The minimum requirement for subnet size of `/26` is currently not enforced, but will be enforced at GA.
+> * There is currently no validation of whether the subnet has available IP addresses, so you might be able to join an additional plan even though its instances won't get an IP address. You can view available IPs in the Virtual network integration page in the Azure portal for apps that are already connected to the subnet.
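As an illustrative sketch (resource names and the address range are assumptions), creating a delegated `/26` subnet and integrating an app with it might look like the following:

```azurecli-interactive
# Create a /26 subnet delegated to App Service (address range is an example)
az network vnet subnet create --resource-group <group-name> --vnet-name <vnet-name> --name <subnet-name> --address-prefixes 10.0.1.0/26 --delegations Microsoft.Web/serverFarms

# Integrate the app with the subnet
az webapp vnet-integration add --resource-group <group-name> --name <app-name> --vnet <vnet-name> --subnet <subnet-name>
```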
### Windows Containers specific limits
There are some limitations with using virtual network integration:
* The feature is available from all App Service deployments in Premium v2 and Premium v3. It's also available in Basic and Standard tier but only from newer App Service deployments. If you're on an older deployment, you can only use the feature from a Premium v2 App Service plan. If you want to make sure you can use the feature in a Basic or Standard App Service plan, create your app in a Premium v3 App Service plan. Those plans are only supported on our newest deployments. You can scale down if you want after the plan is created. * The feature isn't available for Isolated plan apps in an App Service Environment. * You can't reach resources across peering connections with classic virtual networks.
-* The feature requires an unused subnet that's an IPv4 `/28` block or larger in an Azure Resource Manager virtual network.
+* The feature requires an unused subnet that's an IPv4 `/28` block or larger in an Azure Resource Manager virtual network. MPSJ requires a `/26` block or larger.
* The app and the virtual network must be in the same region. * The integration virtual network can't have IPv6 address spaces defined. * The integration subnet can't have [service endpoint policies](../virtual-network/virtual-network-service-endpoint-policies-overview.md) enabled.
-* Only one App Service plan virtual network integration connection per integration subnet is supported.
* You can't delete a virtual network with an integrated app. Remove the integration before you delete the virtual network. * You can't have more than two virtual network integrations per App Service plan. Multiple apps in the same App Service plan can use the same virtual network integration. * You can't change the subscription of an app or a plan while there's an app that's using virtual network integration.
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview.md
Azure App Service is a fully managed platform as a service (PaaS) offering for d
* **Multiple languages and frameworks** - App Service has first-class support for ASP.NET, ASP.NET Core, Java, Node.js, PHP, or Python. You can also run [PowerShell and other scripts or executables](webjobs-create.md) as background services. * **Managed production environment** - App Service automatically [patches and maintains the OS and language frameworks](overview-patch-os-runtime.md) for you. Spend time writing great apps and let Azure worry about the platform.
-* **Containerization and Docker** - Dockerize your app and host a custom Windows or Linux container in App Service. Run multi-container apps with Docker Compose. Migrate your Docker skills directly to App Service.
+* **Containerization and Docker** - Dockerize your app and host a custom Windows or Linux container in App Service. Run sidecar containers of your choice. Migrate your Docker skills directly to App Service.
* **DevOps optimization** - Set up [continuous integration and deployment](deploy-continuous-deployment.md) with Azure DevOps, GitHub, BitBucket, Docker Hub, or Azure Container Registry. Promote updates through [test and staging environments](deploy-staging-slots.md). Manage your apps in App Service by using [Azure PowerShell](/powershell/azure/) or the [cross-platform command-line interface (CLI)](/cli/azure/install-azure-cli). * **Global scale with high availability** - Scale [up](manage-scale-up.md) or [out](../azure-monitor/autoscale/autoscale-get-started.md) manually or automatically. Host your apps anywhere in Microsoft's global datacenter infrastructure, and the App Service [SLA](https://azure.microsoft.com/support/legal/sla/app-service/) promises high availability. * **Connections to SaaS platforms and on-premises data** - Choose from [many hundreds of connectors](/connectors/connector-reference/connector-reference-logicapps-connectors) for enterprise systems (such as SAP), SaaS services (such as Salesforce), and internet services (such as Facebook). Access on-premises data using [Hybrid Connections](app-service-hybrid-connections.md) and [Azure Virtual Networks](./overview-vnet-integration.md).
app-service Quickstart Multi Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-multi-container.md
# Create a multi-container (preview) app using a Docker Compose configuration > [!NOTE]
-> Multi-container is in preview.
+> Sidecar containers (preview) will supersede multi-container apps in App Service. To get started, see [Tutorial: Configure a sidecar container for custom container in Azure App Service (preview)](tutorial-custom-container-sidecar.md).
[Web App for Containers](overview.md#app-service-on-linux) provides a flexible way to use Docker images. This quickstart shows how to deploy a multi-container app (preview) to Web App for Containers in the [Cloud Shell](../cloud-shell/overview.md) using a Docker Compose configuration.
app-service Tutorial Custom Container Sidecar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-custom-container-sidecar.md
+
+ Title: 'Tutorial: Configure a sidecar container'
+description: Add sidecar containers to your custom container in Azure App Service. Add or update services to your application without changing your application container.
+ Last updated : 04/07/2024++
+keywords: azure app service, web app, linux, windows, docker, container, sidecar
++
+# Tutorial: Configure a sidecar container for custom container in Azure App Service (preview)
+
+In this tutorial, you add an OpenTelemetry collector as a sidecar container to a Linux custom container app in Azure App Service.
+
+In Azure App Service, you can add up to 4 sidecar containers for each sidecar-enabled custom container app. Sidecar containers let you deploy extra services and features to your container application without making them tightly coupled to your main application container. For example, you can add monitoring, logging, configuration, and networking services as sidecar containers. An OpenTelemetry collector sidecar is one such monitoring example.
+
+For more information about sidecars, see [Sidecar pattern](/azure/architecture/patterns/sidecar).
+
+> [!NOTE]
+> For the preview period, sidecar support must be enabled at app creation. There's currently no way to enable sidecar support for an existing app.
++
+## 1. Set up the needed resources
+
+First you create the resources that the tutorial uses (for more information, see [Cloud Shell Overview](../cloud-shell/overview.md)). They're used for this particular scenario and aren't required for sidecar containers in general.
+
+1. In the [Azure Cloud Shell](https://shell.azure.com), run the following commands:
+
+ ```azurecli-interactive
+ git clone https://github.com/Azure-Samples/app-service-sidecar-tutorial-prereqs
+ cd app-service-sidecar-tutorial-prereqs
+ azd provision
+ ```
+
+1. When prompted, supply the environment name, subscription, and region you want. For example:
+
+ - Environment name: *my-sidecar-env*
+ - Subscription: your subscription
+ - Region: *(Europe) West Europe*
+
+ When deployment completes, you should see the following output:
+
+ <pre>
+ APPLICATIONINSIGHTS_CONNECTION_STRING = <b>InstrumentationKey=...;IngestionEndpoint=...;LiveEndpoint=...</b>
+
+ Open resource group in the portal: <b>https://portal.azure.com/#@/resource/subscriptions/.../resourceGroups/...</b>
+ </pre>
+
+1. Open the resource group link in a browser tab. You'll need to use the connection string later.
+
+ > [!NOTE]
+ > `azd provision` uses the included templates to create the following Azure resources:
+ >
+ > - A resource group
+ > - A [container registry](../container-registry/container-registry-intro.md) with two images deployed:
+ > - An Nginx image with the OpenTelemetry module.
+ > - An OpenTelemetry collector image, configured to export to [Azure Monitor](../azure-monitor/overview.md).
+ > - A [log analytics workspace](../azure-monitor/logs/log-analytics-overview.md)
+ > - An [Application Insights](../azure-monitor/app/app-insights-overview.md) component
+
+## 2. Create a sidecar-enabled app
+
+1. In the resource group's management page, select **Create**.
+1. Search for *web app*, then select the down arrow on **Create** and select **Web App**.
+
+ :::image type="content" source="media/tutorial-custom-container-sidecar/create-web-app.png" alt-text="Screenshot showing the Azure Marketplace page with the web app being searched and create web app buttons being clicked.":::
+
+1. Configure the **Basics** panel as follows:
+ - **Name**: A unique name
+ - **Publish**: **Container**
+ - **Operating System**: **Linux**
+ - **Region**: Same region as the one you chose with `azd provision`
+ - **Linux Plan**: A new App Service plan
+
+ :::image type="content" source="media/tutorial-custom-container-sidecar/create-wizard-basics-panel.png" alt-text="Screenshot showing the web app create wizard and settings for a Linux custom container app highlighted.":::
+
+1. Select **Container**. Configure the **Container** panel as follows:
+ - **Sidecar support**: **Enabled**
+ - **Image Source**: **Azure Container Registry**
+ - **Registry**: The registry created by `azd provision`
+ - **Image**: **nginx**
+ - **Tag**: **latest**
+ - **Port**: **80**
+
+ :::image type="content" source="media/tutorial-custom-container-sidecar/create-wizard-container-panel.png" alt-text="Screenshot showing the web app create wizard and settings for the container image and the sidecar support highlighted.":::
+
+ > [!NOTE]
+ > These settings are configured differently in sidecar-enabled apps. For more information, see [Differences for sidecar-enabled apps](#differences-for-sidecar-enabled-apps).
+
+1. Select **Review + create**, then select **Create**.
+
+1. Once the deployment completes, select **Go to resource**.
+
+1. In a new browser tab, navigate to `https://<app-name>.azurewebsites.net` and see the default Nginx page.
+
+## 3. Add a sidecar container
+
+In this section, you add a sidecar container to your custom container app.
+
+1. In the app's management page, from the left menu, select **Deployment Center**.
+
+ The deployment center shows you all the containers in the app. Right now, it only has the main container.
+
+1. Select **Add** and configure the new container as follows:
+ - **Name**: *otel-collector*
+ - **Image source**: **Azure Container Registry**
+ - **Registry**: The registry created by `azd provision`
+ - **Image**: **otel-collector**
+ - **Tag**: **latest**
+ - **Port**: **4317**
+
+ Port 4317 is the default port used by the sample container to receive OpenTelemetry data. It's accessible from any other container in the app at `localhost:4317`. This is exactly how the Nginx container sends data to the sidecar (see the [OpenTelemetry module configuration for the sample Nginx image](https://github.com/Azure-Samples/app-service-sidecar-tutorial-prereqs/blob/main/images/nginx/opentelemetry_module.conf)).
+
+1. Select **Apply**.
+
+ :::image type="content" source="media/tutorial-custom-container-sidecar/add-sidecar-container.png" alt-text="Screenshot showing how to configure a sidecar container in a web app's deployment center.":::
+
+ You should now see two containers in the deployment center. The main container is marked **Main**, and the sidecar container is marked **Sidecar**. Each app must have one main container but can have multiple sidecar containers.
+
+## 4. Configure environment variables
+
+For the sample scenario, the otel-collector sidecar is configured to export the OpenTelemetry data to Azure Monitor, but it needs the connection string as an environment variable (see the [OpenTelemetry configuration file for the otel-collector image](https://github.com/Azure-Samples/app-service-sidecar-tutorial-prereqs/blob/main/images/otel-collector/otel-collector-config.yaml)).
+
+You configure environment variables for the containers like any App Service app, by configuring [app settings](configure-common.md#configure-app-settings). The app settings are accessible to all the containers in the app.
+
+1. In the app's management page, from the left menu, select **Configuration**.
+
+1. Add an app setting by selecting **New application setting** and configure it as follows:
+ - **Name**: *APPLICATIONINSIGHTS_CONNECTION_STRING*
+ - **Value**: The connection string in the output of `azd provision`
+
+1. Select **Save**, then select **Continue**.
+
+ :::image type="content" source="media/tutorial-custom-container-sidecar/configure-app-settings.png" alt-text="Screenshot showing a web app's Configuration page with two app settings added.":::
+
+> [!NOTE]
+> Certain app settings don't apply to sidecar-enabled apps. For more information, see [Differences for sidecar-enabled apps](#differences-for-sidecar-enabled-apps).
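If you prefer the Azure CLI over the portal for this step, a minimal sketch (the placeholder value is an assumption; use the connection string from the `azd provision` output):

```azurecli-interactive
az webapp config appsettings set --resource-group <group-name> --name <app-name> --settings APPLICATIONINSIGHTS_CONNECTION_STRING="<connection-string-from-azd-provision>"
```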
+
+## 5. Verify in Application Insights
+
+The otel-collector sidecar should export data to Application Insights now.
+
+1. Back in the browser tab for `https://<app-name>.azurewebsites.net`, refresh the page a few times to generate some web requests.
+1. Go back to the resource group overview page and select the Application Insights resource. You should now see some data in the default charts.
+
+ :::image type="content" source="media/tutorial-custom-container-sidecar/app-insights-view.png" alt-text="Screenshot of the Application Insights page showing data in the default charts.":::
+
+> [!NOTE]
+> In this common monitoring scenario, Application Insights is just one of the OpenTelemetry targets you can use; others include Jaeger, Prometheus, and Zipkin.
+
+## Clean up resources
+
+When you no longer need the environment, you can delete the resource group, App Service app, and all related resources. Just run this command in the Cloud Shell, in the cloned repository:
+
+```azurecli-interactive
+azd down
+```
+
+## Differences for sidecar-enabled apps
+
+You configure sidecar-enabled apps differently than apps that aren't sidecar-enabled. Specifically, you don't configure the main container and sidecars with app settings, but directly in the resource properties. These app settings don't apply to sidecar-enabled apps:
+
+- Registry authentication settings: `DOCKER_REGISTRY_SERVER_URL`, `DOCKER_REGISTRY_SERVER_USERNAME` and `DOCKER_REGISTRY_SERVER_PASSWORD`.
+- Container port: `WEBSITES_PORT`
+
+## More resources
+
+- [Configure custom container](configure-custom-container.md)
+- [Deploy custom containers with GitHub Actions](deploy-container-github-action.md)
+- [OpenTelemetry](https://opentelemetry.io/)
app-service Tutorial Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-custom-container.md
Or, check out other resources:
> [!div class="nextstepaction"] > [Configure custom container](configure-custom-container.md)
+> [!div class="nextstepaction"]
+> [Configure sidecar](configure-custom-container.md)
+ ::: zone pivot="container-linux" > [!div class="nextstepaction"]
-> [Tutorial: Multi-container WordPress app](tutorial-multi-container-app.md)
+> [Tutorial: Configure a sidecar container for custom container in Azure App Service (preview)](tutorial-custom-container-sidecar.md)
::: zone-end
app-service Tutorial Multi Container App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-multi-container-app.md
# Tutorial: Create a multi-container (preview) app in Web App for Containers > [!NOTE]
-> Multi-container is in preview.
+> Sidecar containers (preview) will supersede multi-container apps in App Service. To get started, see [Tutorial: Configure a sidecar container for custom container in Azure App Service (preview)](tutorial-custom-container-sidecar.md).
[Web App for Containers](overview.md#app-service-on-linux) provides a flexible way to use Docker images. In this tutorial, you'll learn how to create a multi-container app using WordPress and MySQL. You'll complete this tutorial in Cloud Shell, but you can also run these commands locally with the [Azure CLI](/cli/azure/install-azure-cli) command-line tool (2.0.32 or later).
app-service Webjobs Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/webjobs-create.md
description: Learn how to use WebJobs to run background tasks in Azure App Servi
ms.assetid: af01771e-54eb-4aea-af5f-f883ff39572b Previously updated : 7/30/2023 Last updated : 3/01/2024 -+ adobe-target: true adobe-target-activity: DocsExpΓÇô386541ΓÇôA/BΓÇôEnhanced-Readability-QuickstartsΓÇô2.19.2021 adobe-target-experience: Experience B
adobe-target-content: ./webjobs-create-ieux
# Run background tasks with WebJobs in Azure App Service
+> [!NOTE]
+> WebJobs for **Windows container**, **Linux code**, and **Linux container** is in preview. WebJobs for Windows code is generally available and not in preview.
+ Deploy WebJobs by using the [Azure portal](https://portal.azure.com) to upload an executable or script. You can run background tasks in the Azure App Service. If instead of the Azure App Service, you're using Visual Studio to develop and deploy WebJobs, see [Deploy WebJobs using Visual Studio](webjobs-dotnet-deploy-vs.md). ## Overview
-WebJobs is a feature of [Azure App Service](index.yml) that enables you to run a program or script in the same instance as a web app, API app, or mobile app. There's no extra cost to use WebJobs.
+WebJobs is a feature of [Azure App Service](index.yml) that enables you to run a program or script in the same instance as a web app. There's no extra cost to use WebJobs.
-You can use the Azure WebJobs SDK with WebJobs to simplify many programming tasks. WebJobs aren't supported for App Service on Linux yet. For more information, see [What is the WebJobs SDK](https://github.com/Azure/azure-webjobs-sdk/wiki).
+You can use the Azure WebJobs SDK with WebJobs to simplify many programming tasks. For more information, see [What is the WebJobs SDK](https://github.com/Azure/azure-webjobs-sdk/wiki).
Azure Functions provides another way to run programs and scripts. For a comparison between WebJobs and Functions, see [Choose between Flow, Logic Apps, Functions, and WebJobs](../azure-functions/functions-compare-logic-apps-ms-flow-webjobs.md). ++ ## WebJob types
-The following table describes the differences between *continuous* and *triggered* WebJobs.
+### <a name="acceptablefiles"></a>Supported file types for scripts or programs
+
+### [Windows code](#tab/windowscode)
+The following file types are supported:<br>
+**.cmd**, **.bat**, **.exe** (using Windows cmd)<br>**.ps1** (using PowerShell)<br>**.sh** (using Bash)<br>**.php** (using PHP)<br>**.py** (using Python)<br>**.js** (using Node.js)<br>**.jar** (using Java)<br><br>The necessary runtimes to run these file types are already installed on the web app instance.
+### [Windows container](#tab/windowscontainer)
+> [!NOTE]
+> WebJobs for Windows container is in preview.
+>
+
+The following file types are supported:<br>
+**.cmd**, **.bat**, **.exe** (using Windows cmd)<br><br>In addition to these file types, WebJobs written in the language runtime of the Windows container app are also supported.<br>Example: .jar and .war files if the container is a Java app.
+### [Linux code](#tab/linuxcode)
+> [!NOTE]
+> WebJobs for Linux code is in preview.
+>
+
+**.sh** scripts are supported.<br><br>In addition to shell scripts, WebJobs written in the language of the selected runtime are also supported.<br>Example: Python (.py) scripts if the main site is a Python code app.
+### [Linux container](#tab/linuxcontainer)
+> [!NOTE]
+> WebJobs for Linux container is in preview.
+>
+**.sh** scripts are supported. <br><br>In addition to shell scripts, WebJobs written in the language runtime of the Linux container app are also supported. <br>Example: Node (.js) scripts if the site is a Node.js app.
+++
+### Continuous vs. triggered WebJobs
+
+The following table describes the differences between *continuous* and *triggered* WebJobs:
|Continuous |Triggered | |||
The following table describes the differences between *continuous* and *triggere
[!INCLUDE [webjobs-always-on-note](../../includes/webjobs-always-on-note.md)]
-## <a name="acceptablefiles"></a>Supported file types for scripts or programs
-
-The following file types are supported:
-* .cmd, .bat, .exe (using Windows cmd)
-* .ps1 (using PowerShell)
-* .sh (using Bash)
-* .php (using PHP)
-* .py (using Python)
-* .js (using Node.js)
-* .jar (using Java)
## <a name="CreateContinuous"></a> Create a continuous WebJob
when making changes in one don't forget the other two.
| **Name** | myContinuousWebJob | A name that is unique within an App Service app. Must start with a letter or a number and must not contain special characters other than "-" and "_". | | **File Upload** | ConsoleApp.zip | A *.zip* file that contains your executable or script file and any supporting files needed to run the program or script. The supported executable or script file types are listed in the [Supported file types](#acceptablefiles) section. | | **Type** | Continuous | The [WebJob types](#webjob-types) are described earlier in this article. |
- | **Scale** | Multi Instance | Available only for Continuous WebJobs. Determines whether the program or script runs on all instances or just one instance. The option to run on multiple instances doesn't apply to the Free or Shared [pricing tiers](https://azure.microsoft.com/pricing/details/app-service/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). |
+ | **Scale** | Multi Instance | Available only for Continuous WebJobs. Determines whether the program or script runs on all instances or one instance. The option to run on multiple instances doesn't apply to the Free or Shared [pricing tiers](https://azure.microsoft.com/pricing/details/app-service/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). |
1. The new WebJob appears on the **WebJobs** page. If you see a message that says the WebJob was added, but you don't see it, select **Refresh**.
To learn more, see [Scheduling a triggered WebJob](webjobs-dotnet-deploy-vs.md#s
## Manage WebJobs
-You can manage the running state individual WebJobs running in your site in the [Azure portal](https://portal.azure.com). Just go to **Settings** > **WebJobs**, choose the WebJob, and you can start and stop the WebJob. You can also view and modify the password of the webhook that runs the WebJob.
+You can manage the running state of individual WebJobs running in your site in the [Azure portal](https://portal.azure.com). Go to **Settings** > **WebJobs**, choose the WebJob, and then start or stop it. You can also view and modify the password of the webhook that runs the WebJob.
You can also [add an application setting](configure-common.md#configure-app-settings) named `WEBJOBS_STOPPED` with a value of `1` to stop all WebJobs running on your site. You can use this method to prevent conflicting WebJobs from running in both staging and production slots. You can similarly use a value of `1` for the `WEBJOBS_DISABLE_SCHEDULE` setting to disable triggered WebJobs in the site or a staging slot. For slots, remember to enable the **Deployment slot setting** option so that the setting itself doesn't get swapped.
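For example, a minimal sketch that stops all WebJobs in a staging slot and marks the setting as a deployment slot setting so it doesn't swap (app and slot names are placeholders):

```azurecli-interactive
az webapp config appsettings set --resource-group <group-name> --name <app-name> --slot staging --slot-settings WEBJOBS_STOPPED=1
```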
You can also [add an application setting](configure-common.md#configure-app-sett
## WebJob statuses
-Below is a list of common WebJob statuses:
+The following is a list of common WebJob statuses:
-- **Initializing** The app has just started and the WebJob is going through its initialization process.
+- **Initializing** The app has started and the WebJob is going through its initialization process.
- **Starting** The WebJob is starting up.
- **Running** The WebJob is running.
- **PendingRestart** A continuous WebJob exits in less than two minutes since it started for any reason, and App Service waits 60 seconds before restarting the WebJob. If the continuous WebJob exits after the two-minute mark, App Service doesn't wait the 60 seconds and restarts the WebJob immediately.
- **Stopped** The WebJob was stopped (usually from the Azure portal) and is currently not running and won't run until you start it again manually, even for a continuous or scheduled WebJob.
-- **Aborted** This can occur for a number of reasons, such as when a long-running WebJob reaches the timeout marker.
+- **Aborted** This can occur for many reasons, such as when a long-running WebJob reaches the timeout marker.
## <a name="NextSteps"></a> Next steps
automation Automation Runbook Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-runbook-types.md
Currently, Python 3.10 (preview) runtime version is supported for both Cloud and
- Uses the robust Python libraries.
- Can run in Azure or on Hybrid Runbook Workers.
-- For Python 2.7, Windows Hybrid Runbook Workers are supported with [python 2.7](https://www.python.org/downloads/release/latest/python2) installed.
+- For Python 2.7, Windows Hybrid Runbook Workers are supported with [python 2.7](https://www.python.org/downloads/release/python-270/) installed.
- For Python 3.8 Cloud Jobs, Python 3.8 version is supported. Scripts and packages from any 3.x version might work if the code is compatible across different versions.
- For Python 3.8 Hybrid jobs on Windows machines, you can choose to install any 3.x version you may want to use.
- For Python 3.8 Hybrid jobs on Linux machines, we depend on the Python 3 version installed on the machine to run DSC OMSConfig and the Linux Hybrid Worker. Different versions should work if there are no breaking changes in method signatures or contracts between versions of Python 3.
automation Python Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/python-packages.md
Azure Automation doesn't resolve dependencies for Python packages during the imp
### Manually download
-On a Windows 64-bit machine with [Python2.7](https://www.python.org/downloads/release/latest/python2) and [pip](https://pip.pypa.io/en/stable/) installed, run the following command to download a package and all its dependencies:
+On a Windows 64-bit machine with [Python2.7](https://www.python.org/downloads/release/python-270/) and [pip](https://pip.pypa.io/en/stable/) installed, run the following command to download a package and all its dependencies:
```cmd
C:\Python27\Scripts\pip2.7.exe download -d <output dir> <package name>
```
azure-cache-for-redis Cache Nodejs Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-nodejs-get-started.md
In this quickstart, you incorporate Azure Cache for Redis into a Node.js app to
- Azure subscription - [create one for free](https://azure.microsoft.com/free/) - [node_redis](https://github.com/mranney/node_redis), which you can install with the command `npm install redis`.
-For examples of using other Node.js clients, see the individual documentation for the Node.js clients listed at [Node.js Redis clients](https://redis.io/clients#nodejs).
+For examples of using other Node.js clients, see the individual documentation for the Node.js clients listed at [Node.js Redis clients](https://redis.io/docs/connect/clients/nodejs/).
## Create a cache
azure-government Documentation Government Overview Wwps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-overview-wwps.md
Azure Stack Hub and Azure Stack Edge represent key enabling technologies that al
[Azure Stack Hub](https://azure.microsoft.com/products/azure-stack/hub/) (formerly Azure Stack) is an integrated system of software and validated hardware that you can purchase from Microsoft hardware partners, deploy in your own data center, and then operate entirely on your own or with the help from a managed service provider. With Azure Stack Hub, you're always fully in control of access to your data. Azure Stack Hub can accommodate up to [16 physical servers per Azure Stack Hub scale unit](/azure-stack/operator/azure-stack-overview). It represents an extension of Azure, enabling you to provision various IaaS and PaaS services and effectively bring multi-tenant cloud technology to on-premises and edge environments. You can run many types of VM instances, App Services, Containers (including Azure AI containers), Functions, Azure Monitor, Key Vault, Event Hubs, and other services while using the same development tools, APIs, and management processes you use in Azure. Azure Stack Hub isn't dependent on connectivity to Azure to run deployed applications and enable operations via local connectivity.
-In addition to Azure Stack Hub, which is intended for on-premises deployment (for example, in a data center), a ruggedized and field-deployable version called [Tactical Azure Stack Hub](https://www.delltechnologies.com/en-us/collaterals/unauth/data-sheets/products/converged-infrastructure/dell-emc-integrated-system-for-azure-stack-hub-tactical-spec-sheet.pdf) is also available to address tactical edge deployments for limited or no connectivity, fully mobile requirements, and harsh conditions requiring military specification solutions.
+In addition to Azure Stack Hub, which is intended for on-premises deployment (for example, in a data center), a ruggedized and field-deployable version called [Tactical Azure Stack Hub](https://www.dell.com/support/manuals/en-us/cloud-for-microsoft-azure-stack14g/cas_pub_tech_book/dell-integrated-system-for-microsoft-azure-stack-hub-tactical?guid=guid-3b3ec158-8940-45e4-b637-3d761cfe4a14&lang=en-us) is also available to address tactical edge deployments for limited or no connectivity, fully mobile requirements, and harsh conditions requiring military specification solutions.
Azure Stack Hub can be [operated disconnected](/azure-stack/operator/azure-stack-disconnected-deployment) from Azure or the Internet. You can run the next generation of AI-enabled hybrid applications where your data lives. For example, you can rely on Azure Stack Hub to bring a trained AI model to the edge and integrate it with your applications for low-latency intelligence, with no tool or process changes for local applications.
For classified workloads, you can provision key enabling Azure services to secur
Similar data classification schemes exist in many countries/regions. For top secret data, you can deploy Azure Stack Hub, which can operate disconnected from Azure and the Internet.
-[Tactical Azure Stack Hub](https://www.delltechnologies.com/en-us/collaterals/unauth/data-sheets/products/converged-infrastructure/dell-emc-integrated-system-for-azure-stack-hub-tactical-spec-sheet.pdf) is also available to address tactical edge deployments for limited or no connectivity, fully mobile requirements, and harsh conditions requiring military specification solutions. Figure 8 depicts key enabling services that you can provision to accommodate various workloads on Azure.
+[Tactical Azure Stack Hub](https://www.dell.com/support/manuals/en-us/cloud-for-microsoft-azure-stack14g/cas_pub_tech_book/dell-integrated-system-for-microsoft-azure-stack-hub-tactical?guid=guid-3b3ec158-8940-45e4-b637-3d761cfe4a14&lang=en-us) is also available to address tactical edge deployments for limited or no connectivity, fully mobile requirements, and harsh conditions requiring military specification solutions. Figure 8 depicts key enabling services that you can provision to accommodate various workloads on Azure.
:::image type="content" source="./media/wwps-data-classifications.png" alt-text="Azure support for various data classifications" border="false"::: **Figure 8.** Azure support for various data classifications
Listed below are key enabling products that you may find helpful when deploying
- All recommended technologies used for secret data. - [Azure Stack Hub](/azure-stack/operator/azure-stack-overview) (formerly Azure Stack) enables you to run workloads using the same architecture and APIs as in Azure while having a physically isolated network for your highest classification data. - [Azure Stack Edge](../databox-online/azure-stack-edge-gpu-overview.md) (formerly Azure Data Box Edge) allows the storage and processing of highest classification data but also enables you to upload resulting information or models directly to Azure. This approach creates a path for information sharing between domains that makes it easier and more secure.-- In addition to Azure Stack Hub, which is intended for on-premises deployment (for example, in a data center), a ruggedized and field-deployable version called [Tactical Azure Stack Hub](https://www.delltechnologies.com/en-us/collaterals/unauth/data-sheets/products/converged-infrastructure/dell-emc-integrated-system-for-azure-stack-hub-tactical-spec-sheet.pdf) is also available to address tactical edge deployments for limited or no connectivity, fully mobile requirements, and harsh conditions requiring military specification solutions.
+- In addition to Azure Stack Hub, which is intended for on-premises deployment (for example, in a data center), a ruggedized and field-deployable version called [Tactical Azure Stack Hub](https://www.dell.com/support/manuals/en-us/cloud-for-microsoft-azure-stack14g/cas_pub_tech_book/dell-integrated-system-for-microsoft-azure-stack-hub-tactical?guid=guid-3b3ec158-8940-45e4-b637-3d761cfe4a14&lang=en-us) is also available to address tactical edge deployments for limited or no connectivity, fully mobile requirements, and harsh conditions requiring military specification solutions.
- User-provided hardware security modules (HSMs) allow you to store your encryption keys in HSMs deployed on-premises and controlled solely by you. Accommodating top secret data will likely require a disconnected environment, which is what Azure Stack Hub provides. Azure Stack Hub can be [operated disconnected](/azure-stack/operator/azure-stack-disconnected-deployment) from Azure or the Internet. Even though ΓÇ£air-gappedΓÇ¥ networks don't necessarily increase security, many governments may be reluctant to store data with this classification in an Internet connected environment.
This section addresses common customer questions related to Azure public, privat
- **DevOps personnel (cleared nationals):** What controls or clearance levels does Microsoft have for the personnel that have DevOps access to cloud environments or physical access to data centers? **Answer:** Microsoft conducts [background screening](./documentation-government-plan-security.md#screening) on operations personnel with access to production systems and physical data center infrastructure. Microsoft cloud background check includes verification of education and employment history upon hire, and extra checks conducted every two years thereafter (where permissible by law), including criminal history check, OFAC list, BIS denied persons list, and DDTC debarred parties list. - **Data center site options:** Is Microsoft willing to deploy a data center to a specific physical location to meet more advanced security requirements? **Answer:** You should inquire with your Microsoft account team regarding options for data center locations. - **Service availability guarantee:** How can my organization ensure that Microsoft (or particular government or other entity) canΓÇÖt turn off our cloud services? **Answer:** You should review the Microsoft [Product Terms](https://www.microsoft.com/licensing/docs/view/Product-Terms) (formerly Online Services Terms) and the Microsoft Products and Services [Data Protection Addendum](https://aka.ms/dpa) (DPA) for contractual commitments Microsoft makes regarding service availability and use of online services.-- **Non-traditional cloud service needs:** What options does Microsoft provide for periodically internet free/disconnected environments? **Answer:** In addition to [Azure Stack Hub](https://azure.microsoft.com/products/azure-stack/hub/), which is intended for on-premises deployment and disconnected scenarios, a ruggedized and field-deployable version called [Tactical Azure Stack Hub](https://www.delltechnologies.com/en-us/collaterals/unauth/data-sheets/products/converged-infrastructure/dell-emc-integrated-system-for-azure-stack-hub-tactical-spec-sheet.pdf) is also available to address tactical edge deployments for limited or no connectivity, fully mobile requirements, and harsh conditions requiring military specification solutions.
+- **Non-traditional cloud service needs:** What options does Microsoft provide for periodically internet free/disconnected environments? **Answer:** In addition to [Azure Stack Hub](https://azure.microsoft.com/products/azure-stack/hub/), which is intended for on-premises deployment and disconnected scenarios, a ruggedized and field-deployable version called [Tactical Azure Stack Hub](https://www.dell.com/support/manuals/en-us/cloud-for-microsoft-azure-stack14g/cas_pub_tech_book/dell-integrated-system-for-microsoft-azure-stack-hub-tactical?guid=guid-3b3ec158-8940-45e4-b637-3d761cfe4a14&lang=en-us) is also available to address tactical edge deployments for limited or no connectivity, fully mobile requirements, and harsh conditions requiring military specification solutions.
### Transparency and audit
azure-maps Zoom Levels And Tile Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/zoom-levels-and-tile-grid.md
Each additional zoom level quad-divides the tiles of the previous one, creating
The Azure Maps interactive map controls for web and Android support 25 zoom levels, numbered 0 through 24. Although road data is only available at the zoom levels in which the tiles are available.
-The following table provides the full list of values for zoom levels where the tile size is **512** pixels square at latitude 0:
+The following table provides the full list of values for zoom levels where the tile size is **256** pixels square:
|Zoom level|Meters/pixel|Meters/tile side|
| --- | --- | --- |
azure-monitor Prometheus Argo Cd Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-argo-cd-integration.md
+
+ Title: Configure Argo CD integration for Prometheus metrics in Azure Monitor
+description: Describes how to configure Argo CD monitoring by using Prometheus metrics in Azure Monitor for a Kubernetes cluster.
+ Last updated : 3/25/2024++++
+# Argo CD
+Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes. Argo CD follows the GitOps pattern of using Git repositories as the source of truth for defining the desired application state. It automates the deployment of the desired application states in the specified target environments. Application deployments can track updates to branches, tags, or pinned to a specific version of manifests at a Git commit.
+This article describes how to configure Azure Managed Prometheus with Azure Kubernetes Service (AKS) to monitor Argo CD by scraping Prometheus metrics.
+
+## Prerequisites
+- Argo CD running on AKS
+- Azure Managed Prometheus enabled on the AKS cluster - [Enable Azure Managed Prometheus on AKS](kubernetes-monitoring-enable.md#enable-prometheus-and-grafana)
+### Deploy Service Monitors
+Deploy the following service monitors to configure the Azure Managed Prometheus add-on to scrape Prometheus metrics from the Argo CD workload.
+
+> [!NOTE]
+> Specify the correct labels in `matchLabels` for the service monitors if the labels on your Argo CD services don't match the ones configured in the sample.
+
+```yaml
+apiVersion: azmonitoring.coreos.com/v1
+kind: ServiceMonitor
+metadata:
+ name: azmon-argocd-metrics
+spec:
+ labelLimit: 63
+ labelNameLengthLimit: 511
+ labelValueLengthLimit: 1023
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: argocd-metrics
+ namespaceSelector:
+ any: true
+ endpoints:
+ - port: metrics
+---
+apiVersion: azmonitoring.coreos.com/v1
+kind: ServiceMonitor
+metadata:
+ name: azmon-argocd-repo-server-metrics
+spec:
+ labelLimit: 63
+ labelNameLengthLimit: 511
+ labelValueLengthLimit: 1023
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: argocd-repo-server
+ namespaceSelector:
+ any: true
+ endpoints:
+ - port: metrics
+---
+apiVersion: azmonitoring.coreos.com/v1
+kind: ServiceMonitor
+metadata:
+ name: azmon-argocd-server-metrics
+spec:
+ labelLimit: 63
+ labelNameLengthLimit: 511
+ labelValueLengthLimit: 1023
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: argocd-server-metrics
+ namespaceSelector:
+ any: true
+ endpoints:
+ - port: metrics
+ ```
+
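+As a sketch, you could apply these manifests with kubectl, assuming you saved them to a local file (the file name is illustrative):
+
+```bash
+# Apply the service monitor manifests (file name is a placeholder).
+kubectl apply -f azmon-argocd-servicemonitors.yaml
+
+# Confirm the Azure Managed Prometheus service monitors exist (assumes the add-on installed its CRDs).
+kubectl get servicemonitors.azmonitoring.coreos.com --all-namespaces
+```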
+> [!NOTE]
+> If you want to configure any other service or pod monitors, please follow the instructions [here](prometheus-metrics-scrape-crd.md#create-a-pod-or-service-monitor).
+
+### Deploy Rules
+1. Download the template and parameter files
+
+ **Alerting Rules**
+ - [Template file](https://github.com/Azure/prometheus-collector/blob/main/Azure-ARM-templates/Workload-Rules/Argo/argocd-alerting-rules.json)
+ - [Parameter file](https://github.com/Azure/prometheus-collector/blob/main/Azure-ARM-templates/Workload-Rules/Alert-Rules-Parameters.json)
++
+2. Edit the following values in the parameter files. Retrieve the resource ID of the resources from the **JSON View** of their **Overview** page.
+
+ | Parameter | Value |
+ |:|:|
+ | `azureMonitorWorkspace` | Resource ID for the Azure Monitor workspace. Retrieve from the **JSON view** on the **Overview** page for the Azure Monitor workspace. |
+ | `location` | Location of the Azure Monitor workspace. Retrieve from the **JSON view** on the **Overview** page for the Azure Monitor workspace. |
+ | `clusterName` | Name of the AKS cluster. Retrieve from the **JSON view** on the **Overview** page for the cluster. |
+ | `actionGroupId` | Resource ID for the alert action group. Retrieve from the **JSON view** on the **Overview** page for the action group. Learn more about [action groups](../alerts/action-groups.md) |
+
+3. Deploy the template by using any standard methods for installing ARM templates (an Azure CLI sketch follows this list). For guidance, see [ARM template samples for Azure Monitor](../resource-manager-samples.md).
+
+4. Once deployed, you can view the rules in the Azure portal as described in - [Prometheus Alerts](../essentials/prometheus-rule-groups.md#view-prometheus-rule-groups)
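+
+The Azure CLI sketch referenced in step 3 (the resource group name and local file paths are placeholders, and they assume you downloaded the files in step 1):
+
+```azurecli-interactive
+# Deploy the downloaded alerting rule template with the edited parameter file.
+az deployment group create --resource-group MyResourceGroup --template-file argocd-alerting-rules.json --parameters @Alert-Rules-Parameters.json
+```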
+
+> [!Note]
+> Review the alert thresholds to make sure they suit your cluster/workloads, and update them accordingly.
+>
+> Please note that the above rules are not scoped to a cluster. If you would like to scope the rules to a specific cluster, see [Limiting rules to a specific cluster](../essentials/prometheus-rule-groups.md#limiting-rules-to-a-specific-cluster) for more details.
+>
+> Learn more about [Prometheus Alerts](../essentials/prometheus-rule-groups.md).
+>
+> If you want to use any other OSS Prometheus alerting/recording rules, use the [az-prom-rules-converter](https://aka.ms/az-prom-rules-converter) to create the Azure equivalent Prometheus rules.
++
+### Import the Grafana Dashboard
+
+To import the grafana dashboards using the ID or JSON, follow the instructions to [Import a dashboard from Grafana Labs](../../managed-grafan#import-a-grafana-dashboard). </br>
+
+[ArgoCD](https://grafana.com/grafana/dashboards/14584-argocd/)(ID-14584)
++
+### Troubleshooting
+After the service monitors are applied, to make sure that the service monitor targets are picked up by the add-on, follow the instructions [here](prometheus-metrics-troubleshoot.md#prometheus-interface).
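+
+As a quick spot check, one approach is to port-forward to the metrics add-on pod and review its targets page. The `kube-system` namespace and the `ama-metrics` pod name prefix are assumptions based on the managed add-on's defaults; see the linked article for the exact steps.
+
+```bash
+# Find the Azure Monitor metrics add-on pods (names are illustrative).
+kubectl get pods -n kube-system | grep ama-metrics
+
+# Forward the Prometheus UI port, then browse http://localhost:9090/targets.
+kubectl port-forward -n kube-system <ama-metrics-pod-name> 9090
+```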
++
azure-monitor Prometheus Elasticsearch Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-elasticsearch-integration.md
+
+ Title: Configure Elasticsearch integration for Prometheus metrics in Azure Monitor
+description: Describes how to configure Elasticsearch monitoring by using Prometheus metrics in Azure Monitor for a Kubernetes cluster.
+ Last updated : 3/19/2024++++
+# Elasticsearch
+Elasticsearch is the distributed search and analytics engine at the heart of the Elastic Stack. It's where the indexing, search, and analysis magic happens.
+This article describes how to configure Azure Managed Prometheus with Azure Kubernetes Service (AKS) to monitor Elasticsearch clusters by scraping Prometheus metrics.
+
+## Prerequisites
+- Elasticsearch cluster running on AKS
+- Azure Managed Prometheus enabled on the AKS cluster - [Enable Azure Managed Prometheus on AKS](kubernetes-monitoring-enable.md#enable-prometheus-and-grafana)
+### Install Elasticsearch Exporter
+Install the [Elasticsearch exporter](https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus-elasticsearch-exporter) using the helm chart.
+
+```bash
+helm install azmon-elasticsearch-exporter --version 5.7.0 prometheus-community/prometheus-elasticsearch-exporter --set es.uri="https://username:password@elasticsearch-service.namespace:9200" --set podMonitor.enabled=true --set podMonitor.apiVersion=azmonitoring.coreos.com/v1
+```
+
+> [!NOTE]
+> Managed prometheus pod/service monitor configuration with helm chart installation is only supported with the helm chart version >=5.7.0.
+>
+> The [prometheus-elasticsearch-exporter](https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus-elasticsearch-exporter) helm chart can be configured with [values](https://github.com/prometheus-community/helm-charts/blob/main/charts/prometheus-elasticsearch-exporter/values.yaml) yaml.
+Specify the right server address where the Elasticsearch server can be reached. Based on your configuration, set the username, password, or certificates used to authenticate with the Elasticsearch server. Set the address where Elasticsearch is reachable by using the `es.uri` argument.
+>
+> You could also use a service monitor instead of a pod monitor by using the **--set serviceMonitor.enabled=true** Helm chart parameter, as sketched after this note. Make sure to use the API version supported by Azure Managed Prometheus by using the parameter **serviceMonitor.apiVersion=azmonitoring.coreos.com/v1**.
+>
+> If you want to configure any other service or pod monitors, please follow the instructions [here](prometheus-metrics-scrape-crd.md#create-a-pod-or-service-monitor).
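+
+As a sketch, a service-monitor-based install of the same chart (referenced in the note above) might look like the following; the Elasticsearch URI, user name, and password are placeholders:
+
+```bash
+# Illustrative: install the exporter with a service monitor instead of a pod monitor.
+helm install azmon-elasticsearch-exporter --version 5.7.0 prometheus-community/prometheus-elasticsearch-exporter \
+  --set es.uri="https://username:password@elasticsearch-service.namespace:9200" \
+  --set serviceMonitor.enabled=true \
+  --set serviceMonitor.apiVersion=azmonitoring.coreos.com/v1
+```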
++
+### Deploy Rules
+1. Download the template and parameter files
+
+ **Recording Rules**
+ - [Template file](https://github.com/Azure/prometheus-collector/blob/main/Azure-ARM-templates/Workload-Rules/ElasticSearch/elasticsearch-recording-rules.json)
+ - [Parameter file](https://github.com/Azure/prometheus-collector/blob/main/Azure-ARM-templates/Workload-Rules/Recording-Rules-Parameters.json)
+
+ **Alerting Rules**
+ - [Template file](https://github.com/Azure/prometheus-collector/blob/main/Azure-ARM-templates/Workload-Rules/ElasticSearch/elasticsearch-alerting-rules.json)
+ - [Parameter file](https://github.com/Azure/prometheus-collector/blob/main/Azure-ARM-templates/Workload-Rules/Alert-Rules-Parameters.json)
++
+2. Edit the following values in the parameter files. Retrieve the resource ID of the resources from the **JSON View** of their **Overview** page.
+
+ | Parameter | Value |
+ |:|:|
+ | `azureMonitorWorkspace` | Resource ID for the Azure Monitor workspace. Retrieve from the **JSON view** on the **Overview** page for the Azure Monitor workspace. |
+ | `location` | Location of the Azure Monitor workspace. Retrieve from the **JSON view** on the **Overview** page for the Azure Monitor workspace. |
+ | `clusterName` | Name of the AKS cluster. Retrieve from the **JSON view** on the **Overview** page for the cluster. |
+ | `actionGroupId` | Resource ID for the alert action group. Retrieve from the **JSON view** on the **Overview** page for the action group. Learn more about [action groups](../alerts/action-groups.md) |
+
+3. Deploy the template by using any standard methods for installing ARM templates. For guidance, see [ARM template samples for Azure Monitor](../resource-manager-samples.md).
+
+4. Once deployed, you can view the rules in the Azure portal as described in - [Prometheus Alerts](../essentials/prometheus-rule-groups.md#view-prometheus-rule-groups)
+
+> [!Note]
+> Review the alert thresholds to make sure they suit your cluster/workloads, and update them accordingly.
+>
+> Please note that the above rules are not scoped to a cluster. If you would like to scope the rules to a specific cluster, see [Limiting rules to a specific cluster](../essentials/prometheus-rule-groups.md#limiting-rules-to-a-specific-cluster) for more details.
+>
+> Learn more about [Prometheus Alerts](../essentials/prometheus-rule-groups.md).
+>
+> If you want to use any other OSS Prometheus alerting/recording rules, use the [az-prom-rules-converter](https://aka.ms/az-prom-rules-converter) to create the Azure equivalent Prometheus rules.
+
+### Import the Grafana Dashboard
+
+Follow the instructions on [Import a dashboard from Grafana Labs](../../managed-grafan#import-a-grafana-dashboard) to import the grafana dashboards using the ID or JSON.</br>
+
+[Elastic Search Overview](https://github.com/grafana/jsonnet-libs/blob/master/elasticsearch-mixin/dashboards/elasticsearch-overview.json)(ID-2322)</br>
+[Elasticsearch Exporter Quickstart and Dashboard](https://grafana.com/grafana/dashboards/14191-elasticsearch-overview/)(ID-14191)
++
+### Troubleshooting
+After the service monitors are applied, to make sure that the service monitor targets are picked up by the add-on, follow the instructions [here](prometheus-metrics-troubleshoot.md#prometheus-interface).
+
azure-monitor Prometheus Kafka Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-kafka-integration.md
+
+ Title: Configure Kafka integration for Prometheus metrics in Azure Monitor
+description: Describes how to configure Kafka monitoring by using Prometheus metrics in Azure Monitor for a Kubernetes cluster.
+ Last updated : 3/19/2024++++
+# Apache Kafka
+Apache Kafka is an open-source distributed event streaming platform used by high-performance data pipelines, streaming analytics, data integration, and mission-critical applications.
+This article describes how to configure Azure Managed Prometheus with Azure Kubernetes Service (AKS) to monitor Kafka clusters by scraping Prometheus metrics.
+
+## Prerequisites
+- Kafka cluster running on AKS
+- Azure Managed Prometheus enabled on the AKS cluster - [Enable Azure Managed Prometheus on AKS](kubernetes-monitoring-enable.md#enable-prometheus-and-grafana)
+### Install Kafka Exporter
+Install the [Kafka Exporter](https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus-kafka-exporter) using the helm chart.
+
+```bash
+helm install azmon-kafka-exporter --namespace=azmon-kafka-exporter --create-namespace --version 2.10.0 prometheus-community/prometheus-kafka-exporter --set kafkaServer="{kafka-server.namespace.svc:9092,.....}" --set prometheus.serviceMonitor.enabled=true --set prometheus.serviceMonitor.apiVersion=azmonitoring.coreos.com/v1
+```
+
+> [!NOTE]
+> Managed prometheus pod/service monitor configuration with helm chart installation is only supported with the helm chart version >=2.10.0.
+>
+> The [prometheus kafka exporter](https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus-kafka-exporter) helm chart can be configured with [values](https://github.com/prometheus-community/helm-charts/blob/main/charts/prometheus-kafka-exporter/values.yaml) yaml.
+Specify the right server addresses where the Kafka servers can be reached. Set the server addresses by using the `kafkaServer` argument.
+>
+> If you want to configure any other service or pod monitors, please follow the instructions [here](prometheus-metrics-scrape-crd.md#create-a-pod-or-service-monitor).
++
+### Import the Grafana Dashboard
+
+To import the Grafana Dashboards using the ID or JSON, follow the instructions to [Import a dashboard from Grafana Labs](../../managed-grafan#import-a-grafana-dashboard). </br>
+
+[Kafka Exporter Grafana Dashboard](https://grafana.com/grafana/dashboards/7589-kafka-exporter-overview/)(ID-7589)
+
+### Deploy Rules
+1. Download the template and parameter files
+
+ **Alerting Rules**
+ - [Template file](https://github.com/Azure/prometheus-collector/blob/main/Azure-ARM-templates/Workload-Rules/Kafka/kafka-alerting-rules.json)
+ - [Parameter file](https://github.com/Azure/prometheus-collector/blob/main/Azure-ARM-templates/Workload-Rules/Alert-Rules-Parameters.json)
++
+2. Edit the following values in the parameter files. Retrieve the resource ID of the resources from the **JSON View** of their **Overview** page.
+
+ | Parameter | Value |
+ |:|:|
+ | `azureMonitorWorkspace` | Resource ID for the Azure Monitor workspace. Retrieve from the **JSON view** on the **Overview** page for the Azure Monitor workspace. |
+ | `location` | Location of the Azure Monitor workspace. Retrieve from the **JSON view** on the **Overview** page for the Azure Monitor workspace. |
+ | `clusterName` | Name of the AKS cluster. Retrieve from the **JSON view** on the **Overview** page for the cluster. |
+ | `actionGroupId` | Resource ID for the alert action group. Retrieve from the **JSON view** on the **Overview** page for the action group. Learn more about [action groups](../alerts/action-groups.md) |
+
+3. Deploy the template by using any standard methods for installing ARM templates. For guidance, see [ARM template samples for Azure Monitor](../resource-manager-samples.md).
+
+4. Once deployed, you can view the rules in the Azure portal as described in - [Prometheus Alerts](../essentials/prometheus-rule-groups.md#view-prometheus-rule-groups)
+
+> [!Note]
+> Review the alert thresholds to make sure they suit your cluster/workloads, and update them accordingly.
+>
+> Please note that the above rules are not scoped to a cluster. If you would like to scope the rules to a specific cluster, see [Limiting rules to a specific cluster](../essentials/prometheus-rule-groups.md#limiting-rules-to-a-specific-cluster) for more details.
+>
+> Learn more about [Prometheus Alerts](../essentials/prometheus-rule-groups.md).
+>
+> If you want to use any other OSS Prometheus alerting/recording rules, use the [az-prom-rules-converter](https://aka.ms/az-prom-rules-converter) to create the Azure equivalent Prometheus rules.
++
+### More jmx_exporter metrics using strimzi
+If you're using the [Strimzi operator](https://github.com/strimzi/strimzi-kafka-operator.git) to deploy the Kafka clusters, deploy the following pod monitors to get more jmx_exporter metrics.
+> [!Note]
+> Metrics need to be exposed by the kafka cluster deployments like the examples [here](https://github.com/strimzi/strimzi-kafka-operator/tree/main/examples/metrics). Refer to the kafka-.*-metrics.yaml files to configure metrics to be exposed.
+>
+>The pod monitors here also assume that the Kafka workload is deployed in the 'kafka' namespace. Update it accordingly if the workloads are deployed in another namespace (a quick way to check is sketched after this note).
+
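+A quick, illustrative way to confirm which namespace your Strimzi-managed Kafka pods run in (it uses the `strimzi.io/kind` label that the pod monitors below also select on):
+
+```bash
+# List Strimzi-managed Kafka pods across all namespaces to confirm the namespace name.
+kubectl get pods --all-namespaces -l strimzi.io/kind=Kafka
+```
+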
+```yaml
+apiVersion: azmonitoring.coreos.com/v1
+kind: PodMonitor
+metadata:
+ name: azmon-cluster-operator-metrics
+ labels:
+ app: strimzi
+spec:
+ selector:
+ matchLabels:
+ strimzi.io/kind: cluster-operator
+ namespaceSelector:
+ matchNames:
+ - kafka
+ podMetricsEndpoints:
+ - path: /metrics
+ port: http
+---
+apiVersion: azmonitoring.coreos.com/v1
+kind: PodMonitor
+metadata:
+ name: azmon-entity-operator-metrics
+ labels:
+ app: strimzi
+spec:
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: entity-operator
+ namespaceSelector:
+ matchNames:
+ - kafka
+ podMetricsEndpoints:
+ - path: /metrics
+ port: healthcheck
+---
+apiVersion: azmonitoring.coreos.com/v1
+kind: PodMonitor
+metadata:
+ name: azmon-bridge-metrics
+ labels:
+ app: strimzi
+spec:
+ selector:
+ matchLabels:
+ strimzi.io/kind: KafkaBridge
+ namespaceSelector:
+ matchNames:
+ - kafka
+ podMetricsEndpoints:
+ - path: /metrics
+ port: rest-api
+---
+apiVersion: azmonitoring.coreos.com/v1
+kind: PodMonitor
+metadata:
+ name: azmon-kafka-resources-metrics
+ labels:
+ app: strimzi
+spec:
+ selector:
+ matchExpressions:
+ - key: "strimzi.io/kind"
+ operator: In
+ values: ["Kafka", "KafkaConnect", "KafkaMirrorMaker", "KafkaMirrorMaker2"]
+ namespaceSelector:
+ matchNames:
+ - kafka
+ podMetricsEndpoints:
+ - path: /metrics
+ port: tcp-prometheus
+ relabelings:
+ - separator: ;
+ regex: __meta_kubernetes_pod_label_(strimzi_io_.+)
+ replacement: $1
+ action: labelmap
+ - sourceLabels: [__meta_kubernetes_namespace]
+ separator: ;
+ regex: (.*)
+ targetLabel: namespace
+ replacement: $1
+ action: replace
+ - sourceLabels: [__meta_kubernetes_pod_name]
+ separator: ;
+ regex: (.*)
+ targetLabel: kubernetes_pod_name
+ replacement: $1
+ action: replace
+ - sourceLabels: [__meta_kubernetes_pod_node_name]
+ separator: ;
+ regex: (.*)
+ targetLabel: node_name
+ replacement: $1
+ action: replace
+ - sourceLabels: [__meta_kubernetes_pod_host_ip]
+ separator: ;
+ regex: (.*)
+ targetLabel: node_ip
+ replacement: $1
+ action: replace
+```
+
+#### Alerts with strimzi
+A rich set of alerts based on Strimzi metrics can also be configured by referring to the [examples](https://github.com/strimzi/strimzi-kafka-operator/blob/main/examples/metrics/prometheus-install/prometheus-rules.yaml).
+
+> [!NOTE]
+> If you use any other way of exposing the jmx_exporter on your Kafka cluster, follow the instructions [here](prometheus-metrics-scrape-crd.md) to configure the pod or service monitors accordingly.
+
+### Grafana Dashboards for more jmx metrics with strimzi
+See [grafana-dashboards-for-strimzi](https://github.com/strimzi/strimzi-kafka-operator/tree/main/examples/metrics/grafana-dashboards) to view dashboards for metrics exposed by the Strimzi operator.
++
+### Troubleshooting
+After the service monitors or pod monitors are applied, to make sure that the targets are picked up by the add-on, follow the instructions [here](prometheus-metrics-troubleshoot.md#prometheus-interface).
+
azure-monitor Scom Managed Instance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/scom-managed-instance-overview.md
Previously updated : 11/15/2023 Last updated : 04/04/2024
azure-portal Set Preferences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/set-preferences.md
Title: Manage Azure portal settings and preferences description: Change Azure portal settings such as default subscription/directory, timeouts, menu mode, contrast, theme, notifications, language/region and more. Previously updated : 03/14/2024 Last updated : 04/04/2024
To create a new filter, select **Create a filter**. You can create up to ten fil
Each filter must have a unique name that is between 8 and 50 characters long and contains only letters, numbers, and hyphens. - After you've named your filter, enter at least one condition. In the **Filter type** field, select **Management group**, **Subscription ID**, **Subscription name**, or **Subscription state**. Then select an operator and the value to filter on. :::image type="content" source="media/set-preferences/settings-create-filter.png" alt-text="Screenshot showing options for Create a filter.":::
To delete a filter, select the trash can icon in that filter's row. You can't de
## Appearance + startup views
-The **Appearance + startup views** pane has two sections. The **Appearance** section lets you choose menu behavior, your color theme, and whether to use a high-contrast theme.
+The **Appearance + startup views** pane has two sections. The **Appearance** section lets you choose menu behavior, your color theme, and whether to use a high-contrast theme.
:::image type="content" source="media/set-preferences/azure-portal-settings-appearance.png" alt-text="Screenshot showing the Appearance section of Appearance + startup views.":::
Select an option to control the way dates, time, numbers, and currency are shown
The options shown in the **Regional format** drop-down list correspond to the **Language** options. For example, if you select **English** as your language, and then select **English (United States)** as the regional format, currency is shown in U.S. dollars. If you select **English** as your language and then select **English (Europe)** as the regional format, currency is shown in euros. You can also select a regional format that is different from your language selection.
-Once you have made the desired changes to your language and regional format settings, select **Apply**.
+After making the desired changes to your language and regional format settings, select **Apply**.
## My information
Once you have made the desired changes to your language and regional format sett
### Email setting
-The email address you provide here will be used if we need to contact you for updates on Azure services, billing, support, or security issues. You can change this address at any time.
+The email address you provide here is used when we need to contact you for updates on Azure services, billing, support, or security issues. You can change this address at any time.
-Here, you can also indicate whether you'd like to receive additional emails about Microsoft Azure and other Microsoft products and services.
+You can also indicate whether you'd like to receive additional emails about Microsoft Azure and other Microsoft products and services. If you select the checkbox to receive these emails, you'll be prompted to select the country/region in which you'll receive these emails. Note that certain countries/regions may not be available. You only need to specify a country/region if you want to receive these additional emails; selecting a country/region isn't required in order to receive emails about your Azure account at the address you provide in this section.
### Portal personalization
Due to the dynamic nature of user settings and risk of data corruption, you can'
#### Restore default settings
-If you've made changes to the Azure portal settings and want to discard them, select **Restore default settings** from the top of the **My information** pane. You'll be prompted to confirm this action. When you do so, any changes you've made to your Azure portal settings will be lost. This option doesn't affect dashboard customizations.
+If you've made changes to the Azure portal settings and want to discard them, select **Restore default settings** from the top of the **My information** pane. You'll be prompted to confirm this action. If you do so, any changes you've made to your Azure portal settings are lost. This option doesn't affect dashboard customizations.
#### Delete user settings and dashboards
azure-resource-manager Bicep Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-resource.md
The possible uses of `list*` are shown in the following table.
| Microsoft.BotService/botServices/channels | [listChannelWithKeys](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/botservice/resource-manager/Microsoft.BotService/stable/2020-06-02/botservice.json#L553) | | Microsoft.Cache/redis | [listKeys](/rest/api/redis/redis/list-keys) | | Microsoft.CognitiveServices/accounts | [listKeys](/rest/api/aiservices/accountmanagement/accounts/list-keys) |
-| Microsoft.ContainerRegistry/registries | [listBuildSourceUploadUrl](/rest/api/containerregistry/registries%20(tasks)/get-build-source-upload-url) |
+| Microsoft.ContainerRegistry/registries | listBuildSourceUploadUrl |
| Microsoft.ContainerRegistry/registries | [listCredentials](/rest/api/containerregistry/registries/listcredentials) | | Microsoft.ContainerRegistry/registries | [listUsages](/rest/api/containerregistry/registries/listusages) | | Microsoft.ContainerRegistry/registries/agentpools | listQueueStatus |
azure-resource-manager Template Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-resource.md
The possible uses of `list*` are shown in the following table.
| Microsoft.BotService/botServices/channels | [listChannelWithKeys](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/botservice/resource-manager/Microsoft.BotService/stable/2020-06-02/botservice.json#L553) | | Microsoft.Cache/redis | [listKeys](/rest/api/redis/redis/list-keys) | | Microsoft.CognitiveServices/accounts | [listKeys](/rest/api/aiservices/accountmanagement/accounts/list-keys) |
-| Microsoft.ContainerRegistry/registries | [listBuildSourceUploadUrl](/rest/api/containerregistry/registries%20(tasks)/get-build-source-upload-url) |
+| Microsoft.ContainerRegistry/registries | listBuildSourceUploadUrl |
| Microsoft.ContainerRegistry/registries | [listCredentials](/rest/api/containerregistry/registries/listcredentials) | | Microsoft.ContainerRegistry/registries | [listUsages](/rest/api/containerregistry/registries/listusages) | | Microsoft.ContainerRegistry/registries/agentpools | listQueueStatus |
azure-vmware Architecture Stretched Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/architecture-stretched-clusters.md
Azure VMware Solution stretched clusters are available in the following regions:
- UK South (on AV36, and AV36P) - West Europe (on AV36, and AV36P) -- Germany West Central (on AV36)
+- Germany West Central (on AV36, and AV36P)
- Australia East (on AV36P) ## Storage policies supported
azure-vmware Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-customer-managed-keys.md
Title: Configure customer-managed key encryption at rest in Azure VMware Solution
-description: Learn how to encrypt data in Azure VMware Solution with customer-managed keys using Azure Key Vault.
+ Title: Configure CMK encryption at rest in Azure VMware Solution
+description: Learn how to encrypt data in Azure VMware Solution with customer-managed keys by using Azure Key Vault.
Last updated 12/05/2023
Last updated 12/05/2023
# Configure customer-managed key encryption at rest in Azure VMware Solution
-This article illustrates how to encrypt VMware vSAN Key Encryption Keys (KEKs) with customer-managed keys (CMKs) managed by customer-owned Azure Key Vault.
+This article illustrates how to encrypt VMware vSAN key encryption keys (KEKs) with customer-managed keys (CMKs) managed by a customer-owned Azure Key Vault instance.
-When CMK encryptions are enabled on your Azure VMware Solution private cloud, Azure VMware Solution uses the CMK from your key vault to encrypt the vSAN KEKs. Each ESXi host that participates in the vSAN cluster uses randomly generated Disk Encryption Keys (DEKs) that ESXi uses to encrypt disk data at rest. vSAN encrypts all DEKs with a KEK provided by Azure VMware Solution key management system (KMS). Azure VMware Solution private cloud and Azure Key Vault don't need to be in the same subscription.
+When CMK encryptions are enabled on your Azure VMware Solution private cloud, Azure VMware Solution uses the CMK from your key vault to encrypt the vSAN KEKs. Each ESXi host that participates in the vSAN cluster uses randomly generated disk encryption keys (DEKs) that ESXi uses to encrypt disk data at rest. vSAN encrypts all DEKs with a KEK provided by the Azure VMware Solution key management system. The Azure VMware Solution private cloud and the key vault don't need to be in the same subscription.
-When managing your own encryption keys, you can do the following actions:
+When you manage your own encryption keys, you can:
- Control Azure access to vSAN keys. - Centrally manage the lifecycle of CMKs.-- Revoke Azure from accessing the KEK.
+- Revoke Azure access to the KEK.
-The Customer-managed keys (CMKs) feature supports the following key types. See the following key types, shown by key type and key size.
+The CMKs feature supports the following key types and their key sizes:
-- RSA: 2048, 3072, 4096-- RSA-HSM: 2048, 3072, 4096
+- **RSA**: 2048, 3072, 4096
+- **RSA-HSM**: 2048, 3072, 4096
## Topology
-The following diagram shows how Azure VMware Solution uses Microsoft Entra ID and a key vault to deliver the customer-managed key.
+The following diagram shows how Azure VMware Solution uses Microsoft Entra ID and a key vault to deliver the CMK.
## Prerequisites
-Before you begin to enable customer-managed key (CMK) functionality, ensure the following listed requirements are met:
+Before you begin to enable CMK functionality, ensure that the following requirements are met:
-- You need an Azure Key Vault to use CMK functionality. If you don't have an Azure Key Vault, you can create one using [Quickstart: Create a key vault using the Azure portal](../key-vault/general/quick-create-portal.md).-- If you enabled restricted access to key vault, you need to allow Microsoft Trusted Services to bypass the Azure Key Vault firewall. Go to [Configure Azure Key Vault networking settings](../key-vault/general/how-to-azure-key-vault-network-security.md?tabs=azure-portal) to learn more.
+- You need a key vault to use CMK functionality. If you don't have a key vault, you can create one by using [Quickstart: Create a key vault using the Azure portal](../key-vault/general/quick-create-portal.md).
+- If you enabled restricted access to Key Vault, you need to allow Microsoft Trusted Services to bypass the Key Vault firewall. Go to [Configure Azure Key Vault networking settings](../key-vault/general/how-to-azure-key-vault-network-security.md?tabs=azure-portal) to learn more.
>[!NOTE]
- >After firewall rules are in effect, users can only perform Key Vault [data plane](../key-vault/general/security-features.md#privileged-access) operations when their requests originate from allowed VMs or IPv4 address ranges. This also applies to accessing key vault from the Azure portal. This also affects the key vault Picker by Azure VMware Solution. Users may be able to see a list of key vaults, but not list keys, if firewall rules prevent their client machine or user does not have list permission in key vault.
+ >After firewall rules are in effect, users can only perform Key Vault [data plane](../key-vault/general/security-features.md#privileged-access) operations when their requests originate from allowed VMs or IPv4 address ranges. This restriction also applies to accessing Key Vault from the Azure portal. It also affects the Key Vault Picker by Azure VMware Solution. Users might be able to see a list of key vaults, but not list keys, if firewall rules block their client machine or if the user doesn't have list permission in Key Vault.
-- Enable **System Assigned identity** on your Azure VMware Solution private cloud if you didn't enable it during software-defined data center (SDDC) provisioning.
+- Enable System Assigned identity on your Azure VMware Solution private cloud if you didn't enable it during software-defined datacenter (SDDC) provisioning.
# [Portal](#tab/azure-portal)
- Use the following steps to enable System Assigned identity:
+ To enable System Assigned identity:
- 1. Sign in to Azure portal.
+ 1. Sign in to the Azure portal.
- 2. Navigate to **Azure VMware Solution** and locate your SDDC.
+ 1. Go to **Azure VMware Solution** and locate your SDDC.
- 3. From the left navigation, open **Manage** and select **Identity**.
+ 1. On the leftmost pane, open **Manage** and select **Identity**.
- 4. In **System Assigned**, check **Enable** and select **Save**.
- 1. **System Assigned identity** should now be enabled.
+ 1. In **System Assigned**, select **Enable** > **Save**.
+ **System Assigned identity** should now be enabled.
- Once System Assigned identity is enabled, you see the tab for **Object ID**. Make note of the Object ID for use later.
+ After System Assigned identity is enabled, you see the tab for **Object ID**. Make a note of the Object ID for use later.
# [Azure CLI](#tab/azure-cli)
- Get the private cloud resource ID and save it to a variable. You'll need this value in the next step to update resource with system assigned identity.
+ Get the private cloud resource ID and save it to a variable. You need this value in the next step to update the resource with System Assigned identity.
 ```azurecli-interactive
 privateCloudId=$(az vmware private-cloud show --name $privateCloudName --resource-group $resourceGroupName --query id | tr -d '"')
 ```
- To configure the system-assigned identity on Azure VMware Solution private cloud with Azure CLI, call [az-resource-update](/cli/azure/resource?view=azure-cli-latest#az-resource-update&preserve-view=true) and provide the variable for the private cloud resource ID that you previously retrieved.
+ To configure the system-assigned identity on Azure VMware Solution private cloud with the Azure CLI, call [az-resource-update](/cli/azure/resource?view=azure-cli-latest#az-resource-update&preserve-view=true) and provide the variable for the private cloud resource ID that you previously retrieved.
 ```azurecli-interactive
 az resource update --ids $privateCloudId --set identity.type=SystemAssigned --api-version "2021-12-01"
 ```
-- Configure the key vault access policy to grant permissions to the managed identity, You use it to authorize access to the key vault.
+- Configure the key vault access policy to grant permissions to the managed identity. You use it to authorize access to the key vault.
# [Portal](#tab/azure-portal)
- 1. Sign in to Azure portal.
- 1. Navigate to **Key vaults** and locate the key vault you want to use.
- 1. From the left navigation, underΓÇ»**Settings**, selectΓÇ»**Access policies**.
- 1. InΓÇ»**Access policies**, selectΓÇ»**Add Access Policy**.
- 1. From the Key Permissions drop-down, check: **Select**, **Get**, **Wrap Key**, and **Unwrap Key**.
- 1. Under Select principal, select **None selected**. A new **Principal** window with a search box opens.
- 1. In the search box, paste the **Object ID** from the previous step, or search the private cloud name you want to use. Choose **Select** when you're done.
+ 1. Sign in to the Azure portal.
+ 1. Go to **Key vaults** and locate the key vault you want to use.
+ 1. On the leftmost pane, underΓÇ»**Settings**, selectΓÇ»**Access policies**.
+ 1. InΓÇ»**Access policies**, selectΓÇ»**Add Access Policy** and then:
+ 1. In the **Key Permissions** dropdown, choose **Select**, **Get**, **Wrap Key**, and **Unwrap Key**.
+ 1. Under **Select principal**, select **None selected**. A new **Principal** window with a search box opens.
+ 1. In the search box, paste the **Object ID** from the previous step. Or search for the private cloud name you want to use. Choose **Select** when you're finished.
1. Select **ADD**.
- 1. Verify the new policy appears under the current policy's Application section.
+ 1. Verify that the new policy appears under the current policy's Application section.
1. Select **Save** to commit changes. # [Azure CLI](#tab/azure-cli)
- Get the principal ID for the system-assigned managed identity and save it to a variable. You'll need this value in the next step to create the key vault access policy.
+ Get the principal ID for the system-assigned managed identity and save it to a variable. You need this value in the next step to create the key vault access policy.
 ```azurecli-interactive
 principalId=$(az vmware private-cloud show --name $privateCloudName --resource-group $resourceGroupName --query identity.principalId | tr -d '"')
 ```
- To configure the key vault access policy with Azure CLI, call [az keyvault set-policy](/cli/azure/keyvault#az-keyvault-set-policy) and provide the variable for the principal ID that you previously retrieved for the managed identity.
+ To configure the key vault access policy with the Azure CLI, call [az keyvault set-policy](/cli/azure/keyvault#az-keyvault-set-policy). Provide the variable for the principal ID that you previously retrieved for the managed identity.
 ```azurecli-interactive
 az keyvault set-policy --name $keyVault --resource-group $resourceGroupName --object-id $principalId --key-permissions get unwrapKey wrapKey
 ```
- Learn more about how to [Assign an Azure Key Vault access policy](../key-vault/general/assign-access-policy.md?tabs=azure-portal).
-
+ Learn more about how to [assign a Key Vault access policy](../key-vault/general/assign-access-policy.md?tabs=azure-portal).
## Customer-managed key version lifecycle
-You can change the customer-managed key (CMK) by creating a new version of the key. The creation of a new version doesn't interrupt the virtual machine (VM) workflow.
+You can change the CMK by creating a new version of the key. The creation of a new version doesn't interrupt the virtual machine (VM) workflow.
-In Azure VMware Solution, CMK key version rotation depends on the key selection setting you chose during CMK setup.
+In Azure VMware Solution, CMK key version rotation depends on the key selection setting that you chose during CMK setup.
-**Key selection setting 1**
+### Key selection setting 1
-A customer enables CMK encryption without supplying a specific key version for CMK. Azure VMware Solution selects the latest key version for CMK from the customer's key vault to encrypt the vSAN Key Encryption Keys (KEKs). Azure VMware Solution tracks the CMK for version rotation. When a new version of the CMK key in Azure Key Vault is created, it gets captured by Azure VMware Solution automatically to encrypt vSAN KEKs.
+A customer enables CMK encryption without supplying a specific key version for CMK. Azure VMware Solution selects the latest key version for CMK from the customer's key vault to encrypt the vSAN KEKs. Azure VMware Solution tracks the CMK for version rotation. When a new version of the CMK key in Key Vault is created, it gets captured by Azure VMware Solution automatically to encrypt vSAN KEKs.
>[!NOTE]
->Azure VMware Solution can take up to ten minutes to detect a new auto-rotated key version.
+>Azure VMware Solution can take up to 10 minutes to detect a new autorotated key version.
-**Key selection setting 2**
+### Key selection setting 2
A customer can enable CMK encryption for a specified CMK key version to supply the full key version URI under the **Enter Key from URI** option. When the customer's current key expires, they need to extend the CMK key expiration or disable CMK.
A customer can enable CMK encryption for a specified CMK key version to supply t
System-assigned identity is restricted to one per resource and is tied to the lifecycle of the resource. You can grant permissions to the managed identity on Azure resource. The managed identity is authenticated with Microsoft Entra ID, so you don't have to store any credentials in code. >[!IMPORTANT]
-> Ensure that key vault is in the same region as the Azure VMware Solution private cloud.
+> Ensure that Key Vault is in the same region as the Azure VMware Solution private cloud.
# [Portal](#tab/azure-portal)
-Navigate to your **Azure Key Vault** and provide access to the SDDC on Azure Key Vault using the Principal ID captured in the **Enable MSI** tab.
+Go to your Key Vault instance and provide access to the SDDC on Key Vault by using the principal ID captured on the **Enable MSI** tab.
-1. From your Azure VMware Solution private cloud, under **Manage**, select **Encryption**, then select **Customer-managed keys (CMK)**.
-1. CMK provides two options for **Key Selection** from Azure Key Vault.
+1. From your Azure VMware Solution private cloud, under **Manage**, select **Encryption**. Then select **Customer-managed keys (CMKs)**.
+1. CMK provides two options for **Key Selection** from Key Vault:
- **Option 1**
+ Option 1:
- 1. Under **Encryption key**, choose the **select from Key Vault** button.
- 1. Select the encryption type, then the **Select Key Vault and key** option.
- 1. Select the **Key Vault and key** from the drop-down, then choose **Select**.
+ 1. Under **Encryption key**, choose **select from Key Vault**.
+ 1. Select the encryption type. Then select the **Select Key Vault and key** option.
+ 1. Select the **Key Vault and key** from the dropdown. Then choose **Select**.
- **Option 2**
+ Option 2:
- 1. Under **Encryption key**, choose the **Enter key from URI** button.
- 1. Enter a specific Key URI in the **Key URI** box.
+ 1. Under **Encryption key**, select **Enter key from URI**.
+ 1. Enter a specific Key URI in the **Key URI** box.
> [!IMPORTANT]
- > If you want to select a specific key version instead of the automatically selected latest version, you'll need to specify the key URI with key version. This will affect the CMK key version life cycle.
+ > If you want to select a specific key version instead of the automatically selected latest version, you need to specify the Key URI with the key version. This choice affects the CMK key version lifecycle.
- > [!NOTE]
- > The Azure key vault Managed HSM option is only supported with the Key URI option.
+ The Key Vault Managed Hardware Security Module (HSM) option is only supported with the Key URI option.
1. Select **Save** to grant access to the resource. # [Azure CLI](#tab/azure-cli)
-To configure customer-managed keys for an Azure VMware Solution private cloud with automatic updating of the key version, call [az vmware private-cloud add-cmk-encryption](/cli/azure/vmware/private-cloud?view=azure-cli-latest#az-vmware-private-cloud-add-cmk-encryption&preserve-view=true). Get the key vault URL and save it to a variable. You'll need this value in the next step to enable CMK.
+To configure CMKs for an Azure VMware Solution private cloud with automatic updating of the key version, call [az vmware private-cloud add-cmk-encryption](/cli/azure/vmware/private-cloud?view=azure-cli-latest#az-vmware-private-cloud-add-cmk-encryption&preserve-view=true). Get the key vault URL and save it to a variable. You need this value in the next step to enable CMK.
```azurecli-interactive keyVaultUrl=$(az keyvault show --name <keyvault_name> --resource-group <resource_group_name> --query properties.vaultUri --output tsv)
keyVaultUrl =$(az keyvault show --name <keyvault_name> --resource-group <resourc
The following options 1 and 2 demonstrate the difference between not providing a specific key version and providing one.
-**Option 1**
+### Option 1
This example shows the customer not providing a specific key version.
This example shows the customer not providing a specific key version.
az vmware private-cloud add-cmk-encryption --private-cloud <private_cloud_name> --resource-group <resource_group_name> --enc-kv-url $keyVaultUrl --enc-kv-key-name <keyvault_key_name> ```
-**Option 2**
+### Option 2
-Supply key version as argument to use customer-managed keys with a specific key version, same as previously mentioned in the Azure portal option 2. The following example shows the customer providing a specific key version.
+Supply the key version as an argument to use CMKs with a specific key version, as previously mentioned in the Azure portal option 2. The following example shows the customer providing a specific key version.
```azurecli-interactive az vmware private-cloud add-cmk-encryption --private-cloud <private_cloud_name> --resource-group <resource_group_name> --enc-kv-url $keyVaultUrl --enc-kv-key-name <keyvault_key_name> --enc-kv-key-version <keyvault_key_keyVersion> ```
-## Change from customer-managed key to Microsoft managed key
+## Change from a customer-managed key to a Microsoft managed key
-When a customer wants to change from a customer-managed key (CMK) to a Microsoft managed key (MMK), it doesn't interrupt VM workload. To make the change from CMK to MMK, use the following steps.
+When a customer wants to change from a CMK to a Microsoft-managed key (MMK), the VM workload isn't interrupted. To make the change from a CMK to an MMK:
-1. Select **Encryption**, located under **Manage** from your Azure VMware Solution private cloud.
-2. Select **Microsoft-managed keys (MMK)**.
-3. Select **Save**.
+1. Under **Manage**, select **Encryption** from your Azure VMware Solution private cloud.
+1. Select **Microsoft-managed keys (MMK)**.
+1. Select **Save**.
## Limitations
-The Azure Key Vault must be configured as recoverable.
+Key Vault must be configured as recoverable. You need to:
-- Configure Azure Key Vault with the **Soft Delete** option.
+- Configure Key Vault with the **Soft Delete** option.
- Turn on **Purge Protection** to guard against force deletion of the secret vault, even after soft delete. Updating CMK settings doesn't work if the key is expired or the Azure VMware Solution access key was revoked. ## Troubleshooting and best practices
-**Accidental deletion of a key**
+Here are troubleshooting tips for some common issues you might encounter and also best practices to follow.
-If you accidentally delete your key in the Azure Key Vault, private cloud isn't able to perform some cluster modification operations. To avoid this scenario, we recommend that you keep soft deletes enabled on key vault. This option ensures that, if a key is deleted, it can be recovered within a 90-day period as part of the default soft-delete retention. If you are within the 90-day period, you can restore the key in order to resolve the issue.
+### Accidental deletion of a key
-**Restore key vault permission**
+If you accidentally delete your key in the key vault, the private cloud can't perform some cluster modification operations. To avoid this scenario, we recommend that you keep soft deletes enabled in the key vault. This option ensures that if a key is deleted, it can be recovered within a 90-day period as part of the default soft-delete retention. If you're within the 90-day period, you can restore the key to resolve the issue.
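As a minimal sketch of that recovery path, assuming placeholder vault and key names (`myKeyVault`, `avs-cmk`), you can list the soft-deleted keys and recover the one the private cloud needs:

```azurecli-interactive
# List keys that are soft-deleted but still within the retention period.
az keyvault key list-deleted --vault-name myKeyVault --output table

# Recover the deleted CMK so the private cloud can resume cluster modification operations.
az keyvault key recover --vault-name myKeyVault --name avs-cmk
```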
-If you have a private cloud that lost access to the customer managed key, check if Managed System Identity (MSI) requires permissions in key vault. The error notification returned from Azure might not correctly indicate MSI requiring permissions in key vault as the root cause. Remember, the required permissions are: get, wrapKey, and unwrapKey. See step 4 in [Prerequisites](#prerequisites).
+### Restore key vault permission
-**Fix expired key**
+If you have a private cloud that has lost access to the CMK, check if Managed System Identity (MSI) requires permissions in the key vault. The error notification returned from Azure might not correctly indicate MSI requiring permissions in the key vault as the root cause. Remember, the required permissions are `get`, `wrapKey`, and `unwrapKey`. See step 4 in [Prerequisites](#prerequisites).
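If the vault uses access policies, the following is a hedged sketch of restoring those three permissions to the private cloud's system-assigned managed identity. The resource names are placeholders, the `az vmware` command requires the `vmware` CLI extension, and if your vault uses Azure RBAC instead of access policies, assign an equivalent role (such as Key Vault Crypto Service Encryption User) rather than setting a policy.

```azurecli-interactive
# Look up the principal ID of the private cloud's system-assigned managed identity (placeholder names).
principalId=$(az vmware private-cloud show --name myPrivateCloud --resource-group myResourceGroup --query identity.principalId --output tsv)

# Grant the key permissions that CMK encryption requires: get, wrapKey, and unwrapKey.
az keyvault set-policy --name myKeyVault --object-id $principalId --key-permissions get wrapKey unwrapKey
```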
-If you aren't using the autorotate function and the Customer Managed Key expired in key vault, you can change the expiration date on key.
+### Fix an expired key
-**Restore key vault access**
+If you aren't using the autorotate function and the CMK expired in Key Vault, you can change the expiration date on the key.
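For example, a minimal sketch of extending the key's expiration date, assuming placeholder vault, key, and date values:

```azurecli-interactive
# Extend the expiration date of the CMK (UTC, ISO 8601 format; placeholder values).
az keyvault key set-attributes --vault-name myKeyVault --name avs-cmk --expires "2025-12-31T23:59:59Z"
```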
-Ensure Managed System Identity (MSI) is used for providing private cloud access to key vault.
+### Restore key vault access
-**Deletion of MSI**
+Ensure that the MSI is used for providing private cloud access to the key vault.
-If you accidentally delete the Managed System Identity (MSI) associated with private cloud, you need to disable CMK, then follow the steps to enable CMK from start.
+### Deletion of MSI
-## Next steps
+If you accidentally delete the MSI associated with a private cloud, you need to disable the CMK. Then follow the steps to enable the CMK from the start.
-Learn about [Azure Key Vault backup and restore](../key-vault/general/backup.md?tabs=azure-cli)
+## Next steps
-Learn about [Azure Key Vault recovery](../key-vault/general/key-vault-recovery.md?tabs=azure-portal#list-recover-or-purge-a-soft-deleted-key-vault)
+- Learn about [Azure Key Vault backup and restore](../key-vault/general/backup.md?tabs=azure-cli).
+- Learn about [Azure Key Vault recovery](../key-vault/general/key-vault-recovery.md?tabs=azure-portal#list-recover-or-purge-a-soft-deleted-key-vault).
azure-vmware Reserved Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/reserved-instance.md
Title: Reserved instances of Azure VMware Solution
description: Learn how to buy a reserved instance for Azure VMware Solution. The reserved instance covers only the compute part of your usage and includes software licensing costs. Previously updated : 12/19/2023 Last updated : 4/4/2024
You can also split a reservation into smaller chunks or merge reservations. None
For details about CSP-managed reservations, see [Sell Microsoft Azure reservations to customers using Partner Center, the Azure portal, or APIs](/partner-center/azure-reservations). - >[!NOTE] >Once you've purchased your reservation, you won't be able to make these types of changes directly: >
For details about CSP-managed reservations, see [Sell Microsoft Azure reservatio
## Cancel, exchange, or refund reservations
-You can cancel, exchange, or refund reservations with certain limitations. For more information, see [Self-service exchanges and refunds for Azure Reservations](../cost-management-billing/reservations/exchange-and-refund-azure-reservations.md).
+You can cancel, exchange, or refund reservations with certain limitations. For more information, see [Self-service exchanges and refunds for Azure Reservations](../cost-management-billing/reservations/exchange-and-refund-azure-reservations.md). (Note: Azure VMware Solution reservations don't fall into this category, so the new exchange rules don't apply.)
CSPs can cancel, exchange, or refund reservations, with certain limitations, purchased for their customer. For more information, see [Manage, cancel, exchange, or refund Microsoft Azure reservations for customers](/partner-center/azure-reservations-manage).
azure-vmware Rotate Cloudadmin Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/rotate-cloudadmin-credentials.md
Last updated 3/22/2024
# Rotate the cloudadmin credentials for Azure VMware Solution -
-In this article, learn how to rotate the cloudadmin credentials (vCenter Server and NSX *CloudAdmin* credentials) for your Azure VMware Solution private cloud. Although the password for this account doesn't expire, you can generate a new one at any time.
+In this article, you learn how to rotate the cloudadmin credentials (vCenter Server and VMware NSX cloudadmin credentials) for your Azure VMware Solution private cloud. Although the password for this account doesn't expire, you can generate a new one at any time.
>[!CAUTION]
->If you use your cloudadmin credentials to connect services to vCenter Server or NSX in your private cloud, those connections will stop working once you rotate your password. Those connections will also lock out the cloudadmin account unless you stop those services before rotating the password.
+>If you use your cloudadmin credentials to connect services to vCenter Server or NSX in your private cloud, those connections stop working after you rotate your password. Those connections also lock out the cloudadmin account unless you stop those services before you rotate the password.
## Prerequisites
-Consider and determine which services connect to vCenter Server as *cloudadmin@vsphere.local* or NSX as cloudadmin before you rotate the password. Services can include VMware services like: HCX, vRealize Orchestrator, vRealize Operations Manager, VMware Horizon, or other non-Microsoft tools used for monitoring or provisioning.
+Consider and determine which services connect to vCenter Server as `cloudadmin@vsphere.local` or NSX as cloudadmin before you rotate the password. Services can include VMware services like HCX, vRealize Orchestrator, vRealize Operations Manager, VMware Horizon, or other non-Microsoft tools that are used for monitoring or provisioning.
-One way to determine which services authenticate to vCenter Server with the cloudadmin user is to inspect vSphere events using the vSphere Client for your private cloud. After you identify such services, and before rotating the password, you must stop these services. Otherwise, the services won't work after you rotate the password. You can also experience temporary locks on your vCenter Server CloudAdmin account, as these services continuously attempt to authenticate using a cached version of the old credentials.
+One way to determine which services authenticate to vCenter Server with the cloudadmin user is to inspect vSphere events by using the vSphere Client for your private cloud. After you identify such services, and before you rotate the password, you must stop these services. Otherwise, the services won't work after you rotate the password. You can also experience temporary locks on your vCenter Server cloudadmin account. Locks occur because these services continuously attempt to authenticate by using a cached version of the old credentials.
-Instead of using the cloudadmin user to connect services to vCenter Server or NSX, we recommend individual accounts for each service. For more information about setting up separate accounts for connected services, see [Access and identity architecture](./architecture-identity.md).
+Instead of using the cloudadmin user to connect services to vCenter Server or NSX, we recommend that you use individual accounts for each service. For more information about setting up separate accounts for connected services, see [Access and identity architecture](./architecture-identity.md).
## Reset your vCenter Server credentials ### [Portal](#tab/azure-portal)
-
+ 1. In your Azure VMware Solution private cloud, select **VMware credentials**. 1. Select **Generate new password** under vCenter Server credentials. 1. Select the confirmation checkbox and then select **Generate password**. - ### [Azure CLI](#tab/azure-cli)
-To begin using Azure CLI:
+To begin using the Azure CLI:
[!INCLUDE [azure-cli-prepare-your-environment-no-header](~/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)] 1. In your Azure VMware Solution private cloud, open an Azure Cloud Shell session.
-2. Update your vCenter Server *CloudAdmin* credentials. Remember to replace **{SubscriptionID}**, **{ResourceGroup}**, and **{PrivateCloudName}** with your private cloud information.
+1. Update your vCenter Server cloudadmin credentials. Remember to replace `{SubscriptionID}`, `{ResourceGroup}`, and `{PrivateCloudName}` with your private cloud information.
```azurecli-interactive az resource invoke-action --action rotateVcenterPassword --ids "/subscriptions/{SubscriptionID}/resourceGroups/{ResourceGroup}/providers/Microsoft.AVS/privateClouds/{PrivateCloudName}" --api-version "2020-07-17-preview"
To begin using Azure CLI:
-### Update HCX Connector
+### Update HCX Connector
+
+1. Go to the on-premises HCX Connector and sign in by using the new credentials.
+
+ Be sure to use port **443**.
+
+1. On the VMware HCX dashboard, select **Site Pairing**.
-1. Go to the on-premises HCX Connector and sign in using the new credentials.
+ :::image type="content" source="media/tutorial-vmware-hcx/site-pairing-complete.png" alt-text="Screenshot that shows the VMware HCX dashboard with Site Pairing highlighted.":::
- Be sure to use port **443**.
+1. Select the correct connection to Azure VMware Solution and select **Edit Connection**.
-2. On the VMware HCX Dashboard, select **Site Pairing**.
-
- :::image type="content" source="media/tutorial-vmware-hcx/site-pairing-complete.png" alt-text="Screenshot of VMware HCX Dashboard with Site Pairing highlighted.":::
-
-3. Select the correct connection to Azure VMware Solution and select **Edit Connection**.
-
-4. Provide the new vCenter Server user credentials and select **Edit**, which saves the credentials. Save should show successful.
+1. Provide the new vCenter Server user credentials. Select **Edit** to save the credentials. Save should show as successful.
## Reset your NSX Manager credentials 1. In your Azure VMware Solution private cloud, select **VMware credentials**.
-1. Select **Generate new password** under NSX Manager credentials.
+1. Under NSX Manager credentials, select **Generate new password**.
1. Select the confirmation checkbox and then select **Generate password**. ## Next steps
-Now that you learned how to reset your vCenter Server and NSX Manager credentials for Azure VMware Solution, consider learning more about:
+Now that you've learned how to reset your vCenter Server and NSX Manager credentials for Azure VMware Solution, consider learning more about:
- [Integrating Azure native services in Azure VMware Solution](integrate-azure-native-services.md) - [Deploying disaster recovery for Azure VMware Solution workloads using VMware HCX](deploy-disaster-recovery-using-vmware-hcx.md)
azure-vmware Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/security-recommendations.md
# Security recommendations for Azure VMware Solution
-It's important that proper measures are taken to secure your Azure VMware Solution deployments. Use this information as a high-level guide to achieve your security goals.
+It's important to take proper measures to secure your Azure VMware Solution deployments. Use the information in this article as a high-level guide to achieve your security goals.
## General Use the following guidelines and links for general security recommendations for both Azure VMware Solution and VMware best practices.
-| **Recommendation** | **Comments** |
+| Recommendation | Comments |
| :-- | :-- |
-| Review and follow VMware Security Best Practices | It's important to stay updated on Azure security practices and [VMware Security Best Practices](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.security.doc/GUID-412EF981-D4F1-430B-9D09-A4679C2D04E7.html). |
-| Keep up to date on VMware Security Advisories | Subscribe to VMware notifications in my.vmware.com and regularly review and remediate any [VMware Security Advisories](https://www.vmware.com/security/advisories.html). |
-| Enable Microsoft Defender for Cloud | [Microsoft Defender for Cloud](../defender-for-cloud/index.yml) provides unified security management and advanced threat protection across hybrid cloud workloads. |
-| Follow the Microsoft Security Response Center blog | [Microsoft Security Response Center](https://msrc-blog.microsoft.com/) |
-| Review and implement recommendations within the Azure Security Baseline for Azure VMware Solution | [Azure security baseline for VMware Solution](/security/benchmark/azure/baselines/vmware-solution-security-baseline/) |
-
+| Review and follow VMware Security Best Practices. | It's important to stay updated on Azure security practices and [VMware Security Best Practices](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.security.doc/GUID-412EF981-D4F1-430B-9D09-A4679C2D04E7.html). |
+| Keep up to date on VMware Security Advisories. | Subscribe to VMware notifications in `my.vmware.com`. Regularly review and remediate any [VMware Security Advisories](https://www.vmware.com/security/advisories.html). |
+| Enable Microsoft Defender for Cloud. | [Microsoft Defender for Cloud](../defender-for-cloud/index.yml) provides unified security management and advanced threat protection across hybrid cloud workloads. |
+| Follow the Microsoft Security Response Center blog. | [Microsoft Security Response Center](https://msrc-blog.microsoft.com/) |
+| Review and implement recommendations within the Azure security baseline for Azure VMware Solution. | [Azure security baseline for VMware Solution](/security/benchmark/azure/baselines/vmware-solution-security-baseline/) |
## Network
-The following are network-related security recommendations for Azure VMware Solution.
+The following recommendations for network-related security apply to Azure VMware Solution.
-| **Recommendation** | **Comments** |
+| Recommendation | Comments |
| :-- | :-- |
-| Only allow trusted networks | Only allow access to your environments over ExpressRoute or other secured networks. Avoid exposing your management services like vCenter Server, for example, on the internet. |
-| Use Azure Firewall Premium | If you must expose management services on the internet, use [Azure Firewall Premium](../firewall/premium-migrate.md) with both IDPS Alert and Deny mode along with TLS inspection for proactive threat detection. |
-| Deploy and configure Network Security Groups on virtual network | Ensure any virtual network deployed has [Network Security Groups](../virtual-network/network-security-groups-overview.md) configured to control ingress and egress to your environment. |
-| Review and implement recommendations within the Azure security baseline for Azure VMware Solution | [Azure security baseline for Azure VMware Solution](/security/benchmark/azure/baselines/vmware-solution-security-baseline/) |
+| Only allow trusted networks. | Only allow access to your environments over Azure ExpressRoute or other secured networks. Avoid exposing your management services like vCenter Server, for example, on the internet. |
+| Use Azure Firewall Premium. | If you must expose management services on the internet, use [Azure Firewall Premium](../firewall/premium-migrate.md) with both intrusion detection and prevention system (IDPS) Alert and Deny mode along with Transport Layer Security (TLS) inspection for proactive threat detection. |
+| Deploy and configure network security groups on a virtual network. | Ensure that any deployed virtual network has [network security groups](../virtual-network/network-security-groups-overview.md) configured to control ingress and egress to your environment. A minimal CLI sketch follows this table. |
+| Review and implement recommendations within the Azure security baseline for Azure VMware Solution. | [Azure security baseline for Azure VMware Solution](/security/benchmark/azure/baselines/vmware-solution-security-baseline/) |
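The following is a minimal sketch of the network security group row in the preceding table. The resource group, virtual network, subnet, and NSG names are placeholders, not values from this article.

```azurecli-interactive
# Create a network security group (placeholder names).
az network nsg create --resource-group myResourceGroup --name myNsg --location eastus

# Associate the NSG with a subnet so its rules control ingress and egress for that subnet.
az network vnet subnet update --resource-group myResourceGroup --vnet-name myVnet --name mySubnet --network-security-group myNsg
```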
## VMware HCX See the following information for recommendations to secure your VMware HCX deployment.
-| **Recommendation** | **Comments** |
+| Recommendation | Comments |
| :-- | :-- |
-| Stay current with VMware HCX service updates | VMware HCX service updates can include new features, software fixes, and security patches. Apply service updates during a maintenance window where no new VMware HCX operations are queued up by following these [steps](https://docs.vmware.com/en/VMware-HCX/4.1/hcx-user-guide/GUID-F4AEAACB-212B-4FB6-AC36-9E5106879222.html). |
+| Stay current with VMware HCX service updates. | VMware HCX service updates can include new features, software fixes, and security patches. To apply service updates during a maintenance window where no new VMware HCX operations are queued up, follow [these steps](https://docs.vmware.com/en/VMware-HCX/4.1/hcx-user-guide/GUID-F4AEAACB-212B-4FB6-AC36-9E5106879222.html). |
azure-vmware Vulnerability Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/vulnerability-management.md
Title: How Azure VMware Solution Addresses Vulnerabilities in the Infrastructure
-description: The process that Azure VMware Solution follows to address security vulnerabilities.
+ Title: Azure VMware Solution addresses vulnerabilities in the infrastructure
+description: Learn about the process that Azure VMware Solution follows to address security vulnerabilities.
Last updated 3/22/2024
-# How Azure VMware Solution Addresses Vulnerabilities in the Infrastructure
+# Azure VMware Solution addresses vulnerabilities in the infrastructure
-At a high level, Azure VMware Solution is a Microsoft Azure service and therefore must follow all the same policies and requirements that Azure follows. Azure policies and procedures dictate that Azure VMware Solution must follow the [SDL](https://www.microsoft.com/securityengineering/sdl) and must meet several regulatory requirements as promised by Microsoft Azure.
+At a high level, Azure VMware Solution is an Azure service, so it must follow all the same policies and requirements that Azure follows. Azure policies and procedures dictate that Azure VMware Solution must follow the [Security Development Lifecycle (SDL)](https://www.microsoft.com/securityengineering/sdl) and must meet several regulatory requirements as promised by Azure.
## Our approach to vulnerabilities
-Azure VMware Solution takes a defense in depth approach to vulnerability and risk management. We follow the [SDL](https://www.microsoft.com/securityengineering/sdl) to ensure we're building securely from the start, including any third party solutions, and our services are continually assessed through both automation and manual reviews on a regular basis. Additionally, we also partner with third party partners on security hardening and early notifications of vulnerabilities within their solutions.
+Azure VMware Solution takes a defense-in-depth approach to vulnerability and risk management. We follow the [SDL](https://www.microsoft.com/securityengineering/sdl) to ensure that we're building securely from the start. This focus on security includes working with any third-party solutions. Our services are continually assessed through automatic and manual reviews on a regular basis. We also work with third-party partners on security hardening and early notifications of vulnerabilities within their solutions.
### Vulnerability management -- Engineering and Security Teams triage any signal of vulnerabilities.-- Details within the signal are adjudicated and assigned a CVSS score and risk rating according to compensating controls within the service.-- The risk rating is used against internal bug bars, internal policies and regulations to establish a timeline for implementing a fix.-- Internal engineering teams partner with appropriate parties to qualify and roll out any fixes, patches and other configuration updates necessary.
+- Engineering and security teams triage any signal of vulnerabilities.
+- Details within the signal are adjudicated and assigned a Common Vulnerability Scoring System (CVSS) score and risk rating according to compensating controls within the service.
+- The risk rating is used against internal bug bars, internal policies, and regulations to establish a timeline for implementing a fix.
+- Internal engineering teams partner with appropriate parties to qualify and roll out any fixes, patches, and other configuration updates necessary.
- Communications are drafted when necessary and published according to the risk rating assigned. > [!TIP]
-> Communications are surfaced through [Azure Service Health Portal](/azure/service-health/service-health-portal-update), [Known Issues](/azure/azure-vmware/azure-vmware-solution-known-issues) or Email.
+> Communications are surfaced through [Azure Service Health portal](/azure/service-health/service-health-portal-update), [known issues](/azure/azure-vmware/azure-vmware-solution-known-issues), or email.
### Subset of regulations governing vulnerability and risk management
-Azure VMware Solution is in scope for the following certifications and regulatory requirements. The regulations listed aren't a complete list of certifications Azure VMware Solution holds, rather it's a list with specific requirements around vulnerability management. These regulations don't rely on other regulations for the same purpose. IE, certain regional certifications may point to ISO requirements for vulnerability management.
+Azure VMware Solution is in scope for the following certifications and regulatory requirements. The regulations listed aren't a complete list of certifications that Azure VMware Solution holds. Instead, it's a list with specific requirements around vulnerability management. These regulations don't rely on other regulations for the same purpose. For example, certain regional certifications might point to ISO requirements for vulnerability management.
> [!NOTE]
-> To access the following audit reports hosted in the Service Trust Portal, you must be an active Microsoft customer.
+> You must be an active Microsoft customer to access the following audit reports hosted in the Service Trust Portal:
- [ISO](https://servicetrust.microsoft.com/DocumentPage/38a05a38-6181-432e-a5ec-aa86008c56c9)-- [PCI](https://servicetrust.microsoft.com/viewpage/PCI) \- See the packages for DSS and 3DS for Audit Information.
+- [PCI](https://servicetrust.microsoft.com/viewpage/PCI): See the packages for DSS and 3DS for audit information.
- [SOC](https://servicetrust.microsoft.com/DocumentPage/f9858c69-b9c4-4097-9d09-1b95d3f994eb) - [NIST Cybersecurity Framework](https://servicetrust.microsoft.com/DocumentPage/bc0f7af3-5be8-427b-ac37-b84b86b6cc6b) - [Cyber Essentials Plus](https://servicetrust.microsoft.com/DocumentPage/d2758787-1e65-4894-891d-c11194721102) ## More information
-[Azure VMware Solution Security Recommendations](/azure/azure-vmware/concepts-security-recommendations)
-[Azure VMware Solution Security Baseline](/security/benchmark/azure/baselines/azure-vmware-solution-security-baseline?toc=%2Fazure%2Fazure-vmware%2Ftoc.json)
-
-[Microsoft Azure's defense in depth approach to cloud vulnerabilities](https://azure.microsoft.com/blog/microsoft-azures-defense-in-depth-approach-to-cloud-vulnerabilities/)
-
-[Microsoft Azure Compliance Offerings](/azure/compliance/)
-
-[Azure Service Health Portal](/azure/service-health/service-health-portal-update)
+- [Azure VMware Solution security recommendations](/azure/azure-vmware/concepts-security-recommendations)
+- [Azure VMware Solution security baseline](/security/benchmark/azure/baselines/azure-vmware-solution-security-baseline?toc=%2Fazure%2Fazure-vmware%2Ftoc.json)
+- [Azure defense in-depth approach to cloud vulnerabilities](https://azure.microsoft.com/blog/microsoft-azures-defense-in-depth-approach-to-cloud-vulnerabilities/)
+- [Azure compliance offerings](/azure/compliance/)
+- [Azure Service Health portal](/azure/service-health/service-health-portal-update)
backup Backup Azure Arm Restore Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-arm-restore-vms.md
Title: Restore VMs by using the Azure portal
+ Title: Restore VMs in the Azure portal by using Azure Backup
description: Restore an Azure virtual machine from a recovery point by using the Azure portal, including the Cross Region Restore feature. Previously updated : 03/21/2024 Last updated : 04/04/2024
As one of the [restore options](#restore-options), you can replace an existing V
![Restore configuration wizard Replace Existing](./media/backup-azure-arm-restore-vms/restore-configuration-replace-existing.png)
+## Assign network access settings during restore (preview)
+
+Azure Backup also allows you to configure network access options for the restored disks. You set the disk access preferences when you initiate the restore, and they take effect after the restore operation is complete.
+
+>[!Note]
+>This feature is currently in preview and is available only for backed-up VMs that use private endpoint-enabled disks.
+
+To enable disk access on restored disks during [VM restore](#choose-a-vm-restore-configuration), choose one of the following options:
+
+- **Use the same network configurations as the source disk(s)**: This option allows the restored disks to use the same disk access and network configurations as the source disks.
+- **Enable public access from all networks**: This option allows the restored disk to be publicly accessible from all networks.
+- **Disable public access and enable private access (using disk access)**: This option allows you to disable public access and assign disk access to the restored disks for private access.
+
+ :::image type="content" source="./media/backup-azure-arm-restore-vms/restored-disk-access-configuration-options.png" alt-text="Screenshot shows the access configuration options for restored disks." lightbox="./media/backup-azure-arm-restore-vms/restored-disk-access-configuration-options.png":::
+ ## Cross Region Restore As one of the [restore options](#restore-options), Cross Region Restore (CRR) allows you to restore Azure VMs in a secondary region, which is an Azure paired region.
backup Backup Support Matrix Iaas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-iaas.md
Title: Support matrix for Azure VM backups description: Get a summary of support settings and limitations for backing up Azure VMs by using the Azure Backup service. Previously updated : 03/14/2024 Last updated : 04/04/2024
Storage type | Standard HDD, Standard SSD, Premium SSD. <br><br> Backup and res
Managed disks | Supported. Encrypted disks | Supported.<br/><br/> Azure VMs enabled with Azure Disk Encryption can be backed up (with or without the Microsoft Entra app).<br/><br/> Encrypted VMs can't be recovered at the file or folder level. You must recover the entire VM.<br/><br/> You can enable encryption on VMs that Azure Backup is already protecting. <br><br> You can back up and restore disks encrypted via platform-managed keys or customer-managed keys. You can also assign a disk-encryption set while restoring in the same region. That is, providing a disk-encryption set while performing cross-region restore is currently not supported. However, you can assign the disk-encryption set to the restored disk after the restore is complete. Disks with a write accelerator enabled | Azure VMs with disk backup for a write accelerator became available in all Azure public regions on May 18, 2022. If disk backup for a write accelerator is not required as part of VM backup, you can choose to remove it by using the [selective disk feature](selective-disk-backup-restore.md). <br><br>**Important** <br> Virtual machines with write accelerator disks need internet connectivity for a successful backup, even though those disks are excluded from the backup.
-Disks enabled for access with a private endpoint | Not supported.
+Disks enabled for access with a private endpoint | Supported.
Backup and restore of deduplicated VMs or disks | Azure Backup doesn't support deduplication. For more information, see [this article](./backup-support-matrix.md#disk-deduplication-support). <br/> <br/> Azure Backup doesn't deduplicate across VMs in the Recovery Services vault. <br/> <br/> If there are VMs in a deduplication state during restore, the files can't be restored because the vault doesn't understand the format. However, you can successfully perform the full VM restore. Adding a disk to a protected VM | Supported. Resizing a disk on a protected VM | Supported.
backup Blob Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/blob-backup-support-matrix.md
Vaulted backup (preview) for blobs is currently available in all public regions
Operational backup of blobs uses blob point-in-time restore, blob versioning, soft delete for blobs, change feed for blobs and delete lock to provide a local backup solution. Hence, the limitations that apply to these capabilities also apply to operational backup.
-**Supported scenarios:** Operational backup supports block blobs in standard general-purpose v2 storage accounts only. Storage accounts with hierarchical namespace enabled (that is, ADLS Gen2 accounts) aren't supported. <br><br> Also, any page blobs, append blobs, and premium blobs in your storage account won't be restored and only block blobs will be restored.
+**Supported scenarios**:
-**Other limitations:**
+- Operational backup supports block blobs in standard general-purpose v2 storage accounts only. Storage accounts with hierarchical namespace enabled (that is, ADLS Gen2 accounts) aren't supported. <br><br> Also, any page blobs, append blobs, and premium blobs in your storage account won't be restored and only block blobs will be restored.
+
+- Blob backup is also supported when the storage account has private endpoints.
+
+**Other limitations**:
- If you've deleted a container during the retention period, that container won't be restored with the point-in-time restore operation. If you attempt to restore a range of blobs that includes blobs in a deleted container, the point-in-time restore operation will fail. For more information about protecting containers from deletion, see [Soft delete for containers](../storage/blobs/soft-delete-container-overview.md). - If a blob has moved between the hot and cool tiers in the period between the present moment and the restore point, the blob is restored to its previous tier. Restoring block blobs in the archive tier isn't supported. For example, if a blob in the hot tier was moved to the archive tier two days ago, and a restore operation restores to a point three days ago, the blob isn't restored to the hot tier. To restore an archived blob, first move it out of the archive tier. For more information, see [Rehydrate blob data from the archive tier](../storage/blobs/archive-rehydrate-overview.md).
Operational backup of blobs uses blob point-in-time restore, blob versioning, so
- A blob with an active lease can't be restored. If a blob with an active lease is included in the range of blobs to restore, the restore operation will fail automatically. Break any active leases before starting the restore operation. - Snapshots aren't created or deleted as part of a restore operation. Only the base blob is restored to its previous state. - If there are [immutable blobs](../storage/blobs/immutable-storage-overview.md#about-immutable-storage-for-blobs) among those being restored, such immutable blobs won't be restored to their state as per the selected recovery point. However, other blobs that don't have immutability enabled will be restored to the selected recovery point as expected.-- Blob backup is also supported when the storage account has private endpoints.+ # [Vaulted backup](#tab/vaulted-backup)
baremetal-infrastructure Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/get-started.md
Once you've satisfied the [requirements](requirements.md), go to
[Nutanix Cloud Clusters on Azure Deployment and User Guide](https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Cloud-Clusters-Azure:Nutanix-Cloud-Clusters-Azure) to sign up.
-To learn about Microsoft BareMetal hardware pricing, and to purchase Nutanix software, go to [Azure Marketplace](https://aka.ms/Nutanix-AzureMarketplace).
+To learn about Microsoft BareMetal hardware pricing, and to purchase Nutanix software, go to [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/nutanixinc.nc2_azure?tab=Overview).
## Set up NC2 on Azure
chaos-studio Chaos Studio Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-configure-customer-managed-keys.md
When configured, Chaos Studio uses Azure Storage, which uses the CMK to encrypt
- You need to use our *2023-10-27-preview REST API* to create and use CMK-enabled experiments only. There's *no* support for CMK-enabled experiments in our general availability-stable REST API until H1 2024. - Chaos Studio currently *only supports creating Chaos Studio CMK experiments via the command line by using our 2023-10-27-preview REST API*. As a result, you *can't* create a Chaos Studio experiment with CMK enabled via the Azure portal. We plan to add this functionality in H1 of 2024. - The storage account must have *public access from all networks* enabled for Chaos Studio experiments to be able to use it. If you have a hard requirement from your organization, reach out to your CSA for potential solutions.
+- Experiment data will appear in Azure Resource Graph (ARG) even after using CMK. This is a known issue, but the visibility is limited to only the active subscription using CMK.
## Configure your storage account
chaos-studio Chaos Studio Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-limitations.md
The following are known limitations in Chaos Studio.
- **Azure CLI** - Chaos Studio doesn't have dedicated AzCLI modules at this time. Use our REST API from AzCLI - **Azure Policy** - Chaos Studio doesn't support the applicable built-in policies for our service (audit policy for customer-managed keys and Private Link) at this time. - **Private Link** - We don't support Azure portal UI experiments for Agent-based experiments using Private Link. These restrictions do NOT apply to our Service-direct faults-- **Customer-Managed Keys** You need to use our 2023-10-27-preview REST API via a CLI to create CMK-enabled experiments. We don't support portal UI experiments using CMK at this time.
+- **Customer-Managed Keys** You need to use our 2023-10-27-preview REST API via a CLI to create CMK-enabled experiments. We don't support portal UI experiments using CMK at this time. Experiment information appears in Azure Resource Graph (ARG) within the subscription. This is a known issue, but it's limited to ARG and viewable only within the subscription.
- **Java SDK** At present, we don't have a dedicated Java SDK. If this is something you would use, reach out to us with your feature request. - **Built-in roles** - Chaos Studio doesn't currently have its own built-in roles. Permissions can be attained to run a chaos experiment by either assigning an [Azure built-in role](chaos-studio-fault-providers.md) or a created custom role to the experiment's identity. - **Agent Service Tags** Currently we don't have service tags available for our Agent-based faults.
cosmos-db Vector Search Ai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/vector-search-ai.md
Title: Open-source vector databases
-description: Open-source vector databases
+description: Open-source vector database functionalities, examples, challenges, and solutions.
Therefore, while free initially, open-source vector databases incur significant
## Addressing the challenges
-A fully managed database service helps developers avoid the hassles from setting up, maintaining, and relying on community assistance for an open-source vector database. The Integrated Vector Database in Azure Cosmos DB for MongoDB vCore offers a life-time free tier. It allows developers to enjoy the same financial benefit associated with open-source vector databases, while the service provider handles maintenance, updates, and scalability. When itΓÇÖs time to scale up operations, upgrading is quick and easy while keeping a low [total cost of ownership (TCO)](introduction.md#low-total-cost-of-ownership-tco).
+A fully managed database service helps developers avoid the hassles of setting up, maintaining, and relying on community assistance for an open-source vector database; moreover, some managed vector database services offer a lifetime free tier. An example is the Integrated Vector Database in Azure Cosmos DB for MongoDB. It allows developers to enjoy the same financial benefit associated with open-source vector databases, while the service provider handles maintenance, updates, and scalability. When it's time to scale up operations, upgrading is quick and easy while keeping a low [total cost of ownership (TCO)](introduction.md#low-total-cost-of-ownership-tco).
## Next steps > [!div class="nextstepaction"]
cosmos-db Vector Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/vector-database.md
Title: Vector database
-description: Vector database
+description: Vector database functionalities, implementation, and comparison.
A vector database is a database designed to store and manage [vector embeddings]
In a vector database, embeddings are indexed and queried through [vector search](#vector-search) algorithms based on their vector distance or similarity. A robust mechanism is necessary to identify the most relevant data. Some well-known vector search algorithms include Hierarchical Navigable Small World (HNSW), Inverted File (IVF), DiskANN, etc.
-Besides the typical vector database functionalities above, an integrated vector database in a highly performant NoSQL or relational database converts the existing raw data in your account into embeddings and stores them alongside your original data. This way, you can avoid the extra cost of replicating your data in a separate vector database. Moreover, this architecture keeps your vector embeddings and original data together, which better facilitates multi-modal data operations, and you can achieve greater data consistency, scale, and performance.
+### Integrated vector database vs pure vector database
+
+There are two common types of vector database implementations: a pure vector database and a vector database integrated in a NoSQL or relational database.
+
+A pure vector database is designed to efficiently store and manage vector embeddings, along with a small amount of metadata; it is separate from the data source from which the embeddings are derived.
+
+A vector database that is integrated in a highly performant NoSQL or relational database provides additional capabilities. The integrated vector database converts the existing data in a NoSQL or relational database into embeddings and stores them alongside the original data. This approach eliminates the extra cost of replicating data in a separate pure vector database. Moreover, this architecture keeps the vector embeddings and original data together, which better facilitates multi-modal data operations, and enables greater data consistency, scale, and performance.
## What are some vector database use cases?
cost-management-billing Capabilities Allocation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-allocation.md
Last updated 03/21/2024 --+
cost-management-billing Capabilities Analysis Showback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-analysis-showback.md
Last updated 03/21/2024 --+
cost-management-billing Capabilities Anomalies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-anomalies.md
Last updated 03/21/2024 --+
cost-management-billing Capabilities Budgets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-budgets.md
Last updated 03/21/2024 --+
So far, you've defined granular and targeted cost alerts for each scope and appl
## Learn more at the FinOps Foundation
-This capability is a part of the FinOps Framework by the FinOps Foundation, a non-profit organization dedicated to advancing cloud cost management and optimization. For more information about FinOps, including useful playbooks, training and certification programs, and more, see to the [Budget management](https://www.finops.org/framework/capabilities/budget-management) article in the FinOps Framework documentation.
+This capability is a part of the FinOps Framework by the FinOps Foundation, a non-profit organization dedicated to advancing cloud cost management and optimization. For more information about FinOps, including useful playbooks, training and certification programs, and more, see the [Budget management](https://www.finops.org/framework/capabilities/budgeting/) article in the FinOps Framework documentation.
## Next steps
cost-management-billing Capabilities Chargeback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-chargeback.md
Last updated 03/21/2024 --+
cost-management-billing Capabilities Commitment Discounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-commitment-discounts.md
Last updated 03/21/2024 --+
cost-management-billing Capabilities Culture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-culture.md
Last updated 03/21/2024 --+
cost-management-billing Capabilities Education https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-education.md
Last updated 03/21/2024 --+
cost-management-billing Capabilities Efficiency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-efficiency.md
Last updated 03/21/2024 --+
cost-management-billing Capabilities Forecasting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-forecasting.md
Last updated 03/21/2024 --+
cost-management-billing Capabilities Frameworks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-frameworks.md
Last updated 03/25/2024 --+
cost-management-billing Capabilities Ingestion Normalization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-ingestion-normalization.md
Last updated 03/21/2024 --+
cost-management-billing Capabilities Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-onboarding.md
Last updated 03/21/2024 --+
cost-management-billing Capabilities Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-policy.md
Last updated 03/21/2024 --+
cost-management-billing Capabilities Shared Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-shared-cost.md
Last updated 03/21/2024 --+
cost-management-billing Capabilities Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-structure.md
Last updated 03/21/2024 --+
cost-management-billing Capabilities Unit Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-unit-costs.md
Last updated 03/21/2024 --+
cost-management-billing Capabilities Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-workloads.md
Last updated 03/21/2024 --+
At this point, you have setup autoscaling and autostop behaviors. As you move be
## Learn more at the FinOps Foundation
-This capability is a part of the FinOps Framework by the FinOps Foundation, a non-profit organization dedicated to advancing cloud cost management and optimization. For more information about FinOps, including useful playbooks, training and certification programs, and more, see the [Workload management and automation capability](https://www.finops.org/framework/capabilities/workload-management-automation) article in the FinOps Framework documentation.
+This capability is a part of the FinOps Framework by the FinOps Foundation, a non-profit organization dedicated to advancing cloud cost management and optimization. For more information about FinOps, including useful playbooks, training and certification programs, and more, see the [Workload Optimization](https://www.finops.org/framework/capabilities/workload-optimization/) article in the FinOps Framework documentation.
## Next steps
cost-management-billing Conduct Finops Iteration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/conduct-finops-iteration.md
Last updated 03/21/2024 --+
cost-management-billing Cost Optimization Workbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/cost-optimization-workbook.md
Last updated 03/21/2024 --+
cost-management-billing Overview Finops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/overview-finops.md
Last updated 06/21/2023 --+
cost-management-billing Subscription Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/subscription-transfer.md
Previously updated : 03/26/2024 Last updated : 04/03/2024
Dev/Test products aren't shown in the following table. Transfers for Dev/Test pr
| MCA - individual | MCA - individual | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation and savings plan transfers are supported. | | MCA - individual | EA | • The transfer isn't supported by Microsoft, so you must move resources yourself. For more information, see [Move resources to a new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md).<br><br> • Reservations and savings plans don't automatically transfer and transferring them isn't supported. | | MCA - individual | MCA - Enterprise | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br>• Self-service reservation and savings plan transfers are supported. |
+| MCA - Enterprise | EA | • The transfer isn't supported by Microsoft, so you must move resources yourself. For more information, see [Move resources to a new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md).<br><br> • Reservations and savings plans don't automatically transfer and transferring them isn't supported. |
| MCA - Enterprise | MOSP | • Requires a [billing support ticket](https://azure.microsoft.com/support/create-ticket/).<br><br> • Reservations and savings plans don't automatically transfer and transferring them isn't supported. | | MCA - Enterprise | MCA - individual | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation and savings plan transfers are supported. | | MCA - Enterprise | MCA - Enterprise | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation and savings plan transfers are supported. |
databox Data Box Disk Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-disk-limits.md
Consider these limits as you deploy and operate your Microsoft Azure Data Box Di
- Data Box service is available in the Azure regions listed in [Region availability](data-box-disk-overview.md#region-availability). - A single storage account is supported with Data Box Disk.
+ - Data Box Disk can store a maximum of 100,000 files.
- Data Box Disk supports a maximum of 512 containers or shares in the cloud. The top-level directories within the user share become containers or Azure file shares in the cloud. ## Data Box Disk performance
defender-for-cloud Concept Regulatory Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-regulatory-compliance.md
This benchmark builds on the cloud security principles defined by the Azure Secu
:::image type="content" source="media/concept-regulatory-compliance/microsoft-security-benchmark.png" alt-text="Image that shows the components that make up the Microsoft cloud security benchmark." lightbox="media/concept-regulatory-compliance/microsoft-security-benchmark.png":::
-The compliance dashboard gives you a view of your overall compliance standing. Security for non-Azure platforms follows the same cloud-neutral security principles as Azure. Each control within the benchmark provides the same granularity and scope of technical guidance across Azure and other cloud resources.
+The compliance dashboard gives you a view of your overall compliance standing. Security for non-Azure platforms follows the same cloud-neutral security principles as Azure. Each control within the benchmark provides the same granularity and scope of technical guidance across Azure and other cloud resources.
:::image type="content" source="media/concept-regulatory-compliance/compliance-dashboard.png" alt-text="Screenshot of a sample regulatory compliance page in Defender for Cloud." lightbox="media/concept-regulatory-compliance/compliance-dashboard.png":::
defender-for-cloud Connect Servicenow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/connect-servicenow.md
Microsoft Defender for Cloud's integration with ServiceNow allows customers to c
## Prerequisites -- Have an [application registry in ServiceNow](https://docs.servicenow.com/bundle/utah-employee-service-management/page/product/meeting-extensibility/task/create-app-registry-meeting-extensibility.html).
+- Have an [application registry in ServiceNow](https://docs.servicenow.com/bundle/utah-employee-service-management/page/product/meeting-extensibility/task/create-app-registry-meeting-extensibility.html).
- Enable [Defender Cloud Security Posture Management (CSPM)](tutorial-enable-cspm-plan.md) on your Azure subscription. - The following roles are required:
- - To create the integration: Security Admin, Contributor, or Owner.
+ - To create the integration: Security Admin, Contributor, or Owner.
-## Connect ServiceNow to Defender for Cloud
+## Connect a ServiceNow account to Defender for Cloud
To connect a ServiceNow account to a Defender for Cloud account:
defender-for-cloud Container Image Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/container-image-mapping.md
When a vulnerability is identified in a container image stored in a container re
- An Azure account with Defender for Cloud onboarded. If you don't already have an Azure account, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - [Azure DevOps](quickstart-onboard-devops.md) or [GitHub](quickstart-onboard-github.md) environment onboarded to Microsoft Defender for Cloud.
- - When an Azure DevOps environment is onboarded to Microsoft Defender for Cloud, the Microsoft Defender for DevOps Container Mapping will be automatically shared and installed in all connected Azure DevOps organizations. This will automatically inject tasks into all Azure Pipelines to collect data for container mapping.
-
-- For Azure DevOps, [Microsoft Security DevOps (MSDO) Extension](azure-devops-extension.md) installed on the Azure DevOps organization.
+ - When an Azure DevOps environment is onboarded to Microsoft Defender for Cloud, the Microsoft Defender for DevOps Container Mapping will be automatically shared and installed in all connected Azure DevOps organizations. This will automatically inject tasks into all Azure Pipelines to collect data for container mapping.
-- For GitHub, [Microsoft Security DevOps (MSDO) Action](github-action.md) configured in your GitHub repositories. Additionally, the GitHub Workflow must have "**id-token: write"** permissions for federation with Defender for Cloud. For an example, see [this YAML](https://github.com/microsoft/security-devops-action/blob/7e3060ae1e6a9347dd7de6b28195099f39852fe2/.github/workflows/on-push-verification.yml).
+- For Azure DevOps, [Microsoft Security DevOps (MSDO) Extension](azure-devops-extension.md) installed on the Azure DevOps organization.
+
+- For GitHub, [Microsoft Security DevOps (MSDO) Action](github-action.md) configured in your GitHub repositories. Additionally, the GitHub Workflow must have "**id-token: write"** permissions for federation with Defender for Cloud. For an example, see [this YAML](https://github.com/microsoft/security-devops-action/blob/7e3060ae1e6a9347dd7de6b28195099f39852fe2/.github/workflows/on-push-verification.yml).
- [Defender CSPM](tutorial-enable-cspm-plan.md) enabled. - The container images must be built using [Docker](https://www.docker.com/) and the Docker client must be able to access the Docker server during the build.
The following is an example of an advanced query that utilizes container image m
## Next steps - Learn more about [DevOps security in Defender for Cloud](defender-for-devops-introduction.md).-
defender-for-cloud Create Custom Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/create-custom-recommendations.md
You can create custom recommendations and standards in Defender for cloud by cre
Here's how you do that:
-1. Create one or more policy definitions in the [Azure Policy portal](../governance/policy/tutorials/create-custom-policy-definition.md), or [programatically](../governance/policy/how-to/programmatically-create.md).
+1. Create one or more policy definitions in the [Azure Policy portal](../governance/policy/tutorials/create-custom-policy-definition.md), or [programmatically](../governance/policy/how-to/programmatically-create.md).
1. [Create a policy initiative](../governance/policy/concepts/initiative-definition-structure.md) that contains the custom policy definitions. ### Onboard the initiative as a custom standard (legacy)
defender-for-cloud Create Governance Rule Servicenow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/create-governance-rule-servicenow.md
ai-usage: ai-assisted
# Create automatic tickets with governance rules
-The integration of SeviceNow and Defender for Cloud allow you to create governance rules that automatically open tickets in SeviceNow for specific recommendations or severity levels. ServiceNow tickets can be created, viewed, and linked to recommendations directly from Defender for Cloud, enabling seamless collaboration between the two platforms and facilitating efficient incident management.
+The integration of ServiceNow and Defender for Cloud allows you to create governance rules that automatically open tickets in ServiceNow for specific recommendations or severity levels. ServiceNow tickets can be created, viewed, and linked to recommendations directly from Defender for Cloud, enabling seamless collaboration between the two platforms and facilitating efficient incident management.
## Prerequisites -- Have an [application registry in ServiceNow](https://docs.servicenow.com/bundle/utah-employee-service-management/page/product/meeting-extensibility/task/create-app-registry-meeting-extensibility.html).
+- Have an [application registry in ServiceNow](https://docs.servicenow.com/bundle/utah-employee-service-management/page/product/meeting-extensibility/task/create-app-registry-meeting-extensibility.html).
- Enable [Defender Cloud Security Posture Management (CSPM)](tutorial-enable-cspm-plan.md) on your Azure subscription. - The following roles are required:
- - To create an assignment: Admin permissions to ServiceNow.
+ - To create an assignment: Admin permissions to ServiceNow.
## Assign an owner with a governance rule
defender-for-cloud Create Ticket Servicenow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/create-ticket-servicenow.md
ai-usage: ai-assisted
#customer intent: As a user, I want to learn how to Create a ticket in Defender for Cloud for my ServiceNow account.
-# Create a ticket in Defender for Cloud
+# Create a ticket in Defender for Cloud
The integration between Defender for Cloud and ServiceNow allows Defender for Cloud customers to create tickets in Defender for Cloud that connects to a ServiceNow account. ServiceNow tickets are linked to recommendations directly from Defender for Cloud, allowing the two platforms to facilitate efficient incident management. ## Prerequisites -- Have an [application registry in ServiceNow](https://docs.servicenow.com/bundle/utah-employee-service-management/page/product/meeting-extensibility/task/create-app-registry-meeting-extensibility.html).
+- Have an [application registry in ServiceNow](https://docs.servicenow.com/bundle/utah-employee-service-management/page/product/meeting-extensibility/task/create-app-registry-meeting-extensibility.html).
- Enable [Defender Cloud Security Posture Management (CSPM)](tutorial-enable-cspm-plan.md) on your Azure subscription. - The following roles are required:
- - To create an assignment: Admin permissions to ServiceNow.
+ - To create an assignment: Admin permissions to ServiceNow.
## Create a new ticket based on a recommendation to ServiceNow
defender-for-cloud Defender For Cloud Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-cloud-glossary.md
Amazon Elastic Kubernetes Service, Amazon's managed service for running Kubernet
### **eBPF**
-Extended Berkley Packet Filter [What is eBPF?](https://ebpf.io/)
+Extended Berkeley Packet Filter [What is eBPF?](https://ebpf.io/)
## F
defender-for-cloud Defender For Databases Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-databases-introduction.md
Check out the [pricing page](https://azure.microsoft.com/pricing/details/defende
Defender for open-source relational database is supported on PaaS environments and not on Azure Arc-enabled machines. **Protected versions of PostgreSQL include**:-- Single Server - General Purpose and Memory Optimized. Learn more in [PostgreSQL Single Server pricing tiers](../postgresql/concepts-pricing-tiers.md). +
+- Single Server - General Purpose and Memory Optimized. Learn more in [PostgreSQL Single Server pricing tiers](../postgresql/concepts-pricing-tiers.md).
- Flexible Server - all pricing tiers. **Protected versions of MySQL include**:+ - Single Server - General Purpose and Memory Optimized. Learn more in [MySQL pricing tiers](../mysql/concepts-pricing-tiers.md). - Flexible Server - all pricing tiers. **Protected versions of MariaDB include**:+ - General Purpose and Memory Optimized. Learn more in [MariaDB pricing tiers](../mariadb/concepts-pricing-tiers.md). View [cloud availability](support-matrix-cloud-environment.md#cloud-support) for Defender for open-source relational databases
defender-for-cloud Enable Pull Request Annotations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-pull-request-annotations.md
Annotations can be added by a user with access to the repository, and can be use
**For GitHub**: - An Azure account. If you don't already have an Azure account, you can [create your Azure free account today](https://azure.microsoft.com/free/).-- Be a [GitHub Advanced Security](https://docs.github.com/en/get-started/learning-about-github/about-github-advanced-security) customer.
+- Be a [GitHub Advanced Security](https://docs.github.com/en/get-started/learning-about-github/about-github-advanced-security) customer.
- [Connect your GitHub repositories to Microsoft Defender for Cloud](quickstart-onboard-github.md). - [Configure the Microsoft Security DevOps GitHub action](github-action.md). **For Azure DevOps**: - An Azure account. If you don't already have an Azure account, you can [create your Azure free account today](https://azure.microsoft.com/free/).-- [Have write access (owner/contributer) to the Azure subscription](../active-directory/privileged-identity-management/pim-how-to-activate-role.md).
+- [Have write access (owner/contributor) to the Azure subscription](../active-directory/privileged-identity-management/pim-how-to-activate-role.md).
- [Connect your Azure DevOps repositories to Microsoft Defender for Cloud](quickstart-onboard-devops.md). - [Configure the Microsoft Security DevOps Azure DevOps extension](azure-devops-extension.md).
Before you can enable pull request annotations, your main branch must have enabl
:::image type="content" source="media/tutorial-enable-pr-annotations/branch-policies.png" alt-text="Screenshot that shows where to locate the branch policies." lightbox="media/tutorial-enable-pr-annotations/branch-policies.png":::
-1. Locate the Build Validation section.
+1. Locate the Build Validation section.
1. Ensure the build validation for your repository is toggled to **On**. :::image type="content" source="media/tutorial-enable-pr-annotations/build-validation.png" alt-text="Screenshot that shows where the CI Build toggle is located." lightbox="media/tutorial-enable-pr-annotations/build-validation.png":::
-1. Select **Save**.
+1. Select **Save**.
:::image type="content" source="media/tutorial-enable-pr-annotations/validation-policy.png" alt-text="Screenshot that shows the build validation.":::
All annotations on your pull requests will be displayed from now on based on you
**To enable pull request annotations for my Projects and Organizations in Azure DevOps**:
-You can do this programatically by calling the Update Azure DevOps Resource API exposed the Microsoft. Security
+You can do this programmatically by calling the Update Azure DevOps Resource API exposed by the Microsoft.Security Resource Provider.

API Info:

**Http Method**: PATCH

**URLs**:

- Azure DevOps Project Update: `https://management.azure.com/subscriptions/<subId>/resourcegroups/<resourceGroupName>/providers/Microsoft.Security/securityConnectors/<connectorName>/devops/default/azureDevOpsOrgs/<adoOrgName>/projects/<adoProjectName>?api-version=2023-09-01-preview`
- Azure DevOps Org Update: `https://management.azure.com/subscriptions/<subId>/resourcegroups/<resourceGroupName>/providers/Microsoft.Security/securityConnectors/<connectorName>/devops/default/azureDevOpsOrgs/<adoOrgName>?api-version=2023-09-01-preview`
Parameters / Options Available
**Options**: Enabled | Disabled

**`<Category>`**
-**Description**: Category of Findings that will be annotated on pull requests.
+**Description**: Category of Findings that will be annotated on pull requests.
**Options**: IaC | Code | Artifacts | Dependencies | Containers

**Note**: Only IaC is supported currently

**`<Severity>`**
-**Description**: The minimum severity of a finding that will be considered when creating PR annotations.
+**Description**: The minimum severity of a finding that will be considered when creating PR annotations.
**Options**: High | Medium | Low

Example of enabling an Azure DevOps Org's PR Annotations for the IaC category with a minimum severity of Medium using the az cli tool.
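
A hedged sketch of such a call: the request body below maps the parameters listed above onto illustrative property names (`actionableRemediation`, `categoryConfigurations`, and `minimumSeverityLevel` are assumptions, not confirmed from this article), and could be sent to the Org Update URL with `az rest --method patch --uri <org-update-url> --body @body.json`.

```json
{
  "properties": {
    "actionableRemediation": {
      "state": "Enabled",
      "categoryConfigurations": [
        {
          "category": "IaC",
          "minimumSeverityLevel": "Medium"
        }
      ]
    }
  }
}
```
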
defender-for-cloud Endpoint Detection Response https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/endpoint-detection-response.md
ai-usage: ai-assisted
Microsoft Defender for Cloud provides recommendations to secure and configure your endpoint detection and response solutions. By remediating these recommendations, you can ensure that your endpoint detection and response solution are compliant and secure across all environments.
-The endpoint detection and response recommendations allow you to:
+The endpoint detection and response recommendations allow you to:
- Identify if an endpoint detection and response solution is installed on your multicloud machines
The recommendations mentioned in this article are only available if you have the
- [Defender for Cloud](connect-azure-subscription.md) enabled on your Azure account. -- You must have either of the following plans enabled on Defender for Cloud enabled on your subscription:
- - [Defender for Servers plan 2](tutorial-enable-servers-plan.md)
- - [Defender Cloud Security Posture Management (CSPM)](tutorial-enable-cspm-plan.md)
+- You must have either of the following plans enabled in Defender for Cloud on your subscription:
+ - [Defender for Servers plan 2](tutorial-enable-servers-plan.md)
+ - [Defender Cloud Security Posture Management (CSPM)](tutorial-enable-cspm-plan.md)
- You must enable [agentless scanning for virtual machines](enable-agentless-scanning-vms.md#enabling-agentless-scanning-for-machines). > [!NOTE] > The feature described on this page is the replacement feature for the [MMA based feature](endpoint-protection-recommendations-technical.md), which is set to be retired along with the MMA retirement in August 2024. >
-> Learn more about the migration and the [deprecation process of the endpoint protection related recommendations](prepare-deprecation-log-analytics-mma-agent.md#endpoint-protection-recommendations-experience).
+> Learn more about the migration and the [deprecation process of the endpoint protection related recommendations](prepare-deprecation-log-analytics-mma-agent.md#endpoint-protection-recommendations-experience).
## Review and remediate endpoint detection and response discovery recommendations
This recommended action is available when:
**To enable the Defender for Endpoint integration on your Defender for Servers plan on the affected VM**:
-1. Select the affected machine.
+1. Select the affected machine.
1. (Optional) Select multiple affected machines that have the `Upgrade Defender plan` recommended action.
This recommended action is available when:
:::image type="content" source="media/endpoint-detection-response/remediation-steps.png" alt-text="Screenshot that shows where the remediation steps are located in the recommendation." lightbox="media/endpoint-detection-response/remediation-steps.png":::
-1. Follow the instructions to troubleshoot Microsoft Defender for Endpoint onboarding issues for [Windows](/microsoft-365/security/defender-endpoint/troubleshoot-onboarding?view=o365-worldwide&WT.mc_id=Portal-Microsoft_Azure_Security) or [Linux](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint-linux?view=o365-worldwide&WT.mc_id=Portal-Microsoft_Azure_Security).
+1. Follow the instructions to troubleshoot Microsoft Defender for Endpoint onboarding issues for [Windows](/microsoft-365/security/defender-endpoint/troubleshoot-onboarding) or [Linux](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint-linux).
After the process is completed, it can take up to 24 hours until your machine appears in the Healthy resources tab.
When Defender for Cloud finds misconfigurations in your endpoint detection and r
1. Follow the remediation steps.
-After the process is completed, it can take up to 24 hours until your machine appears in the Healthy resources tab.
+After the process is completed, it can take up to 24 hours until your machine appears in the Healthy resources tab.
## Next step
defender-for-cloud Plan Defender For Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-defender-for-servers.md
Title: Plan a Defender for Servers deployment to protect on-premises and multicloud servers
-description: Design a solution to protect on-premises and multicloud servers with Microsoft Defender for Servers.
+description: Design a solution to protect on-premises and multicloud servers with Microsoft Defender for Servers.
Last updated 05/11/2023
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
April 3, 2024
To support the new [risk-based prioritization](risk-prioritization.md) experience for recommendations, we've created new recommendations for container vulnerability assessments in Azure, AWS, and GCP. They report on container images for registry and container workloads for runtime: -- [[Container images in Azure registry should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/33422d8f-ab1e-42be-bc9a-38685bb567b9)](recommendations-reference.md#container-images-in-azure-registry-should-have-vulnerability-findings-resolvedhttpsportalazurecomblademicrosoft_azure_securityrecommendationsbladeassessmentkey33422d8f-ab1e-42be-bc9a-38685bb567b9)-- [[Containers running in Azure should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e9acaf48-d2cf-45a3-a6e7-3caa2ef769e0)](recommendations-reference.md#containers-running-in-azure-should-have-vulnerability-findings-resolvedhttpsportalazurecomblademicrosoft_azure_securityrecommendationsbladeassessmentkeye9acaf48-d2cf-45a3-a6e7-3caa2ef769e0)-- [[Container images in AWS registry should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/2a139383-ec7e-462a-90ac-b1b60e87d576)](recommendations-reference-aws.md#container-images-in-aws-registry-should-have-vulnerability-findings-resolvedhttpsportalazurecomblademicrosoft_azure_securityrecommendationsbladeassessmentkey2a139383-ec7e-462a-90ac-b1b60e87d576)-- [[Containers running in AWS should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/d5d1e526-363a-4223-b860-f4b6e710859f)](recommendations-reference-aws.md#containers-running-in-aws-should-have-vulnerability-findings-resolvedhttpsportalazurecomblademicrosoft_azure_securityrecommendationsbladeassessmentkeyd5d1e526-363a-4223-b860-f4b6e710859f)-- [[Container images in GCP registry should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/24e37609-dcf5-4a3b-b2b0-b7d76f2e4e04)](recommendations-reference-gcp.md#container-images-in-gcp-registry-should-have-vulnerability-findings-resolvedhttpsportalazurecomblademicrosoft_azure_securityrecommendationsbladeassessmentkey24e37609-dcf5-4a3b-b2b0-b7d76f2e4e04)-- [[Containers running in GCP should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c7c1d31d-a604-4b86-96df-63448618e165)](recommendations-reference-gcp.md#containers-running-in-gcp-should-have-vulnerability-findings-resolvedhttpsportalazurecomblademicrosoft_azure_securityrecommendationsbladeassessmentkeyc7c1d31d-a604-4b86-96df-63448618e165)
+- [Container images in Azure registry should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/33422d8f-ab1e-42be-bc9a-38685bb567b9)
+- [Containers running in Azure should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e9acaf48-d2cf-45a3-a6e7-3caa2ef769e0)
+- [Container images in AWS registry should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/2a139383-ec7e-462a-90ac-b1b60e87d576)
+- [Containers running in AWS should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/d5d1e526-363a-4223-b860-f4b6e710859f)
+- [Container images in GCP registry should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/24e37609-dcf5-4a3b-b2b0-b7d76f2e4e04)
+- [Containers running in GCP should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c7c1d31d-a604-4b86-96df-63448618e165)
The previous container vulnerability assessment recommendations are on a retirement path and will be removed when the new recommendations are generally available. -- [[Azure registry container images should have vulnerabilities resolved (powered by Microsoft Defender Vulnerability Management)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c0b7cfc6-3172-465a-b378-53c7ff2cc0d5)](recommendations-reference.md#azure-registry-container-images-should-have-vulnerabilities-resolved-powered-by-microsoft-defender-vulnerability-managementhttpsportalazurecomblademicrosoft_azure_securityrecommendationsbladeassessmentkeyc0b7cfc6-3172-465a-b378-53c7ff2cc0d5)-- [[Azure running container images should have vulnerabilities resolved (powered by Microsoft Defender Vulnerability Management)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5)](recommendations-reference.md#azure-running-container-images-should-have-vulnerabilities-resolved-powered-by-microsoft-defender-vulnerability-managementhttpsportalazurecomblademicrosoft_azure_securityrecommendationsbladeassessmentkeyc609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5)
+- [Azure registry container images should have vulnerabilities resolved (powered by Microsoft Defender Vulnerability Management)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c0b7cfc6-3172-465a-b378-53c7ff2cc0d5)
+- [Azure running container images should have vulnerabilities resolved (powered by Microsoft Defender Vulnerability Management)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5)
- [AWS registry container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/AwsContainerRegistryRecommendationDetailsBlade/assessmentKey/c27441ae-775c-45be-8ffa-655de37362ce) - [AWS running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/AwsContainersRuntimeRecommendationDetailsBlade/assessmentKey/682b2595-d045-4cff-b5aa-46624eb2dd8f) - [GCP registry container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management) - Microsoft Azure](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/GcpContainerRegistryRecommendationDetailsBlade/assessmentKey/5cc3a2c1-8397-456f-8792-fe9d0d4c9145)
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important upcoming changes
-description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan.
+description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan.
Last updated 04/03/2024
Unified Disk Encryption recommendations will be released for General Availabilit
| Linux virtual machines should enable Azure Disk Encryption or EncryptionAtHost | a40cc620-e72c-fdf4-c554-c6ca2cd705c0 |
| Windows virtual machines should enable Azure Disk Encryption or EncryptionAtHost | 0cb5f317-a94b-6b80-7212-13a9cc8826af |
-Azure Disk Encryption (ADE) and EncryptionAtHost provide encryption at rest coverage, as described in [Overview of managed disk encryption options - Azure Virtual Machines](/azure/virtual-machines/disk-encryption-overview), and we recommend enabling either of these on virtual machines.
+Azure Disk Encryption (ADE) and EncryptionAtHost provide encryption at rest coverage, as described in [Overview of managed disk encryption options - Azure Virtual Machines](/azure/virtual-machines/disk-encryption-overview), and we recommend enabling either of these on virtual machines.
-The recommendations depend on [Guest Configuration](/azure/governance/machine-configuration/overview). Prerequisites to onboard to Guest configuration should be enabled on virtual machines for the recommendations to complete compliance scans as expected.
+The recommendations depend on [Guest Configuration](/azure/governance/machine-configuration/overview). Prerequisites to onboard to Guest configuration should be enabled on virtual machines for the recommendations to complete compliance scans as expected.
-These recommendations will replace the recommendation "Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources."
+These recommendations will replace the recommendation "Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources."
## Changes in where you access Compliance offerings and Microsoft Actions
defender-for-cloud View And Remediate Vulnerability Registry Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/view-and-remediate-vulnerability-registry-images.md
Last updated 07/11/2023
> [!NOTE] > This page describes the new risk-based approach to vulnerability management in Defender for Cloud. Defender for CSPM customers should use this method. To use the classic secure score approach, see [View and remediate vulnerabilities for registry images (Secure Score)](view-and-remediate-vulnerability-assessment-findings-secure-score.md).
-Defender for Cloud offers customers the capability to remediate vulnerabilities in container images while they're still stored in the registry. Additionally, it conducts contextual analysis of the vulnerabilities in your environment, aiding in prioritizing remediation efforts based on the risk level associated with each vulnerability.
+Defender for Cloud offers customers the capability to remediate vulnerabilities in container images while they're still stored in the registry. Additionally, it conducts contextual analysis of the vulnerabilities in your environment, aiding in prioritizing remediation efforts based on the risk level associated with each vulnerability.
In this article, we review the [Container images in Azure registry should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/33422d8f-ab1e-42be-bc9a-38685bb567b9) recommendation. For the other clouds, see the parallel recommendations in [Vulnerability assessments for AWS with Microsoft Defender Vulnerability Management](agentless-vulnerability-assessment-aws.md) and [Vulnerability assessments for GCP with Microsoft Defender Vulnerability Management](agentless-vulnerability-assessment-gcp.md).
defender-for-iot How To Troubleshoot Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-troubleshoot-sensor.md
The **Cloud connectivity troubleshooting** pane covers the following types of is
|**Proxy authentication issues** | Occurs when a proxy demands authentication, but no credentials, or incorrect credentials, are provided. <br><br>In such cases, make sure that you've configured the proxy credentials correctly. For more information, see [Update the OT sensor network configuration](how-to-manage-individual-sensors.md#update-the-ot-sensor-network-configuration). | |**Name resolution failures** | Occurs when the sensor can't perform name resolution for a specific endpoint. <br><br>In such cases, if your DNS server is reachable, make sure that the DNS server is configured on your sensor correctly. If the configuration is correct, we recommend reaching out to your DNS administrator. <br><br>For more information, see [Update the OT sensor network configuration](how-to-manage-individual-sensors.md#update-the-ot-sensor-network-configuration). | |**Unreachable proxy server errors** | Occurs when the sensor can't establish a connection with the proxy server. In such cases, confirm the reachability of your proxy server with your network team. <br><br>For more information, see [Update the OT sensor network configuration](how-to-manage-individual-sensors.md#update-the-ot-sensor-network-configuration). |-
+|**Time drift detected** |Occurs when the UTC time of the sensor isn't synchronized with Defender for IoT on the Azure portal.<br><br>In this case, configure a Network Time Protocol (NTP) server to synchronize the sensor in UTC time.<br><br>For more information, see [Configure OT sensor settings from the Azure portal](configure-sensor-settings-portal.md#ntp). |
## Check system health
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
Cloud features may be dependent on a specific sensor version. Such features are
| Version / Patch | Release date | Scope | Supported until |
| - | - | -- | - |
| **24.1** | | | |
+| 24.1.3 |04/2024 | Major |03/2025 |
| 24.1.2 |02/2024 | Major |01/2025 |
| **23.1** | | | |
| 23.1.3 | 09/2023 | Patch | 08/2024 |
To understand whether a feature is supported in your sensor version, check the r
## Versions 24.1.x
-### Version 24.1.0
+### Version 24.1.3
-**Release date**: 02/2024
+**Release date**: 04/2024
**Supported until**: 03/2025 This version includes the following updates and enhancements:
+- [Sensor time drift detection](whats-new.md#sensor-time-drift-detection)
+- Bug fixes for stability improvements
+
+### Version 24.1.2
+
+**Release date**: 02/2024
+
+**Supported until**: 01/2025
+
+This version includes the following updates and enhancements:
+ - [Alert suppression rules from the Azure portal](how-to-accelerate-alert-incident-response.md#suppress-irrelevant-alerts) - [Focused alerts in OT/IT environments](alerts.md#focused-alerts-in-otit-environments) - [Alert ID (ID field) is now aligned on the Azure portal and sensor console](how-to-manage-cloud-alerts.md#view-alerts-on-the-azure-portal)
defender-for-iot Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md
Features released earlier than nine months ago are described in the [What's new
> Noted features listed below are in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. >
+## March 2024
+
+|Service area |Updates |
+|||
+| **OT networks** | [Sensor time drift detection](#sensor-time-drift-detection) |
+
+### Sensor time drift detection
+
+This version introduces a new troubleshooting test in the connectivity tool feature, specifically designed to identify time drift issues.
+
+One common challenge when connecting sensors to Defender for IoT in the Azure portal arises from discrepancies in the sensor's UTC time, which can lead to connectivity problems. To address this issue, we recommend that you configure a Network Time Protocol (NTP) server [in the sensor settings](configure-sensor-settings-portal.md#ntp).
+ ## February 2024 |Service area |Updates |
energy-data-services How To Enable Cors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-enable-cors.md
You can set CORS rules for each Azure Data Manager for Energy instance. When you
## Enabling CORS on Azure Data Manager for Energy instance 1. Create an **Azure Data Manager for Energy** instance.
-2. Select the **Resource Sharing(CORS)** tab.
+1. Select the **Resource Sharing(CORS)** tab.
[![Screenshot of Resource Sharing(CORS) tab while creating Azure Data Manager for Energy.](media/how-to-enable-cors/enable-cors-1.png)](media/how-to-enable-cors/enable-cors-1.png#lightbox)
-3. In the Resource Sharing(CORS) tab, select **Allowed Origins**.
-4. There can be upto 5 **Allowed Origins** added for a given instance.
+1. In the Resource Sharing(CORS) tab, select **Allowed Origins**.
+1. Up to five **Allowed Origins** can be added for a given instance.
[![Screenshot of 1 allowed origin selected.](media/how-to-enable-cors/enable-cors-2.png)](media/how-to-enable-cors/enable-cors-2.png#lightbox)
-5. If you explicitly want to have ***(Wildcard)**, then in the allowed origin * can be added.
-6. If no setting is enabled on CORS page it's defaulted to Wildcard*, allow all.
-7. The other values of CORS policy like **Allowed Methods**, **Allowed Headers**, **Exposed Headers**, **Max age in seconds** are set with default values displayed on the screen.
-7. Next, select ΓÇ£**Review+Create**ΓÇ¥ after completing other tabs.
-8. Select the "**Create**" button.
-9. An **Azure Data Manager for Energy** instance is created with CORS policy.
-10. Next, once the instance is created the CORS policy set can be viewed in instance **overview** page.
-11. You can navigate to **Resource Sharing(CORS)** and see that CORS is enabled with required **Allowed Origins**.
- [![Screenshot of viewing the CORS policy set out.](media/how-to-enable-cors/enable-cors-3.png)](media/how-to-enable-cors/enable-cors-3.png#lightbox)
+1. If you explicitly want to allow all origins, add **\*** (wildcard) as an allowed origin.
+1. If no setting is configured on the CORS page, it defaults to the wildcard (*), which allows all origins.
+1. The other CORS policy values, such as **Allowed Methods**, **Allowed Headers**, **Exposed Headers**, and **Max age in seconds**, are set to the default values displayed on the screen.
+1. Next, select **Review+Create** after completing the other tabs.
+1. Select the **Create** button.
+1. An **Azure Data Manager for Energy** instance is created with the CORS policy.
+1. Once the instance is created, the CORS policy can be viewed on the instance **Overview** page.
+1. Navigate to **Resource Sharing(CORS)** to see that CORS is enabled with the required **Allowed Origins**.
+ [![Screenshot of navigation to CORS update page.](media/how-to-enable-cors/enable-cors-4.png)](media/how-to-enable-cors/enable-cors-4.png#lightbox)
+1. You can modify the allowed origins on the CORS page at any time after the Azure Data Manager for Energy instance is provisioned.
+ 1. To add a new origin, type it in the box below.
+ [![Screenshot of adding new origin.](media/how-to-enable-cors/enable-cors-5.png)](media/how-to-enable-cors/enable-cors-5.png#lightbox)
+ 1. To delete an existing allowed origin, use the delete icon.
+ [![Screenshot of deleting the existing origin.](media/how-to-enable-cors/enable-cors-6.png)](media/how-to-enable-cors/enable-cors-6.png#lightbox)
+ 1. If * (wildcard, allow all) is added as an allowed origin, make sure to delete all the other individual allowed origins.
+1. Once the allowed origin is added, the resource provisioning state is "Accepted", and during this time further modifications of the CORS policy aren't possible. It takes about 15 minutes for the CORS policy to be updated before the CORS update window is available again for modifications.
+ [![Screenshot of CORS update window set out.](media/how-to-enable-cors/enable-cors-7.png)](media/how-to-enable-cors/enable-cors-7.png#lightbox)
## How are CORS rules evaluated? CORS rules are evaluated as follows:
CORS rules are evaluated as follows:
## Limitations on CORS policy The following limitations apply to CORS rules:-- You can specify up to five CORS rules per instance. - The maximum size of all CORS rules settings on the request, excluding XML tags, shouldn't exceed 2 KiB. - The length of allowed origin shouldn't exceed 256 characters. - ## Next steps-- CORS policy once set up during provisioning can be modified only through a Support request
- > [!div class="nextstepaction"]
- > [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md)
- To learn more about CORS > [!div class="nextstepaction"] > [CORS overview](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services)
expressroute Metro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/metro.md
The following diagram allows for a comparison between the standard ExpressRoute
| Metro location | Peering locations | Location address | Zone | Local Azure Region | ER Direct | Service Provider |
|--|--|--|--|--|--|--|
-| Amsterdam Metro | Amsterdam<br>Amsterdam2 | Equinix AM5<br>Digital Realty AMS8 | 1 | West Europe | &check; | Megaport<br>Equinix<sup>1</sup><br>Colt<sup>1</sup><br>Console Connect<sup>1</sup><br>Digital Reality<sup>1</sup> |
+| Amsterdam Metro | Amsterdam<br>Amsterdam2 | Equinix AM5<br>Digital Realty AMS8 | 1 | West Europe | &check; | Megaport<br>Equinix<sup>1</sup><br>Colt<sup>1</sup><br>Console Connect<sup>1</sup><br>Digital Realty<sup>1</sup> |
| Singapore Metro | Singapore<br>Singapore2 | Equinix SG1<br>Global Switch Tai Seng | 2 | Southeast Asia | &check; | Megaport<sup>1</sup><br>Equinix<sup>1</sup><br>Console Connect<sup>1</sup> | | Zurich Metro | Zurich<br>Zurich2 | Digital Realty ZUR2<br>Equinix ZH5 | 1 | Switzerland North | &check; | Colt<sup>1</sup><br>Digital Realty<sup>1</sup> |
firewall Premium Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-features.md
You can identify what category a given FQDN or URL is by using the **Web Categor
:::image type="content" source="media/premium-features/firewall-category-search.png" alt-text="Firewall category search dialog"::: > [!IMPORTANT]
-> To use **Web Category Check** feature, user must have an access of Microsoft.Network/azureWebCategories/getwebcategory/action for **subscription** level, not resource group level.
+> To use the **Web Category Check** feature, the user must have access to Microsoft.Network/azureWebCategories/* at the **subscription** level, not the resource group level.
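
For illustration, a minimal sketch of a custom role definition that grants this permission at subscription scope; the role name is illustrative and the subscription ID is a placeholder, so treat this as a hedged example rather than a prescribed role.

```json
{
  "Name": "Web Category Check User (example)",
  "IsCustom": true,
  "Description": "Allows calling the Azure Firewall Web Category Check at subscription scope.",
  "Actions": [
    "Microsoft.Network/azureWebCategories/*"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/<subscription-id>"
  ]
}
```
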
### Category change
governance Assignment Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/assignment-structure.md
Overrides have the following properties:
- `notIn`: The list of not-allowed values for the specified `kind`. Can't be used with `in`. Can contain up to 50 values.
-Note that one override can be used to replace the effect of many policies by specifying multiple values in the policyDefinitionReferenceId array. A single override can be used for up to 50 policyDefinitionReferenceIds, and a single policy assignment can contain up to 10 overrides, evaluated in the order in which they're specified. Before the assignment is created, the effect chosen in the override is validated against the policy rule and parameter allowed value list (in cases where the effect is [parameterized](definition-structure.md#parameters)).
+Note that one override can be used to replace the effect of many policies by specifying multiple values in the policyDefinitionReferenceId array. A single override can be used for up to 50 policyDefinitionReferenceIds, and a single policy assignment can contain up to 10 overrides, evaluated in the order in which they're specified. Before the assignment is created, the effect chosen in the override is validated against the policy rule and parameter allowed value list (in cases where the effect is [parameterized](./definition-structure-parameters.md)).
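
For illustration, a hedged sketch of an assignment fragment that overrides the effect of two policies within an assigned initiative; the `policyDefinitionReferenceId` values `limitSku` and `limitType` are placeholders, not real reference IDs.

```json
{
  "properties": {
    "policyDefinitionId": "/providers/Microsoft.Authorization/policySetDefinitions/<initiative-id>",
    "overrides": [
      {
        "kind": "policyEffect",
        "value": "Disabled",
        "selectors": [
          {
            "kind": "policyDefinitionReferenceId",
            "in": [ "limitSku", "limitType" ]
          }
        ]
      }
    ]
  }
}
```
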
## Enforcement mode
the initiative definition. For details, see
## Parameters This segment of the policy assignment provides the values for the parameters defined in the
-[policy definition or initiative definition](./definition-structure.md#parameters). This design
+[policy definition or initiative definition](./definition-structure-parameters.md). This design
makes it possible to reuse a policy or initiative definition with different resources, but check for different business values or outcomes.
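
As a brief sketch, an assignment that supplies a value for an `allowedLocations` parameter defined in the assigned definition might look like the following; the parameter name and location values are illustrative.

```json
{
  "properties": {
    "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/<definition-id>",
    "parameters": {
      "allowedLocations": {
        "value": [ "eastus2", "westus2" ]
      }
    }
  }
}
```
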
governance Definition Structure Alias https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/definition-structure-alias.md
+
+ Title: Details of the policy definition structure aliases
+description: Describes how policy definition aliases are used to establish conventions for Azure resources in your organization.
Last updated : 04/01/2024+++
+# Azure Policy definition structure aliases
+
+You use property aliases to access specific properties for a resource type. Aliases enable you to restrict what values or conditions are allowed for a property on a resource. Each alias maps to paths in different API versions for a given resource type. During policy evaluation, the policy engine gets the property path for that API version.
+
+The list of aliases is always growing. To find which aliases Azure Policy supports, use one of the following methods:
+
+- Azure Policy extension for Visual Studio Code (recommended)
+
+ Use the [Azure Policy extension for Visual Studio Code](../how-to/extension-for-vscode.md) to view and discover aliases for resource properties.
+
+ :::image type="content" source="../media/extension-for-vscode/extension-hover-shows-property-alias.png" alt-text="Screenshot of the Azure Policy extension for Visual Studio Code hovering over a property to display the alias names.":::
+
+- Azure PowerShell
+
+ ```azurepowershell-interactive
+ # Login first with Connect-AzAccount if not using Cloud Shell
+
+ # Use Get-AzPolicyAlias to list available providers
+ Get-AzPolicyAlias -ListAvailable
+
+ # Use Get-AzPolicyAlias to list aliases for a Namespace (such as Azure Compute -- Microsoft.Compute)
+ (Get-AzPolicyAlias -NamespaceMatch 'compute').Aliases
+ ```
+
+ > [!NOTE]
+ > To find aliases that can be used with the [modify](./effects.md#modify) effect, use the
+ > following command in Azure PowerShell **4.6.0** or higher:
+ >
+ > ```azurepowershell-interactive
+ > Get-AzPolicyAlias | Select-Object -ExpandProperty 'Aliases' | Where-Object { $_.DefaultMetadata.Attributes -eq 'Modifiable' }
+ > ```
+
+- Azure CLI
+
+ ```azurecli-interactive
+ # Login first with az login if not using Cloud Shell
+
+ # List namespaces
+ az provider list --query [*].namespace
+
+ # Get Azure Policy aliases for a specific Namespace (such as Azure Compute -- Microsoft.Compute)
+ az provider show --namespace Microsoft.Compute --expand "resourceTypes/aliases" --query "resourceTypes[].aliases[].name"
+ ```
+
+- REST API
+
+ ```http
+ GET https://management.azure.com/providers/?api-version=2019-10-01&$expand=resourceTypes/aliases
+ ```
+
+## Understanding the array alias
+
+Several of the aliases that are available have a version that appears as a _normal_ name and another that has `[*]` attached to it, which is an array alias. For example:
+
+- `Microsoft.Storage/storageAccounts/networkAcls.ipRules`
+- `Microsoft.Storage/storageAccounts/networkAcls.ipRules[*]`
+
+- The _normal_ alias represents the field as a single value. This field is for exact match comparison scenarios when the entire set of values must be exactly as defined.
+- The array alias `[*]` represents a collection of values selected from the elements of an array resource property. For example:
+
+| Alias | Selected values |
+|:|:|
+| `Microsoft.Storage/storageAccounts/networkAcls.ipRules[*]` | The elements of the `ipRules` array. |
+| `Microsoft.Storage/storageAccounts/networkAcls.ipRules[*].action` | The values of the `action` property from each element of the `ipRules` array. |
+
+When used in a [field](./definition-structure-policy-rule.md#fields) condition, array aliases make it possible to compare each individual array element to a target value. When used with a [count](./definition-structure-policy-rule.md#count) expression, it's possible to:
+
+- Check the size of an array.
+- Check if all, any, or none of the array elements meet a complex condition.
+- Check if exactly `n` array elements meet a complex condition.
+
+For more information and examples, see [Referencing array resource properties](../how-to/author-policies-for-arrays.md#referencing-array-resource-properties).
+
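+For illustration, a hedged sketch of a policy rule condition that pairs the array alias with a `count` expression to check whether any `ipRules` entry allows traffic; the field paths come from the table above, and the comparison values are illustrative.
+
+```json
+{
+  "count": {
+    "field": "Microsoft.Storage/storageAccounts/networkAcls.ipRules[*]",
+    "where": {
+      "field": "Microsoft.Storage/storageAccounts/networkAcls.ipRules[*].action",
+      "equals": "Allow"
+    }
+  },
+  "greater": 0
+}
+```
+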
+## Next steps
+
+- For more information about policy definition structure, go to [basics](./definition-structure-basics.md), [parameters](./definition-structure-parameters.md), and [policy rule](./definition-structure-policy-rule.md).
+- For initiatives, go to [initiative definition structure](./initiative-definition-structure.md).
+- Review examples at [Azure Policy samples](../samples/index.md).
+- Review [Understanding policy effects](effects.md).
+- Understand how to [programmatically create policies](../how-to/programmatically-create.md).
+- Learn how to [get compliance data](../how-to/get-compliance-data.md).
+- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
+- Review what a management group is with [Organize your resources with Azure management groups](../../management-groups/overview.md).
governance Definition Structure Basics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/definition-structure-basics.md
+
+ Title: Details of the policy definition structure basics
+description: Describes how policy definition basics are used to establish conventions for Azure resources in your organization.
Last updated : 04/01/2024+++
+# Azure Policy definition structure basics
+
+Azure Policy definitions describe resource compliance [conditions](./definition-structure-policy-rule.md#conditions) and the effect to take if a condition is met. A condition compares a resource property [field](./definition-structure-policy-rule.md#fields) or a [value](./definition-structure-policy-rule.md#value) to a required value. Resource property fields are accessed by using [aliases](./definition-structure-alias.md). When a resource property field is an array, a special [array alias](./definition-structure-alias.md#understanding-the-array-alias) can be used to select values from all array members and apply a condition to each one. Learn more about [conditions](./definition-structure-policy-rule.md#conditions).
+
+By using policy assignments, you can control costs and manage your resources. For example, you can specify that only certain types of virtual machines are allowed. Or, you can require that resources have a particular tag. Assignments at a scope apply to all resources at that scope and below. If a policy assignment is applied to a resource group, it's applicable to all the resources in that resource group.
+
+You use JSON to create a policy definition that contains elements for:
+
+- `displayName`
+- `description`
+- `mode`
+- `metadata`
+- `parameters`
+- `policyRule`
+ - logical evaluations
+ - `effect`
+
+For example, the following JSON shows a policy that limits where resources are deployed:
+
+```json
+{
+ "properties": {
+ "displayName": "Allowed locations",
+ "description": "This policy enables you to restrict the locations your organization can specify when deploying resources.",
+ "mode": "Indexed",
+ "metadata": {
+ "version": "1.0.0",
+ "category": "Locations"
+ },
+ "parameters": {
+ "allowedLocations": {
+ "type": "array",
+ "metadata": {
+ "description": "The list of locations that can be specified when deploying resources",
+ "strongType": "location",
+ "displayName": "Allowed locations"
+ },
+ "defaultValue": [
+ "westus2"
+ ]
+ }
+ },
+ "policyRule": {
+ "if": {
+ "not": {
+ "field": "location",
+ "in": "[parameters('allowedLocations')]"
+ }
+ },
+ "then": {
+ "effect": "deny"
+ }
+ }
+ }
+}
+```
+
+For more information, go to the [policy definition schema](https://schema.management.azure.com/schemas/2020-10-01/policyDefinition.json). Azure Policy built-ins and patterns are at [Azure Policy samples](../samples/index.md).
+
+## Display name and description
+
+You use `displayName` and `description` to identify the policy definition and provide context for when the definition is used. The `displayName` has a maximum length of _128_ characters and `description` a maximum length of _512_ characters.
+
+> [!NOTE]
+> During the creation or updating of a policy definition, `id`, `type`, and `name` are defined
+> by properties external to the JSON and aren't necessary in the JSON file. Fetching the policy
+> definition via SDK returns the `id`, `type`, and `name` properties as part of the JSON, but
+> each are read-only information related to the policy definition.
+
+## Policy type
+
+While the `policyType` property can't be set, there are three values returned by SDK and visible in the portal:
+
+- `Builtin`: Microsoft provides and maintains these policy definitions.
+- `Custom`: All policy definitions created by customers have this value.
+- `Static`: Indicates a [Regulatory Compliance](./regulatory-compliance.md) policy definition with
+ Microsoft **Ownership**. The compliance results for these policy definitions are the results of
+ non-Microsoft audits of Microsoft infrastructure. In the Azure portal, this value is sometimes
+ displayed as **Microsoft managed**. For more information, see
+ [Shared responsibility in the cloud](../../../security/fundamentals/shared-responsibility.md).
+
+## Mode
+
+The `mode` is configured depending on if the policy is targeting an Azure Resource Manager property or a Resource Provider property.
+
+### Resource Manager modes
+
+The `mode` determines which resource types are evaluated for a policy definition. The supported modes are:
+
+- `all`: evaluate resource groups, subscriptions, and all resource types
+- `indexed`: only evaluate resource types that support tags and location
+
+For example, resource `Microsoft.Network/routeTables` supports tags and location and is evaluated in both modes. However, resource `Microsoft.Network/routeTables/routes` can't be tagged and isn't evaluated in `Indexed` mode.
+
+We recommend that you set `mode` to `all` in most cases. All policy definitions created through the portal use the `all` mode. If you use PowerShell or Azure CLI, you can specify the `mode` parameter manually. If the policy definition doesn't include a `mode` value, it defaults to `all` in Azure PowerShell and to `null` in Azure CLI. A `null` mode is the same as using `indexed` to support backward compatibility.
+
+`indexed` should be used when creating policies that enforce tags or locations. While not required, it prevents resources that don't support tags and locations from showing up as non-compliant in the compliance results. The exception is resource groups and subscriptions. Policy definitions that enforce location or tags on a resource group or subscription should set `mode` to `all` and specifically target the `Microsoft.Resources/subscriptions/resourceGroups` or `Microsoft.Resources/subscriptions` type. For an example, see [Pattern: Tags - Sample #1](../samples/pattern-tags.md). For a list of resources that support tags, see [Tag support for Azure resources](../../../azure-resource-manager/management/tag-support.md).
+
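+For illustration, a hedged sketch of the kind of `policyRule` such a definition might use to target resource groups directly; the condition shown only checks the resource type, and a real tag policy would add further conditions and parameters.
+
+```json
+{
+  "if": {
+    "field": "type",
+    "equals": "Microsoft.Resources/subscriptions/resourceGroups"
+  },
+  "then": {
+    "effect": "audit"
+  }
+}
+```
+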
+### Resource Provider modes
+
+The following Resource Provider modes are fully supported:
+
+- `Microsoft.Kubernetes.Data` for managing Kubernetes clusters and components such as pods, containers, and ingresses. Supported for Azure Kubernetes Service clusters and [Azure Arc-enabled Kubernetes clusters](../../../aks/intro-kubernetes.md). Definitions using this Resource Provider mode use the effects _audit_, _deny_, and _disabled_.
+- `Microsoft.KeyVault.Data` for managing vaults and certificates in [Azure Key Vault](../../../key-vault/general/overview.md). For more information on these policy definitions, see [Integrate Azure Key Vault with Azure Policy](../../../key-vault/general/azure-policy.md).
+- `Microsoft.Network.Data` for managing [Azure Virtual Network Manager](../../../virtual-network-manager/overview.md) custom membership policies using Azure Policy.
+
+The following Resource Provider modes are currently supported as a [preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/):
+
+- `Microsoft.ManagedHSM.Data` for managing [Managed Hardware Security Module (HSM)](../../../key-vault/managed-hsm/azure-policy.md) keys using Azure Policy.
+- `Microsoft.DataFactory.Data` for using Azure Policy to deny [Azure Data Factory](../../../data-factory/introduction.md) outbound traffic domain names not specified in an allowlist. This Resource Provider mode is enforcement only and doesn't report compliance in public preview.
+- `Microsoft.MachineLearningServices.v2.Data` for managing [Azure Machine Learning](../../../machine-learning/overview-what-is-azure-machine-learning.md) model deployments. This Resource Provider mode reports compliance for newly created and updated components. During public preview, compliance records remain for 24 hours. Model deployments that exist before these policy definitions are assigned don't report compliance.
+
+> [!NOTE]
+> Unless explicitly stated, Resource Provider modes only support built-in policy definitions, and exemptions are not supported at the component level.
+
+## Metadata
+
+The optional `metadata` property stores information about the policy definition. Customers can define any properties and values useful to their organization in `metadata`. However, there are some _common_ properties used by Azure Policy and in built-ins. Each `metadata` property has a limit of 1,024 characters.
+
+### Common metadata properties
+
+- `version` (string): Tracks details about the version of the contents of a policy definition.
+- `category` (string): Determines under which category in the Azure portal the policy definition is displayed.
+- `preview` (boolean): True or false flag for if the policy definition is _preview_.
+- `deprecated` (boolean): True or false flag for if the policy definition is marked as _deprecated_.
+- `portalReview` (string): Determines whether parameters should be reviewed in the portal, regardless of the required input.
+
+> [!NOTE]
+> The Azure Policy service uses `version`, `preview`, and `deprecated` properties to convey level of
+> change to a built-in policy definition or initiative and state. The format of `version` is:
+> `{Major}.{Minor}.{Patch}`. Specific states, such as _deprecated_ or _preview_, are appended to the
+> `version` property or in another property as a **boolean**. For more information about the way
+> Azure Policy versions built-ins, see
+> [Built-in versioning](https://github.com/Azure/azure-policy/blob/master/built-in-policies/README.md).
+> To learn more about what it means for a policy to be _deprecated_ or in _preview_, see [Preview and deprecated policies](https://github.com/Azure/azure-policy/blob/master/built-in-policies/README.md#preview-and-deprecated-policies).
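+
+For illustration, a hedged sketch of a `metadata` fragment that uses the common properties described above; the values are illustrative.
+
+```json
+"metadata": {
+  "version": "1.0.0",
+  "category": "Storage",
+  "preview": true,
+  "deprecated": false
+}
+```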
+
+## Definition location
+
+While creating an initiative or policy, it's necessary to specify the definition location. The definition location must be a management group or a subscription. This location determines the scope to which the initiative or policy can be assigned. Resources must be direct members of or children within the hierarchy of the definition location to target for assignment.
+
+If the definition location is a:
+
+- **Subscription** - Only resources within that subscription can be assigned the policy definition.
+- **Management group** - Only resources within child management groups and child subscriptions can be assigned the policy definition. If you plan to apply the policy definition to several subscriptions, the location must be a management group that contains each subscription.
+
+For more information, see [Understand scope in Azure Policy](./scope.md#definition-location).
+
+## Next steps
+
+- For more information about policy definition structure, go to [parameters](./definition-structure-parameters.md), [policy rule](./definition-structure-policy-rule.md), and [alias](./definition-structure-alias.md).
+- For initiatives, go to [initiative definition structure](./initiative-definition-structure.md).
+- Review examples at [Azure Policy samples](../samples/index.md).
+- Review [Understanding policy effects](effects.md).
+- Understand how to [programmatically create policies](../how-to/programmatically-create.md).
+- Learn how to [get compliance data](../how-to/get-compliance-data.md).
+- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
+- Review what a management group is with [Organize your resources with Azure management groups](../../management-groups/overview.md).
governance Definition Structure Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/definition-structure-parameters.md
+
+ Title: Details of the policy definition structure parameters
+description: Describes how policy definition parameters are used to establish conventions for Azure resources in your organization.
+ Last updated : 04/01/2024
+# Azure Policy definition structure parameters
+
+Parameters help simplify your policy management by reducing the number of policy definitions. Think of parameters like the fields on a form: `name`, `address`, `city`, `state`. These parameters always stay the same but their values change based on the individual filling out the form. Parameters work the same way when building policies. By including parameters in a policy definition, you can reuse that policy for different scenarios by using different values.
+
+## Adding or removing parameters
+
+Parameters might be added to an existing and assigned definition. The new parameter must include the `defaultValue` property. This property prevents existing assignments of the policy or initiative from indirectly being made invalid.
+
+Parameters can't be removed from a policy definition because there might be an assignment that sets the parameter value, and that reference would become broken. Some built-in policy definitions deprecate parameters using metadata `"deprecated": true`, which hides the parameter when assigning the definition in the Azure portal. While this method isn't supported for custom policy definitions, another option is to duplicate and create a new custom policy definition without the parameter.
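+
+As a sketch of that built-in convention, a deprecated parameter carries the flag inside its `metadata`. The parameter name `legacySetting` here is purely illustrative:
+
+```json
+"parameters": {
+  "legacySetting": {
+    "type": "String",
+    "metadata": {
+      "displayName": "Legacy setting (deprecated)",
+      "description": "This parameter is no longer used by the policy rule.",
+      "deprecated": true
+    },
+    "defaultValue": "none"
+  }
+}
+```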
+
+## Parameter properties
+
+A parameter uses the following properties in a policy definition:
+
+- `name`: The name of your parameter. Used by the `parameters` deployment function within the policy rule. For more information, see [using a parameter value](#using-a-parameter-value).
+- `type`: Determines if the parameter is a `string`, `array`, `object`, `boolean`, `integer`, `float`, or `dateTime`.
+- `metadata`: Defines subproperties primarily used by the Azure portal to display user-friendly information:
+ - `description`: The explanation of what the parameter is used for. Can be used to provide examples of acceptable values.
+ - `displayName`: The friendly name shown in the portal for the parameter.
+ - `strongType`: (Optional) Used when assigning the policy definition through the portal. Provides a context aware list. For more information, see [strongType](#strongtype).
+ - `assignPermissions`: (Optional) Set as _true_ to have Azure portal create role assignments during policy assignment. This property is useful in case you wish to assign permissions outside the assignment scope. There's one role assignment per role definition in the policy (or per role definition in all of the initiative's policies). The parameter value must be a valid resource or scope.
+ - `deprecated`: A boolean flag to indicate whether a parameter is deprecated in a built-in definition.
+- `defaultValue`: (Optional) Sets the value of the parameter in an assignment if no value is given. Required when updating an existing policy definition that is assigned. For object-type parameters, the value must match the appropriate schema.
+- `allowedValues`: (Optional) Provides an array of values that the parameter accepts during assignment.
+  - Case sensitivity: Allowed value comparisons are case-sensitive when assigning a policy, meaning that the selected parameter values in the assignment must match the casing of values in the `allowedValues` array in the definition. However, once values are selected for the assignment, evaluation of string comparisons might be case insensitive depending on the [condition](./definition-structure-policy-rule.md#conditions) used. For example, if the parameter specifies `Dev` as an allowed tag value in an assignment, and this value is compared to an input string using the `equals` condition, then Azure Policy would later evaluate a tag value of `dev` as a match even though it's lowercase, because `equals` is case insensitive.
+ - For object-type parameters, the values must match the appropriate schema.
+- `schema`: (Optional) Provides validation of parameter inputs during assignment using a self-defined JSON schema. This property is only supported for object-type parameters and follows the [Json.NET Schema](https://www.newtonsoft.com/jsonschema) 2019-09 implementation. You can learn more about using schemas at https://json-schema.org/ and test draft schemas at https://www.jsonschemavalidator.net/.
+
+## Sample parameters
+
+### Example 1
+
+As an example, you could define a policy definition to limit the locations where resources can be deployed. A parameter for that policy definition could be `allowedLocations` and used by each assignment of the policy definition to limit the accepted values. The use of `strongType` provides an enhanced experience when completing the assignment through the portal:
+
+```json
+"parameters": {
+ "allowedLocations": {
+ "type": "array",
+ "metadata": {
+ "description": "The list of allowed locations for resources.",
+ "displayName": "Allowed locations",
+ "strongType": "location"
+ },
+ "defaultValue": [
+ "westus2"
+ ],
+ "allowedValues": [
+ "eastus2",
+ "westus2",
+ "westus"
+ ]
+ }
+}
+```
+
+A sample input for this array-type parameter (without `strongType`) at assignment time might be `["westus", "eastus2"]`.
+
+### Example 2
+
+In a more advanced scenario, you could define a policy that requires Kubernetes cluster pods to use specified labels. A parameter for that policy definition could be `labelSelector` and used by each assignment of the policy definition to specify Kubernetes resources in question based on label keys and values:
+
+```json
+"parameters": {
+ "labelSelector": {
+ "type": "Object",
+ "metadata": {
+ "displayName": "Kubernetes label selector",
+ "description": "Label query to select Kubernetes resources for policy evaluation. An empty label selector matches all Kubernetes resources."
+ },
+ "defaultValue": {},
+ "schema": {
+ "description": "A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all resources.",
+ "type": "object",
+ "properties": {
+ "matchLabels": {
+ "description": "matchLabels is a map of {key,value} pairs.",
+ "type": "object",
+ "additionalProperties": {
+ "type": "string"
+ },
+ "minProperties": 1
+ },
+ "matchExpressions": {
+ "description": "matchExpressions is a list of values, a key, and an operator.",
+ "type": "array",
+ "items": {
+ "type": "object",
+ "properties": {
+ "key": {
+ "description": "key is the label key that the selector applies to.",
+ "type": "string"
+ },
+ "operator": {
+ "description": "operator represents a key's relationship to a set of values.",
+ "type": "string",
+ "enum": [
+ "In",
+ "NotIn",
+ "Exists",
+ "DoesNotExist"
+ ]
+ },
+ "values": {
+ "description": "values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty.",
+ "type": "array",
+ "items": {
+ "type": "string"
+ }
+ }
+ },
+ "required": [
+ "key",
+ "operator"
+ ],
+ "additionalProperties": false
+ },
+ "minItems": 1
+ }
+ },
+ "additionalProperties": false
+ }
+ },
+}
+```
+
+A sample input for this object-type parameter at assignment time would be in JSON format, validated by the specified schema, and might be:
+
+```json
+{
+ "matchLabels": {
+ "poolID": "abc123",
+ "nodeGroup": "Group1",
+ "region": "southcentralus"
+ },
+ "matchExpressions": [
+ {
+ "key": "name",
+ "operator": "In",
+ "values": [
+ "payroll",
+ "web"
+ ]
+ },
+ {
+ "key": "environment",
+ "operator": "NotIn",
+ "values": [
+ "dev"
+ ]
+ }
+ ]
+}
+```
+
+## Using a parameter value
+
+In the policy rule, you reference parameters with the following `parameters` function syntax:
+
+```json
+{
+ "field": "location",
+ "in": "[parameters('allowedLocations')]"
+}
+```
+
+This sample references the `allowedLocations` parameter that was demonstrated in [parameter properties](#parameter-properties).
+
+## strongType
+
+Within the `metadata` property, you can use `strongType` to provide a multiselect list of options within the Azure portal. `strongType` can be a supported _resource type_ or an allowed value. To determine whether a _resource type_ is valid for `strongType`, use [Get-AzResourceProvider](/powershell/module/az.resources/get-azresourceprovider). The format for a _resource type_ `strongType` is `<Resource Provider>/<Resource Type>`. For example, `Microsoft.Network/virtualNetworks/subnets`.
+
+Some _resource types_ not returned by `Get-AzResourceProvider` are supported. Those types are:
+
+- `Microsoft.RecoveryServices/vaults/backupPolicies`
+
+The non _resource type_ allowed values for `strongType` are:
+
+- `location`
+- `resourceTypes`
+- `storageSkus`
+- `vmSKUs`
+- `existingResourceGroups`
+
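+For example, a parameter that uses a _resource type_ `strongType` so the portal can present a picker of existing subnets might be sketched as follows. The parameter name `targetSubnetId` is illustrative only:
+
+```json
+"parameters": {
+  "targetSubnetId": {
+    "type": "String",
+    "metadata": {
+      "displayName": "Subnet",
+      "description": "The resource ID of the subnet selected during assignment.",
+      "strongType": "Microsoft.Network/virtualNetworks/subnets"
+    }
+  }
+}
+```
+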
+## Next steps
+
+- For more information about policy definition structure, go to [basics](./definition-structure-basics.md), [policy rule](./definition-structure-policy-rule.md), and [alias](./definition-structure-alias.md).
+- For initiatives, go to [initiative definition structure](./initiative-definition-structure.md).
+- Review examples at [Azure Policy samples](../samples/index.md).
+- Review [Understanding policy effects](effects.md).
+- Understand how to [programmatically create policies](../how-to/programmatically-create.md).
+- Learn how to [get compliance data](../how-to/get-compliance-data.md).
+- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
+- Review what a management group is with [Organize your resources with Azure management groups](../../management-groups/overview.md).
governance Definition Structure Policy Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/definition-structure-policy-rule.md
+
+ Title: Details of the policy definition structure policy rules
+description: Describes how policy definition policy rules are used to establish conventions for Azure resources in your organization.
+ Last updated : 04/01/2024
+# Azure Policy definition structure policy rule
+
+The policy rule consists of `if` and `then` blocks. In the `if` block, you define one or more conditions that specify when the policy is enforced. You can apply logical operators to these conditions to precisely define the scenario for a policy.
+
+For complete details on each effect, order of evaluation, properties, and examples, see [Understanding Azure Policy Effects](effects.md).
+
+In the `then` block, you define the effect that happens when the `if` conditions are fulfilled.
+
+```json
+{
+ "if": {
+ <condition> | <logical operator>
+ },
+ "then": {
+ "effect": "deny | audit | modify | denyAction | append | auditIfNotExists | deployIfNotExists | disabled"
+ }
+}
+```
+
+For more information about _policyRule_, go to the [policy definition schema](https://schema.management.azure.com/schemas/2020-10-01/policyDefinition.json).
+
+## Logical operators
+
+Supported logical operators are:
+
+- `"not": {condition or operator}`
+- `"allOf": [{condition or operator},{condition or operator}]`
+- `"anyOf": [{condition or operator},{condition or operator}]`
+
+The `not` syntax inverts the result of the condition. The `allOf` syntax (similar to the logical `and` operation) requires all conditions to be true. The `anyOf` syntax (similar to the logical `or` operation) requires one or more conditions to be true.
+
+You can nest logical operators. The following example shows a `not` operation that is nested within an `allOf` operation.
+
+```json
+"if": {
+ "allOf": [
+ {
+ "not": {
+ "field": "tags",
+ "containsKey": "application"
+ }
+ },
+ {
+ "field": "type",
+ "equals": "Microsoft.Storage/storageAccounts"
+ }
+ ]
+},
+```
+
+## Conditions
+
+A condition evaluates whether a value meets certain criteria. The supported conditions are:
+
+- `"equals": "stringValue"`
+- `"notEquals": "stringValue"`
+- `"like": "stringValue"`
+- `"notLike": "stringValue"`
+- `"match": "stringValue"`
+- `"matchInsensitively": "stringValue"`
+- `"notMatch": "stringValue"`
+- `"notMatchInsensitively": "stringValue"`
+- `"contains": "stringValue"`
+- `"notContains": "stringValue"`
+- `"in": ["stringValue1","stringValue2"]`
+- `"notIn": ["stringValue1","stringValue2"]`
+- `"containsKey": "keyName"`
+- `"notContainsKey": "keyName"`
+- `"less": "dateValue"` | `"less": "stringValue"` | `"less": intValue`
+- `"lessOrEquals": "dateValue"` | `"lessOrEquals": "stringValue"` | `"lessOrEquals": intValue`
+- `"greater": "dateValue"` | `"greater": "stringValue"` | `"greater": intValue`
+- `"greaterOrEquals": "dateValue"` | `"greaterOrEquals": "stringValue"` |
+ `"greaterOrEquals": intValue`
+- `"exists": "bool"`
+
+For `less`, `lessOrEquals`, `greater`, and `greaterOrEquals`, if the property type doesn't match the condition type, an error is thrown. String comparisons are made using `InvariantCultureIgnoreCase`.
+
+When using the `like` and `notLike` conditions, you provide a wildcard character (`*`) in the value. The value shouldn't have more than one wildcard character.
+
+When using the `match` and `notMatch` conditions, provide a hashtag (`#`) to match a digit, a question mark (`?`) to match a letter, a dot (`.`) to match any character, and any other character to match that actual character. While `match` and `notMatch` are case-sensitive, all other conditions that evaluate a `stringValue` are case-insensitive. Case-insensitive alternatives are available in `matchInsensitively` and `notMatchInsensitively`.
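+
+As a small illustration of the wildcard and pattern syntax, the following sketch combines both conditions; the name patterns are hypothetical:
+
+```json
+{
+  "anyOf": [
+    {
+      "field": "name",
+      "like": "prod-*"
+    },
+    {
+      "field": "name",
+      "match": "vm-####-??"
+    }
+  ]
+}
+```
+
+The first condition matches any resource name that starts with `prod-`. The second matches names such as `vm-0042-ab`, where each `#` stands for a digit and each `?` for a letter.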
+
+## Fields
+
+Conditions that evaluate whether the values of properties in the resource request payload meet certain criteria can be formed using a `field` expression. The following fields are supported:
+
+- `name`
+- `fullName`
+ - Returns the full name of the resource. The full name of a resource is the resource name prepended by any parent resource names (for example `myServer/myDatabase`).
+- `kind`
+- `type`
+- `location`
+ - Location fields are normalized to support various formats. For example, `East US 2` is considered equal to `eastus2`.
+ - Use **global** for resources that are location agnostic.
+- `id`
+ - Returns the resource ID of the resource that is being evaluated.
+ - Example: `/subscriptions/11111111-1111-1111-1111-111111111111/resourceGroups/myRG/providers/Microsoft.KeyVault/vaults/myVault`
+- `identity.type`
+ - Returns the type of
+ [managed identity](../../../active-directory/managed-identities-azure-resources/overview.md)
+ enabled on the resource.
+- `tags`
+- `tags['<tagName>']`
+ - This bracket syntax supports tag names that have punctuation such as a hyphen, period, or space.
+ - Where `tagName` is the name of the tag to validate the condition for.
+ - Examples: `tags['Acct.CostCenter']` where `Acct.CostCenter` is the name of the tag.
+- `tags['''<tagName>''']`
+ - This bracket syntax supports tag names that have apostrophes in it by escaping with double apostrophes.
+ - Where `tagName` is the name of the tag to validate the condition for.
+ - Example: `tags['''My.Apostrophe.Tag''']` where `'My.Apostrophe.Tag'` is the name of the tag.
+
+ > [!NOTE]
+ > `tags.<tagName>`, `tags[tagName]`, and `tags[tag.with.dots]` are still acceptable ways of
+ > declaring a tags field. However, the preferred expressions are those listed above.
+- property aliases - for a list, see [Aliases](./definition-structure-alias.md).
+ > [!NOTE]
+ > In `field` expressions referring to array alias `[*]` each element in the array is evaluated
+ > individually with logical `and` between elements. For more information, see
+ > [Referencing array resource properties](../how-to/author-policies-for-arrays.md#referencing-array-resource-properties).
+
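+Using the `tags['<tagName>']` bracket syntax described in the list above, a condition that checks whether a tag with punctuation in its name is missing might be sketched as follows; the tag name `Acct.CostCenter` is only an example:
+
+```json
+{
+  "field": "tags['Acct.CostCenter']",
+  "exists": "false"
+}
+```
+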
+Conditions that use `field` expressions can replace the legacy policy definition syntax `"source": "action"`, which used to work for write operations. For example, this legacy syntax is no longer supported:
+
+```json
+{
+ "source": "action",
+ "like": "Microsoft.Network/publicIPAddresses/*"
+}
+```
+
+But the desired behavior can be achieved using `field` logic:
+```json
+{
+ "field": "type",
+ "equals": "Microsoft.Network/publicIPAddresses"
+}
+```
+
+### Use tags with parameters
+
+A parameter value can be passed to a tag field. Passing a parameter to a tag field increases the flexibility of the policy definition during policy assignment.
+
+In the following example, `concat` is used to build a `tags` field lookup for the tag named by the `tagName` parameter. If that tag doesn't exist, the `modify` effect adds the tag using the value of the same-named tag set on the audited resource's parent resource group, retrieved with the `resourcegroup()` lookup function.
+
+```json
+{
+ "if": {
+ "field": "[concat('tags[', parameters('tagName'), ']')]",
+ "exists": "false"
+ },
+ "then": {
+ "effect": "modify",
+ "details": {
+ "operations": [
+ {
+ "operation": "add",
+ "field": "[concat('tags[', parameters('tagName'), ']')]",
+ "value": "[resourcegroup().tags[parameters('tagName')]]"
+ }
+ ],
+ "roleDefinitionIds": [
+ "/providers/microsoft.authorization/roleDefinitions/4a9ae827-6dc8-4573-8ac7-8239d42aa03f"
+ ]
+ }
+ }
+}
+```
+
+## Value
+
+Conditions that evaluate whether a value meets certain criteria can be formed using a `value` expression. Values can be literals, the values of [parameters](./definition-structure-parameters.md), or the returned values of any [supported template functions](#policy-functions).
+
+> [!WARNING]
+> If the result of a _template function_ is an error, policy evaluation fails. A failed evaluation
+> is an implicit `deny`. For more information, see
+> [avoiding template failures](#avoiding-template-failures). Use
+> [enforcementMode](./assignment-structure.md#enforcement-mode) of `doNotEnforce` to prevent
+> impact of a failed evaluation on new or updated resources while testing and validating a new
+> policy definition.
+
+### Value examples
+
+This policy rule example uses `value` to compare the `name` property returned by the `resourceGroup()` function to a `like` condition of `*netrg`. The rule denies any resource that isn't of the `Microsoft.Network/*` `type` in any resource group whose name ends in `netrg`.
+
+```json
+{
+ "if": {
+ "allOf": [
+ {
+ "value": "[resourceGroup().name]",
+ "like": "*netrg"
+ },
+ {
+ "field": "type",
+ "notLike": "Microsoft.Network/*"
+ }
+ ]
+ },
+ "then": {
+ "effect": "deny"
+ }
+}
+```
+
+This policy rule example uses `value` to check if the result of multiple nested functions `equals` `true`. The rule denies any resource that doesn't have at least three tags.
+
+```json
+{
+ "mode": "indexed",
+ "policyRule": {
+ "if": {
+ "value": "[less(length(field('tags')), 3)]",
+ "equals": "true"
+ },
+ "then": {
+ "effect": "deny"
+ }
+ }
+}
+```
+
+### Avoiding template failures
+
+The use of _template functions_ in `value` allows for many complex nested functions. If the result of a _template function_ is an error, policy evaluation fails. A failed evaluation is an implicit `deny`. An example of a `value` that fails in certain scenarios:
+
+```json
+{
+ "policyRule": {
+ "if": {
+ "value": "[substring(field('name'), 0, 3)]",
+ "equals": "abc"
+ },
+ "then": {
+ "effect": "audit"
+ }
+ }
+}
+```
+
+The example policy rule above uses [substring()](../../../azure-resource-manager/templates/template-functions-string.md#substring) to compare the first three characters of `name` to `abc`. If `name` is shorter than three characters, the `substring()` function results in an error. This error causes the policy to become a `deny` effect.
+
+Instead, use the [if()](../../../azure-resource-manager/templates/template-functions-logical.md#if) function to check if the first three characters of `name` equal `abc` without allowing a `name` shorter than three characters to cause an error:
+
+```json
+{
+ "policyRule": {
+ "if": {
+ "value": "[if(greaterOrEquals(length(field('name')), 3), substring(field('name'), 0, 3), 'not starting with abc')]",
+ "equals": "abc"
+ },
+ "then": {
+ "effect": "audit"
+ }
+ }
+}
+```
+
+With the revised policy rule, `if()` checks the length of `name` before trying to get a `substring()` on a value with fewer than three characters. If `name` is too short, the value "not starting with abc" is returned instead and compared to `abc`. A resource with a short name that doesn't begin with `abc` still fails the policy rule, but no longer causes an error during evaluation.
+
+## Count
+
+Conditions that count how many members of an array meet certain criteria can be formed using a `count` expression. Common scenarios are checking whether 'at least one of', 'exactly one of', 'all of', or 'none of' the array members satisfy a condition. The `count` expression evaluates each array member against a condition expression and sums the _true_ results; that sum is then compared to the expression's operator.
+
+### Field count
+
+Count how many members of an array in the request payload satisfy a condition expression. The structure of `field count` expressions is:
+
+```json
+{
+ "count": {
+ "field": "<[*] alias>",
+ "where": {
+ /* condition expression */
+ }
+ },
+ "<condition>": "<compare the count of true condition expression array members to this value>"
+}
+```
+
+The following properties are used with `field count`:
+
+- `count.field` (required): Contains the path to the array and must be an array alias.
+- `count.where` (optional): The condition expression to individually evaluate for each [array alias](./definition-structure-alias.md#understanding-the-array-alias) array member of `count.field`. If this property isn't provided, all array members with the path of 'field' are evaluated to _true_. Any [condition](#conditions) can be used inside this property. [Logical operators](#logical-operators) can be used inside this property to create complex evaluation requirements.
+- `condition` (required): The value is compared to the number of items that met the
+ `count.where` condition expression. A numeric [condition](#conditions) should be used.
+
+For more details on how to work with array properties in Azure Policy, including detailed explanation on how the `field count` expression is evaluated, see [Referencing array resource properties](../how-to/author-policies-for-arrays.md#referencing-array-resource-properties).
+
+### Value count
+
+Count how many members of an array satisfy a condition. The array can be a literal array or a [reference to array parameter](./definition-structure-parameters.md#using-a-parameter-value). The structure of `value count` expressions is:
+
+```json
+{
+ "count": {
+ "value": "<literal array | array parameter reference>",
+ "name": "<index name>",
+ "where": {
+ /* condition expression */
+ }
+ },
+ "<condition>": "<compare the count of true condition expression array members to this value>"
+}
+```
+
+The following properties are used with `value count`:
+
+- `count.value` (required): The array to evaluate.
+- `count.name` (required): The index name, composed of English letters and digits. Defines a name for the value of the array member evaluated in the current iteration. The name is used for referencing the current value inside the `count.where` condition. Optional when the `count` expression isn't in a child of another `count` expression. When not provided, the index name is implicitly set to `"default"`.
+- `count.where` (optional): The condition expression to individually evaluate for each array member of `count.value`. If this property isn't provided, all array members are evaluated to _true_. Any [condition](#conditions) can be used inside this property. [Logical operators](#logical-operators) can be used inside this property to create complex evaluation requirements. The value of the currently enumerated array member can be accessed by calling the [current](#the-current-function) function.
+- `condition` (required): The value is compared to the number of items that met the `count.where` condition expression. A numeric [condition](#conditions) should be used.
+
+### The current function
+
+The `current()` function is only available inside the `count.where` condition. It returns the value of the array member that is currently enumerated by the `count` expression evaluation.
+
+**Value count usage**
+
+- `current(<index name defined in count.name>)`. For example: `current('arrayMember')`.
+- `current()`. Allowed only when the `value count` expression isn't a child of another `count` expression. Returns the same value as above.
+
+If the value returned by the call is an object, property accessors are supported. For example: `current('objectArrayMember').property`.
+
+**Field count usage**
+
+- `current(<the array alias defined in count.field>)`. For example,
+ `current('Microsoft.Test/resource/enumeratedArray[*]')`.
+- `current()`. Allowed only when the `field count` expression isn't a child of another `count` expression. Returns the same value as above.
+- `current(<alias of a property of the array member>)`. For example,
+ `current('Microsoft.Test/resource/enumeratedArray[*].property')`.
+
+### Field count examples
+
+Example 1: Check if an array is empty
+
+```json
+{
+ "count": {
+ "field": "Microsoft.Network/networkSecurityGroups/securityRules[*]"
+ },
+ "equals": 0
+}
+```
+
+Example 2: Check for only one array member to meet the condition expression
+
+```json
+{
+ "count": {
+ "field": "Microsoft.Network/networkSecurityGroups/securityRules[*]",
+ "where": {
+ "field": "Microsoft.Network/networkSecurityGroups/securityRules[*].description",
+ "equals": "My unique description"
+ }
+ },
+ "equals": 1
+}
+```
+
+Example 3: Check for at least one array member to meet the condition expression
+
+```json
+{
+ "count": {
+ "field": "Microsoft.Network/networkSecurityGroups/securityRules[*]",
+ "where": {
+ "field": "Microsoft.Network/networkSecurityGroups/securityRules[*].description",
+ "equals": "My common description"
+ }
+ },
+ "greaterOrEquals": 1
+}
+```
+
+Example 4: Check that all object array members meet the condition expression
+
+```json
+{
+ "count": {
+ "field": "Microsoft.Network/networkSecurityGroups/securityRules[*]",
+ "where": {
+ "field": "Microsoft.Network/networkSecurityGroups/securityRules[*].description",
+ "equals": "description"
+ }
+ },
+ "equals": "[length(field('Microsoft.Network/networkSecurityGroups/securityRules[*]'))]"
+}
+```
+
+Example 5: Check that at least one array member matches multiple properties in the condition expression
+
+```json
+{
+ "count": {
+ "field": "Microsoft.Network/networkSecurityGroups/securityRules[*]",
+ "where": {
+ "allOf": [
+ {
+ "field": "Microsoft.Network/networkSecurityGroups/securityRules[*].direction",
+ "equals": "Inbound"
+ },
+ {
+ "field": "Microsoft.Network/networkSecurityGroups/securityRules[*].access",
+ "equals": "Allow"
+ },
+ {
+ "field": "Microsoft.Network/networkSecurityGroups/securityRules[*].destinationPortRange",
+ "equals": "3389"
+ }
+ ]
+ }
+ },
+ "greater": 0
+}
+```
+
+Example 6: Use `current()` function inside the `where` conditions to access the value of the currently enumerated array member in a template function. This condition checks whether a virtual network contains an address prefix that isn't under the 10.0.0.0/24 CIDR range.
+
+```json
+{
+ "count": {
+ "field": "Microsoft.Network/virtualNetworks/addressSpace.addressPrefixes[*]",
+ "where": {
+ "value": "[ipRangeContains('10.0.0.0/24', current('Microsoft.Network/virtualNetworks/addressSpace.addressPrefixes[*]'))]",
+ "equals": false
+ }
+ },
+ "greater": 0
+}
+```
+
+Example 7: Use `field()` function inside the `where` conditions to access the value of the currently enumerated array member. This condition checks whether a virtual network contains an address prefix that isn't under the 10.0.0.0/24 CIDR range.
+
+```json
+{
+ "count": {
+ "field": "Microsoft.Network/virtualNetworks/addressSpace.addressPrefixes[*]",
+ "where": {
+ "value": "[ipRangeContains('10.0.0.0/24', first(field(('Microsoft.Network/virtualNetworks/addressSpace.addressPrefixes[*]')))]",
+ "equals": false
+ }
+ },
+ "greater": 0
+}
+```
+
+### Value count examples
+
+Example 1: Check if resource name matches any of the given name patterns.
+
+```json
+{
+ "count": {
+ "value": [
+ "prefix1_*",
+ "prefix2_*"
+ ],
+ "name": "pattern",
+ "where": {
+ "field": "name",
+ "like": "[current('pattern')]"
+ }
+ },
+ "greater": 0
+}
+```
+
+Example 2: Check if resource name matches any of the given name patterns. The `current()` function doesn't specify an index name. The outcome is the same as the previous example.
+
+```json
+{
+ "count": {
+ "value": [
+ "prefix1_*",
+ "prefix2_*"
+ ],
+ "where": {
+ "field": "name",
+ "like": "[current()]"
+ }
+ },
+ "greater": 0
+}
+```
+
+Example 3: Check if resource name matches any of the given name patterns provided by an array parameter.
+
+```json
+{
+ "count": {
+ "value": "[parameters('namePatterns')]",
+ "name": "pattern",
+ "where": {
+ "field": "name",
+ "like": "[current('pattern')]"
+ }
+ },
+ "greater": 0
+}
+```
+
+Example 4: Check if any of the virtual network address prefixes isn't under the list of approved prefixes.
+
+```json
+{
+ "count": {
+ "field": "Microsoft.Network/virtualNetworks/addressSpace.addressPrefixes[*]",
+ "where": {
+ "count": {
+ "value": "[parameters('approvedPrefixes')]",
+ "name": "approvedPrefix",
+ "where": {
+ "value": "[ipRangeContains(current('approvedPrefix'), current('Microsoft.Network/virtualNetworks/addressSpace.addressPrefixes[*]'))]",
+ "equals": true
+        }
+ },
+ "equals": 0
+ }
+ },
+ "greater": 0
+}
+```
+
+Example 5: Check that all the reserved NSG rules are defined in an NSG. The properties of the reserved NSG rules are defined in an array parameter containing objects.
+
+Parameter value:
+
+```json
+[
+ {
+ "priority": 101,
+ "access": "deny",
+ "direction": "inbound",
+ "destinationPortRange": 22
+ },
+ {
+ "priority": 102,
+ "access": "deny",
+ "direction": "inbound",
+ "destinationPortRange": 3389
+ }
+]
+```
+
+Policy:
+
+```json
+{
+ "count": {
+ "value": "[parameters('reservedNsgRules')]",
+ "name": "reservedNsgRule",
+ "where": {
+ "count": {
+ "field": "Microsoft.Network/networkSecurityGroups/securityRules[*]",
+ "where": {
+ "allOf": [
+ {
+ "field": "Microsoft.Network/networkSecurityGroups/securityRules[*].priority",
+ "equals": "[current('reservedNsgRule').priority]"
+ },
+ {
+ "field": "Microsoft.Network/networkSecurityGroups/securityRules[*].access",
+ "equals": "[current('reservedNsgRule').access]"
+ },
+ {
+ "field": "Microsoft.Network/networkSecurityGroups/securityRules[*].direction",
+ "equals": "[current('reservedNsgRule').direction]"
+ },
+ {
+ "field": "Microsoft.Network/networkSecurityGroups/securityRules[*].destinationPortRange",
+ "equals": "[current('reservedNsgRule').destinationPortRange]"
+ }
+ ]
+ }
+ },
+ "equals": 1
+ }
+ },
+ "equals": "[length(parameters('reservedNsgRules'))]"
+}
+```
+
+## Policy functions
+
+Functions can be used to introduce additional logic into a policy rule. They are resolved within the policy rule of a policy definition and within [parameter values assigned to policy definitions in an initiative](initiative-definition-structure.md#passing-a-parameter-value-to-a-policy-definition).
+
+All [Resource Manager template functions](../../../azure-resource-manager/templates/template-functions.md) are available to use within a policy rule, except the following functions and user-defined functions:
+
+- `copyIndex()`
+- `dateTimeAdd()`
+- `dateTimeFromEpoch`
+- `dateTimeToEpoch`
+- `deployment()`
+- `environment()`
+- `extensionResourceId()`
+- `lambda()`. For more information, go to [lambda](../../../azure-resource-manager/templates/template-functions-lambda.md)
+- `listAccountSas()`
+- `listKeys()`
+- `listSecrets()`
+- `list*`
+- `managementGroup()`
+- `newGuid()`
+- `pickZones()`
+- `providers()`
+- `reference()`
+- `resourceId()`
+- `subscriptionResourceId()`
+- `tenantResourceId()`
+- `tenant()`
+- `variables()`
+
+> [!NOTE]
+> These functions are still available within the `details.deployment.properties.template` portion of
+> the template deployment in a `deployIfNotExists` policy definition.
+
+The following function is available to use in a policy rule, but differs from use in an Azure Resource Manager template (ARM template):
+
+- `utcNow()` - Unlike in an ARM template, this function can be used outside _defaultValue_.
+ - Returns a string that is set to the current date and time in Universal ISO 8601 DateTime format `yyyy-MM-ddTHH:mm:ss.fffffffZ`.
+
+The following functions are only available in policy rules:
+
+- `addDays(dateTime, numberOfDaysToAdd)`
+ - `dateTime`: [Required] string - String in the Universal ISO 8601 DateTime format 'yyyy-MM-ddTHH:mm:ss.FFFFFFFZ'
+ - `numberOfDaysToAdd`: [Required] integer - Number of days to add
+
+- `field(fieldName)`
+ - `fieldName`: [Required] string - Name of the [field](./definition-structure-policy-rule.md#fields) to retrieve
+ - Returns the value of that field from the resource that is being evaluated by the If condition.
+ - `field` is primarily used with `auditIfNotExists` and `deployIfNotExists` to reference fields on the resource that are being evaluated. An example of this use can be seen in the [DeployIfNotExists example](effects.md#deployifnotexists-example).
+
+- `requestContext().apiVersion`
+ - Returns the API version of the request that triggered policy evaluation (example: `2021-09-01`). This value is the API version that was used in the PUT/PATCH request for evaluations on resource creation/update. The latest API version is always used during compliance evaluation on existing resources.
+
+- `policy()`
+ - Returns the following information about the policy that is being evaluated. Properties can be accessed from the returned object (example: `[policy().assignmentId]`).
+
+ ```json
+ {
+ "assignmentId": "/subscriptions/11111111-1111-1111-1111-111111111111/providers/Microsoft.Authorization/policyAssignments/myAssignment",
+ "definitionId": "/providers/Microsoft.Authorization/policyDefinitions/34c877ad-507e-4c82-993e-3452a6e0ad3c",
+ "setDefinitionId": "/providers/Microsoft.Authorization/policySetDefinitions/42a694ed-f65e-42b2-aa9e-8052e9740a92",
+ "definitionReferenceId": "StorageAccountNetworkACLs"
+ }
+ ```
+
+- `ipRangeContains(range, targetRange)`
+ - `range`: [Required] string - String specifying a range of IP addresses to check if the _targetRange_ is within.
+ - `targetRange`: [Required] string - String specifying a range of IP addresses to validate as included within the _range_.
+ - Returns a _boolean_ for whether the _range_ IP address range contains the _targetRange_ IP address range. Empty ranges, or mixing between IP families isn't allowed and results in evaluation failure.
+
+ Supported formats:
+ - Single IP address (examples: `10.0.0.0`, `2001:0DB8::3:FFFE`)
+ - CIDR range (examples: `10.0.0.0/24`, `2001:0DB8::/110`)
+ - Range defined by start and end IP addresses (examples: `192.168.0.1-192.168.0.9`, `2001:0DB8::-2001:0DB8::3:FFFF`)
+
+- `current(indexName)`
+ - Special function that may only be used inside [count expressions](./definition-structure-policy-rule.md#count).
+
+### Policy function example
+
+This policy rule example uses the `resourceGroup` resource function to get the `name` property, combined with the `concat` array and object function to build a `like` condition that enforces the resource name to start with the resource group name.
+
+```json
+{
+ "if": {
+ "not": {
+ "field": "name",
+ "like": "[concat(resourceGroup().name,'*')]"
+ }
+ },
+ "then": {
+ "effect": "deny"
+ }
+}
+```
+
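+As a second sketch, `utcNow()` and `addDays()` can be combined in a condition value. This example assumes a hypothetical `reviewDate` tag that stores an ISO 8601 date in the same `yyyy-MM-ddTHH:mm:ss.fffffffZ` format, so the `less` comparison orders the values as intended. It audits any tagged resource whose review date falls earlier than 90 days from now:
+
+```json
+{
+  "if": {
+    "allOf": [
+      {
+        "field": "tags['reviewDate']",
+        "exists": "true"
+      },
+      {
+        "field": "tags['reviewDate']",
+        "less": "[addDays(utcNow(), 90)]"
+      }
+    ]
+  },
+  "then": {
+    "effect": "audit"
+  }
+}
+```
+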
+## Policy rule limits
+
+### Limits enforced during authoring
+
+Limits to the structure of policy rules are enforced during the authoring or assignment of a policy. Attempts to create or assign policy definitions that exceed these limits fail.
+
+| Limit | Value | Additional details |
+|:|:|:|
+| Condition expressions in the `if` condition | 4096 | |
+| Condition expressions in the `then` block | 128 | Applies to the `existenceCondition` of `auditIfNotExists` and `deployIfNotExists` policies |
+| Policy functions per policy rule | 2048 | |
+| Policy function number of parameters | 128 | Example: `[function('parameter1', 'parameter2', ...)]` |
+| Nested policy functions depth | 64 | Example: `[function(nested1(nested2(...)))]` |
+| Policy functions expression string length | 81920 | Example: the length of `"[function(....)]"` |
+| `Field count` expressions per array | 5 | |
+| `Value count` expressions per policy rule | 10 | |
+| `Value count` expression iteration count | 100 | For nested `Value count` expressions, this also includes the iteration count of the parent expression |
+
+### Limits enforced during evaluation
+
+The following limits apply to the size of objects processed by policy functions during policy evaluation. These limits can't always be enforced during authoring because they depend on the evaluated content. For example:
+
+```json
+{
+ "field": "name",
+ "equals": "[concat(field('stringPropertyA'), field('stringPropertyB'))]"
+}
+```
+
+The length of the string created by the `concat()` function depends on the value of properties in the evaluated resource.
+
+| Limit | Value | Example |
+|:|:|:|
+| Length of string returned by a function | 131072 | `[concat(field('longString1'), field('longString2'))]`|
+| Depth of complex objects provided as a parameter to, or returned by a function | 128 | `[union(field('largeObject1'), field('largeObject2'))]` |
+| Number of nodes of complex objects provided as a parameter to, or returned by a function | 32768 | `[concat(field('largeArray1'), field('largeArray2'))]` |
+
+> [!WARNING]
+> Policies that exceed these limits during evaluation effectively become a `deny` policy and can block incoming requests.
+> When writing policies with complex functions, be mindful of these limits and test your policies against resources that have the potential to exceed them.
+
+## Next steps
+
+- For more information about policy definition structure, go to [basics](./definition-structure-basics.md), [parameters](./definition-structure-parameters.md), and [alias](./definition-structure-alias.md).
+- For initiatives, go to [initiative definition structure](./initiative-definition-structure.md).
+- Review examples at [Azure Policy samples](../samples/index.md).
+- Review [Understanding policy effects](effects.md).
+- Understand how to [programmatically create policies](../how-to/programmatically-create.md).
+- Learn how to [get compliance data](../how-to/get-compliance-data.md).
+- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
+- Review what a management group is with [Organize your resources with Azure management groups](../../management-groups/overview.md).
governance Definition Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/definition-structure.md
- Title: Details of the policy definition structure
-description: Describes how policy definitions are used to establish conventions for Azure resources in your organization.
Previously updated : 03/21/2024---
-# Azure Policy definition structure
-
-Azure Policy establishes conventions for resources. Policy definitions describe resource compliance
-[conditions](#conditions) and the effect to take if a condition is met. A condition compares a
-resource property [field](#fields) or a [value](#value) to a required value. Resource property
-fields are accessed by using [aliases](#aliases). When a resource property field is an array, a
-special [array alias](#understanding-the--alias) can be used to select values from all array members
-and apply a condition to each one. Learn more about [conditions](#conditions).
-
-By defining conventions, you can control costs and more easily manage your resources. For example,
-you can specify that only certain types of virtual machines are allowed. Or, you can require that
-resources have a particular tag. Policy assignments are inherited by child resources. If a policy
-assignment is applied to a resource group, it's applicable to all the resources in that resource
-group.
-
-The policy definition _policyRule_ schema is found here:
-[https://schema.management.azure.com/schemas/2020-10-01/policyDefinition.json](https://schema.management.azure.com/schemas/2020-10-01/policyDefinition.json)
-
-You use JSON to create a policy definition. The policy definition contains elements for:
-
-- display name
-- description
-- mode
-- metadata
-- parameters
-- policy rule
- - logical evaluation
- - effect
-
-For example, the following JSON shows a policy that limits where resources are deployed:
-
-```json
-{
- "properties": {
- "displayName": "Allowed locations",
- "description": "This policy enables you to restrict the locations your organization can specify when deploying resources.",
- "mode": "Indexed",
- "metadata": {
- "version": "1.0.0",
- "category": "Locations"
- },
- "parameters": {
- "allowedLocations": {
- "type": "array",
- "metadata": {
- "description": "The list of locations that can be specified when deploying resources",
- "strongType": "location",
- "displayName": "Allowed locations"
- },
- "defaultValue": [ "westus2" ]
- }
- },
- "policyRule": {
- "if": {
- "not": {
- "field": "location",
- "in": "[parameters('allowedLocations')]"
- }
- },
- "then": {
- "effect": "deny"
- }
- }
- }
-}
-```
-
-Azure Policy built-ins and patterns are at [Azure Policy samples](../samples/index.md).
-
-## Display name and description
-
-You use `displayName` and `description` to identify the policy definition and provide context
-for when it's used. `displayName` has a maximum length of _128_ characters and `description`
-a maximum length of _512_ characters.
-
-> [!NOTE]
-> During the creation or updating of a policy definition, `id`, `type`, and `name` are defined
-> by properties external to the JSON and aren't necessary in the JSON file. Fetching the policy
-> definition via SDK returns the `id`, `type`, and `name` properties as part of the JSON, but
-> each are read-only information related to the policy definition.
-
-## Policy type
-
-While the `policyType` property can't be set, there are three values that are returned by SDK and
-visible in the portal:
-- `Builtin`: These policy definitions are provided and maintained by Microsoft.
-- `Custom`: All policy definitions created by customers have this value.
-- `Static`: Indicates a [Regulatory Compliance](./regulatory-compliance.md) policy definition with
- Microsoft **Ownership**. The compliance results for these policy definitions are the results of
- third-party audits on Microsoft infrastructure. In the Azure portal, this value is sometimes
- displayed as **Microsoft managed**. For more information, see
- [Shared responsibility in the cloud](../../../security/fundamentals/shared-responsibility.md).
-
-## Mode
-
-**Mode** is configured depending on if the policy is targeting an Azure Resource Manager property or
-a Resource Provider property.
-
-### Resource Manager modes
-
-The **mode** determines which resource types are evaluated for a policy definition. The supported
-modes are:
-- `all`: evaluate resource groups, subscriptions, and all resource types
-- `indexed`: only evaluate resource types that support tags and location
-
-For example, resource `Microsoft.Network/routeTables` supports tags and location and is evaluated in
-both modes. However, resource `Microsoft.Network/routeTables/routes` can't be tagged and isn't
-evaluated in `Indexed` mode.
-
-We recommend that you set **mode** to `all` in most cases. All policy definitions created through
-the portal use the `all` mode. If you use PowerShell or Azure CLI, you can specify the **mode**
-parameter manually. If the policy definition doesn't include a **mode** value, it defaults to `all`
-in Azure PowerShell and to `null` in Azure CLI. A `null` mode is the same as using `indexed` to
-support backward compatibility.
-
-`indexed` should be used when creating policies that enforce tags or locations. While not required,
-it prevents resources that don't support tags and locations from showing up as non-compliant in the
-compliance results. The exception is **resource groups** and **subscriptions**. Policy definitions
-that enforce location or tags on a resource group or subscription should set **mode** to `all` and
-specifically target the `Microsoft.Resources/subscriptions/resourceGroups` or
-`Microsoft.Resources/subscriptions` type. For an example, see
-[Pattern: Tags - Sample #1](../samples/pattern-tags.md). For a list of resources that support tags,
-see [Tag support for Azure resources](../../../azure-resource-manager/management/tag-support.md).
-
-### Resource Provider modes
-
-The following Resource Provider modes are fully supported:
--- `Microsoft.Kubernetes.Data` for managing Kubernetes clusters and components such as pods, containers, and ingresses. Supported for Azure Kubernetes Service clusters and [Azure Arc-enabled Kubernetes clusters](../../../aks/intro-kubernetes.md). Definitions
- using this Resource Provider mode use effects _audit_, _deny_, and _disabled_.
-- `Microsoft.KeyVault.Data` for managing vaults and certificates in
- [Azure Key Vault](../../../key-vault/general/overview.md). For more information on these policy
- definitions, see
- [Integrate Azure Key Vault with Azure Policy](../../../key-vault/general/azure-policy.md).
-- `Microsoft.Network.Data` for managing [Azure Virtual Network Manager](../../../virtual-network-manager/overview.md) custom membership policies using Azure Policy.-
-The following Resource Provider modes are currently supported as a [preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/):
-- `Microsoft.ManagedHSM.Data` for managing [Managed HSM](../../../key-vault/managed-hsm/azure-policy.md) keys using Azure Policy.
-- `Microsoft.DataFactory.Data` for using Azure Policy to deny [Azure Data Factory](../../../data-factory/introduction.md) outbound traffic domain names not specified in an allow list. This RP mode is enforcement only and does not report compliance in public preview.
-- `Microsoft.MachineLearningServices.v2.Data` for managing [Azure Machine Learning](../../../machine-learning/overview-what-is-azure-machine-learning.md) model deployments. This RP mode reports compliance for newly created and updated components. During public preview, compliance records remain for 24 hours. Model deployments that exist before these policy definitions are assigned will not report compliance.
-
-> [!NOTE]
->Unless explicitly stated, Resource Provider modes only support built-in policy definitions, and exemptions are not supported at the component-level.
-
-## Metadata
-
-The optional `metadata` property stores information about the policy definition. Customers can
-define any properties and values useful to their organization in `metadata`. However, there are some
-_common_ properties used by Azure Policy and in built-ins. Each `metadata` property has a limit of
-1024 characters.
-
-### Common metadata properties
--- `version` (string): Tracks details about the version of the contents of a policy definition.-- `category` (string): Determines under which category in the Azure portal the policy definition is
- displayed.
-- `preview` (boolean): True or false flag for if the policy definition is _preview_.-- `deprecated` (boolean): True or false flag for if the policy definition has been marked as
- _deprecated_.
-- `portalReview` (string): Determines whether parameters should be reviewed in the portal, regardless of the required input.-
-> [!NOTE]
-> The Azure Policy service uses `version`, `preview`, and `deprecated` properties to convey level of
-> change to a built-in policy definition or initiative and state. The format of `version` is:
-> `{Major}.{Minor}.{Patch}`. Specific states, such as _deprecated_ or _preview_, are appended to the
-> `version` property or in another property as a **boolean**. For more information about the way
-> Azure Policy versions built-ins, see
-> [Built-in versioning](https://github.com/Azure/azure-policy/blob/master/built-in-policies/README.md).
-> To learn more about what it means for a policy to be _deprecated_ or in _preview_, see [Preview and deprecated policies](https://github.com/Azure/azure-policy/blob/master/built-in-policies/README.md#preview-and-deprecated-policies).
-
-## Parameters
-
-Parameters help simplify your policy management by reducing the number of policy definitions. Think
-of parameters like the fields on a form - `name`, `address`, `city`, `state`. These parameters
-always stay the same, however their values change based on the individual filling out the form.
-Parameters work the same way when building policies. By including parameters in a policy definition,
-you can reuse that policy for different scenarios by using different values.
-
-### Adding or removing parameters
-
-Parameters may be added to an existing and assigned definition. The new parameter must include the
-**defaultValue** property. This prevents existing assignments of the policy or initiative from
-indirectly being made invalid.
-
-Parameters can't be removed from a policy definition because there may be an assignment that sets the parameter value, and that reference would become broken. Some built-in policy definitions have deprecated parameters using metadata `"deprecated": true`, which hides the parameter when assigning the definition in Azure Portal. While this is not supported for custom policy definitions, another option is to duplicate and create a new custom policy definition without the parameter.
-
-### Parameter properties
-
-A parameter has the following properties that are used in the policy definition:
--- `name`: The name of your parameter. Used by the `parameters` deployment function within the
- policy rule. For more information, see [using a parameter value](#using-a-parameter-value).
-- `type`: Determines if the parameter is a **string**, **array**, **object**, **boolean**,
- **integer**, **float**, or **datetime**.
-- `metadata`: Defines subproperties primarily used by the Azure portal to display user-friendly
- information:
- - `description`: The explanation of what the parameter is used for. Can be used to provide
- examples of acceptable values.
- - `displayName`: The friendly name shown in the portal for the parameter.
- - `strongType`: (Optional) Used when assigning the policy definition through the portal. Provides
- a context aware list. For more information, see [strongType](#strongtype).
- - `assignPermissions`: (Optional) Set as _true_ to have Azure portal create role assignments
- during policy assignment. This property is useful in case you wish to assign permissions outside
- the assignment scope. There's one role assignment per role definition in the policy (or per role
- definition in all of the policies in the initiative). The parameter value must be a valid
- resource or scope.
- - `deprecated`: A boolean flag to indicate whether a parameter is deprecated in a built-in definition.
-- `defaultValue`: (Optional) Sets the value of the parameter in an assignment if no value is given. Required when updating an existing policy definition that is assigned. For oject-type parameters, the value must match the appropriate schema.-- `allowedValues`: (Optional) Provides an array of values that the parameter accepts during
- assignment.
- - Case sensitivity: Allowed value comparisons are case-sensitive when assigning a policy, meaning that the selected parameter values in the assignment must match the casing of values in the `allowedValues` array in the definition. However, once values are selected for the assignment, evaluation of string comparisons may be case-insensitive depending on the [condition](#conditions) used. For example, if the parameter specifies `Dev` as an allowed tag value in an assignment, and this value is compared to an input string using the `equals` condition, then Azure Policy would later evaluate a tag value of `dev` as a match even though it is lowercase because `notEquals ` is case insensitive.
- - For object-type parameters, the values must match the appropriate schema.
-- `schema`: (Optional) Provides validation of parameter inputs during assignment using a self-defined JSON schema. This property is only supported for object-type parameters and follows the [Json.NET Schema](https://www.newtonsoft.com/jsonschema) 2019-09 implementation. You can learn more about using schemas at https://json-schema.org/ and test draft schemas at https://www.jsonschemavalidator.net/.-
-### Sample Parameters
-
-#### Example 1
-
-As an example, you could define a policy definition to limit the locations where resources can be
-deployed. A parameter for that policy definition could be **allowedLocations**. This parameter would
-be used by each assignment of the policy definition to limit the accepted values. The use of
-**strongType** provides an enhanced experience when completing the assignment through the portal:
-
-```json
-"parameters": {
- "allowedLocations": {
- "type": "array",
- "metadata": {
- "description": "The list of allowed locations for resources.",
- "displayName": "Allowed locations",
- "strongType": "location"
- },
- "defaultValue": [ "westus2" ],
- "allowedValues": [
- "eastus2",
- "westus2",
- "westus"
- ]
- }
-}
-```
-
-A sample input for this array-type parameter (without strongType) at assignment time might be ["westus", "eastus2"].
-
-#### Example 2
-
-In a more advanced scenario, you could define a policy that requires Kubernetes cluster pods to use specified labels. A parameter for that policy definition could be **labelSelector**, which would be used by each assignment of the policy definition to specify Kubernetes resources in question based on label keys and values:
-
-```json
-"parameters": {
- "labelSelector": {
- "type": "Object",
- "metadata": {
- "displayName": "Kubernetes label selector",
- "description": "Label query to select Kubernetes resources for policy evaluation. An empty label selector matches all Kubernetes resources."
- },
- "defaultValue": {},
- "schema": {
- "description": "A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all resources.",
- "type": "object",
- "properties": {
- "matchLabels": {
- "description": "matchLabels is a map of {key,value} pairs.",
- "type": "object",
- "additionalProperties": {
- "type": "string"
- },
- "minProperties": 1
- },
- "matchExpressions": {
- "description": "matchExpressions is a list of values, a key, and an operator.",
- "type": "array",
- "items": {
- "type": "object",
- "properties": {
- "key": {
- "description": "key is the label key that the selector applies to.",
- "type": "string"
- },
- "operator": {
- "description": "operator represents a key's relationship to a set of values.",
- "type": "string",
- "enum": [
- "In",
- "NotIn",
- "Exists",
- "DoesNotExist"
- ]
- },
- "values": {
- "description": "values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty.",
- "type": "array",
- "items": {
- "type": "string"
- }
- }
- },
- "required": [
- "key",
- "operator"
- ],
- "additionalProperties": false
- },
- "minItems": 1
- }
- },
- "additionalProperties": false
- }
- },
-}
-```
-
-A sample input for this object-type parameter at assignment time would be in JSON format, validated by the specified schema, and might be:
-
-```json
-{
- "matchLabels": {
- "poolID": "abc123",
- "nodeGroup": "Group1",
- "region": "southcentralus"
- },
- "matchExpressions": [
- {
- "key": "name",
- "operator": "In",
- "values": ["payroll", "web"]
- },
- {
- "key": "environment",
- "operator": "NotIn",
- "values": ["dev"]
- }
- ]
-}
-```
-
-### Using a parameter value
-
-In the policy rule, you reference parameters with the following `parameters` function syntax:
-
-```json
-{
- "field": "location",
- "in": "[parameters('allowedLocations')]"
-}
-```
-
-This sample references the **allowedLocations** parameter that was demonstrated in [parameter
-properties](#parameter-properties).
-
-### strongType
-
-Within the `metadata` property, you can use **strongType** to provide a multiselect list of options
-within the Azure portal. **strongType** can be a supported _resource type_ or an allowed value. To
-determine whether a _resource type_ is valid for **strongType**, use
-[Get-AzResourceProvider](/powershell/module/az.resources/get-azresourceprovider). The format for a
-_resource type_ **strongType** is `<Resource Provider>/<Resource Type>`. For example,
-`Microsoft.Network/virtualNetworks/subnets`.
-
-Some _resource types_ not returned by **Get-AzResourceProvider** are supported. Those types are:
-
-- `Microsoft.RecoveryServices/vaults/backupPolicies`
-
-The non _resource type_ allowed values for **strongType** are:
-
-- `location`
-- `resourceTypes`
-- `storageSkus`
-- `vmSKUs`
-- `existingResourceGroups`
-
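-For illustration, a minimal sketch of a parameter that uses a _resource type_ **strongType** (the parameter name **allowedSubnets** is hypothetical):
-
-```json
-"parameters": {
-  "allowedSubnets": {
-    "type": "array",
-    "metadata": {
-      "displayName": "Allowed subnets",
-      "description": "The list of subnets that resources may be attached to.",
-      "strongType": "Microsoft.Network/virtualNetworks/subnets"
-    }
-  }
-}
-```
-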
-## Definition location
-
-When creating an initiative or policy, you must specify the definition location. The
-definition location must be a management group or a subscription. This location determines the scope
-to which the initiative or policy can be assigned. Resources must be direct members of, or children
-within the hierarchy of, the definition location to be targeted by the assignment.
-
-If the definition location is a:
-
-- **Subscription** - Only resources within that subscription can be assigned the policy definition.
-- **Management group** - Only resources within child management groups and child subscriptions can
- be assigned the policy definition. If you plan to apply the policy definition to several
- subscriptions, the location must be a management group that contains each subscription.
-
-For more information, see [Understand scope in Azure Policy](./scope.md#definition-location).
-
-## Policy rule
-
-The policy rule consists of **If** and **Then** blocks. In the **If** block, you define one or more
-conditions that specify when the policy is enforced. You can apply logical operators to these
-conditions to precisely define the scenario for a policy.
-
-In the **Then** block, you define the effect that happens when the **If** conditions are fulfilled.
-
-```json
-{
- "if": {
- <condition> | <logical operator>
- },
- "then": {
- "effect": "deny | audit | modify | denyAction | append | auditIfNotExists | deployIfNotExists | disabled"
- }
-}
-```
-
-### Logical operators
-
-Supported logical operators are:
-
-- `"not": {condition or operator}`
-- `"allOf": [{condition or operator},{condition or operator}]`
-- `"anyOf": [{condition or operator},{condition or operator}]`
-
-The **not** syntax inverts the result of the condition. The **allOf** syntax (similar to the logical
-**And** operation) requires all conditions to be true. The **anyOf** syntax (similar to the logical
-**Or** operation) requires one or more conditions to be true.
-
-You can nest logical operators. The following example shows a **not** operation that is nested
-within an **allOf** operation.
-
-```json
-"if": {
- "allOf": [{
- "not": {
- "field": "tags",
- "containsKey": "application"
- }
- },
- {
- "field": "type",
- "equals": "Microsoft.Storage/storageAccounts"
- }
- ]
-},
-```
-
-### Conditions
-
-A condition evaluates whether a value meets certain criteria. The supported conditions are:
-
-- `"equals": "stringValue"`
-- `"notEquals": "stringValue"`
-- `"like": "stringValue"`
-- `"notLike": "stringValue"`
-- `"match": "stringValue"`
-- `"matchInsensitively": "stringValue"`
-- `"notMatch": "stringValue"`
-- `"notMatchInsensitively": "stringValue"`
-- `"contains": "stringValue"`
-- `"notContains": "stringValue"`
-- `"in": ["stringValue1","stringValue2"]`
-- `"notIn": ["stringValue1","stringValue2"]`
-- `"containsKey": "keyName"`
-- `"notContainsKey": "keyName"`
-- `"less": "dateValue"` | `"less": "stringValue"` | `"less": intValue`
-- `"lessOrEquals": "dateValue"` | `"lessOrEquals": "stringValue"` | `"lessOrEquals": intValue`
-- `"greater": "dateValue"` | `"greater": "stringValue"` | `"greater": intValue`
-- `"greaterOrEquals": "dateValue"` | `"greaterOrEquals": "stringValue"` |
-  `"greaterOrEquals": intValue`
-- `"exists": "bool"`
-
-For **less**, **lessOrEquals**, **greater**, and **greaterOrEquals**, if the property type doesn't
-match the condition type, an error is thrown. String comparisons are made using
-`InvariantCultureIgnoreCase`.
-
-When using the **like** and **notLike** conditions, you provide a wildcard `*` in the value. The
-value shouldn't have more than one wildcard `*`.
-
-When using the **match** and **notMatch** conditions, provide `#` to match a digit, `?` for a
-letter, `.` to match any character, and any other character to match that actual character. While
-**match** and **notMatch** are case-sensitive, all other conditions that evaluate a _stringValue_
-are case-insensitive. Case-insensitive alternatives are available in **matchInsensitively** and
-**notMatchInsensitively**.
-
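-As a brief sketch (the name patterns are hypothetical), the first condition below is true when the resource name starts with `contoso`, and the second when the name is two letters followed by four digits:
-
-```json
-{
-  "field": "name",
-  "like": "contoso*"
-}
-```
-
-```json
-{
-  "field": "name",
-  "match": "??####"
-}
-```
-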
-### Fields
-
-Conditions that evaluate whether the values of properties in the resource request payload meet
-certain criteria can be formed using a **field** expression. The following fields are supported:
-
-- `name`
-- `fullName`
- - Returns the full name of the resource. The full name of a resource is the resource name
- prepended by any parent resource names (for example "myServer/myDatabase").
-- `kind`
-- `type`
-- `location`
- - Location fields are normalized to support various formats. For example, `East US 2` is
- considered equal to `eastus2`.
- - Use **global** for resources that are location agnostic.
-- `id`
- - Returns the resource ID of the resource that is being evaluated.
- - Example: `/subscriptions/06be863d-0996-4d56-be22-384767287aa2/resourceGroups/myRG/providers/Microsoft.KeyVault/vaults/myVault`
-- `identity.type`
- - Returns the type of
- [managed identity](../../../active-directory/managed-identities-azure-resources/overview.md)
- enabled on the resource.
-- `tags`
-- `tags['<tagName>']`
- - This bracket syntax supports tag names that have punctuation such as a hyphen, period, or space.
- - Where **\<tagName\>** is the name of the tag to validate the condition for.
- - Examples: `tags['Acct.CostCenter']` where **Acct.CostCenter** is the name of the tag.
-- `tags['''<tagName>''']`
- - This bracket syntax supports tag names that have apostrophes in it by escaping with double
- apostrophes.
- - Where **'\<tagName\>'** is the name of the tag to validate the condition for.
- - Example: `tags['''My.Apostrophe.Tag''']` where **'My.Apostrophe.Tag'** is the name of the tag.
-
- > [!NOTE]
- > `tags.<tagName>`, `tags[tagName]`, and `tags[tag.with.dots]` are still acceptable ways of
- > declaring a tags field. However, the preferred expressions are those listed above.
-- property aliases - for a list, see [Aliases](#aliases).
- > [!NOTE]
- > In **field** expressions referring to **\[\*\] alias**, each element in the array is evaluated
- > individually with logical **and** between elements. For more information, see
- > [Referencing array resource properties](../how-to/author-policies-for-arrays.md#referencing-array-resource-properties).
--
-Conditions that use `field` expressions can replace the legacy policy definition syntax `"source": "action"`, which used to work for write operations. For example, this is no longer supported:
-```json
-{
- "source": "action",
- "like": "Microsoft.Network/publicIPAddresses/*"
-}
-```
-
-But the desired behavior can be achieved using `field` logic:
-```json
-{
- "field": "type",
- "equals": "Microsoft.Network/publicIPAddresses"
-}
-```
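-
-As another brief sketch using the bracket syntax from the list above (the tag value `Finance` is illustrative), this condition evaluates the **Acct.CostCenter** tag:
-
-```json
-{
-  "field": "tags['Acct.CostCenter']",
-  "notEquals": "Finance"
-}
-```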
-
-#### Use tags with parameters
-
-A parameter value can be passed to a tag field. Passing a parameter to a tag field increases the
-flexibility of the policy definition during policy assignment.
-
-In the following example, `concat` is used to create a tags field lookup for the tag named by the
-**tagName** parameter. If that tag doesn't exist, the **modify** effect adds the tag using the
-value of the same-named tag set on the audited resource's parent resource group, retrieved with
-the `resourcegroup()` lookup function.
-
-```json
-{
- "if": {
- "field": "[concat('tags[', parameters('tagName'), ']')]",
- "exists": "false"
- },
- "then": {
- "effect": "modify",
- "details": {
- "operations": [{
- "operation": "add",
- "field": "[concat('tags[', parameters('tagName'), ']')]",
- "value": "[resourcegroup().tags[parameters('tagName')]]"
- }],
- "roleDefinitionIds": [
- "/providers/microsoft.authorization/roleDefinitions/4a9ae827-6dc8-4573-8ac7-8239d42aa03f"
- ]
- }
- }
-}
-```
-
-### Value
-
-Conditions that evaluate whether a value meets certain criteria can be formed using a **value**
-expression. Values can be literals, the values of [parameters](#parameters), or the returned values
-of any [supported template functions](#policy-functions).
-
-> [!WARNING]
-> If the result of a _template function_ is an error, policy evaluation fails. A failed evaluation
-> is an implicit **deny**. For more information, see
-> [avoiding template failures](#avoiding-template-failures). Use
-> [enforcementMode](./assignment-structure.md#enforcement-mode) of **DoNotEnforce** to prevent
-> impact of a failed evaluation on new or updated resources while testing and validating a new
-> policy definition.
-
-#### Value examples
-
-This policy rule example uses **value** to compare the result of the `resourceGroup()` function and
-the returned **name** property to a **like** condition of `*netrg`. The rule denies any resource not
-of the `Microsoft.Network/*` **type** in any resource group whose name ends in `*netrg`.
-
-```json
-{
- "if": {
- "allOf": [{
- "value": "[resourceGroup().name]",
- "like": "*netrg"
- },
- {
- "field": "type",
- "notLike": "Microsoft.Network/*"
- }
- ]
- },
- "then": {
- "effect": "deny"
- }
-}
-```
-
-This policy rule example uses **value** to check if the result of multiple nested functions
-**equals** `true`. The rule denies any resource that doesn't have at least three tags.
-
-```json
-{
- "mode": "indexed",
- "policyRule": {
- "if": {
- "value": "[less(length(field('tags')), 3)]",
- "equals": "true"
- },
- "then": {
- "effect": "deny"
- }
- }
-}
-```
-
-#### Avoiding template failures
-
-The use of _template functions_ in **value** allows for many complex nested functions. If the result
-of a _template function_ is an error, policy evaluation fails. A failed evaluation is an implicit
-**deny**. An example of a **value** that fails in certain scenarios:
-
-```json
-{
- "policyRule": {
- "if": {
- "value": "[substring(field('name'), 0, 3)]",
- "equals": "abc"
- },
- "then": {
- "effect": "audit"
- }
- }
-}
-```
-
-The example policy rule above uses
-[substring()](../../../azure-resource-manager/templates/template-functions-string.md#substring) to
-compare the first three characters of **name** to **abc**. If **name** is shorter than three
-characters, the `substring()` function results in an error. This error causes the policy to become a
-**deny** effect.
-
-Instead, use the [if()](../../../azure-resource-manager/templates/template-functions-logical.md#if)
-function to check if the first three characters of **name** equal **abc** without allowing a
-**name** shorter than three characters to cause an error:
-
-```json
-{
- "policyRule": {
- "if": {
- "value": "[if(greaterOrEquals(length(field('name')), 3), substring(field('name'), 0, 3), 'not starting with abc')]",
- "equals": "abc"
- },
- "then": {
- "effect": "audit"
- }
- }
-}
-```
-
-With the revised policy rule, `if()` checks the length of **name** before trying to get a
-`substring()` on a value with fewer than three characters. If **name** is too short, the value "not
-starting with abc" is returned instead and compared to **abc**. A resource with a short name that
-doesn't begin with **abc** still fails the policy rule, but no longer causes an error during
-evaluation.
-
-### Count
-
-Conditions that count how many members of an array meet certain criteria can be formed using a
-**count** expression. Common scenarios are checking whether 'at least one of', 'exactly one of',
-'all of', or 'none of' the array members satisfy a condition. **Count** evaluates each array member
-for a condition expression and sums the _true_ results, which is then compared to the expression
-operator.
-
-#### Field count
-
-Count how many members of an array in the request payload satisfy a condition expression. The
-structure of **field count** expressions is:
-
-```json
-{
- "count": {
- "field": "<[*] alias>",
- "where": {
- /* condition expression */
- }
- },
- "<condition>": "<compare the count of true condition expression array members to this value>"
-}
-```
-
-The following properties are used with **field count**:
-
-- **count.field** (required): Contains the path to the array and must be an array alias.
-- **count.where** (optional): The condition expression to individually evaluate for each [\[\*\]
- alias](#understanding-the--alias) array member of `count.field`. If this property isn't
- provided, all array members with the path of 'field' are evaluated to _true_. Any
- [condition](../concepts/definition-structure.md#conditions) can be used inside this property.
- [Logical operators](#logical-operators) can be used inside this property to create complex
- evaluation requirements.
-- **\<condition\>** (required): The value is compared to the number of items that met the
- **count.where** condition expression. A numeric
- [condition](../concepts/definition-structure.md#conditions) should be used.
-
-For more details on how to work with array properties in Azure Policy, including detailed
-explanation on how the **field count** expression is evaluated, see
-[Referencing array resource properties](../how-to/author-policies-for-arrays.md#referencing-array-resource-properties).
-
-#### Value count
-
-Count how many members of an array satisfy a condition. The array can be a literal array or a
-[reference to array parameter](#using-a-parameter-value). The structure of **value count**
-expressions is:
-
-```json
-{
- "count": {
- "value": "<literal array | array parameter reference>",
- "name": "<index name>",
- "where": {
- /* condition expression */
- }
- },
- "<condition>": "<compare the count of true condition expression array members to this value>"
-}
-```
-
-The following properties are used with **value count**:
-
-- **count.value** (required): The array to evaluate.
-- **count.name** (required): The index name, composed of English letters and digits. Defines a name
- for the value of the array member evaluated in the current iteration. The name is used for
- referencing the current value inside the `count.where` condition. Optional when the **count**
- expression isn't in a child of another **count** expression. When not provided, the index name is
- implicitly set to `"default"`.
-- **count.where** (optional): The condition expression to individually evaluate for each array
- member of `count.value`. If this property isn't provided, all array members are evaluated to
- _true_. Any [condition](../concepts/definition-structure.md#conditions) can be used inside this
- property. [Logical operators](#logical-operators) can be used inside this property to create
- complex evaluation requirements. The value of the currently enumerated array member can be
- accessed by calling the [current](#the-current-function) function.
-- **\<condition\>** (required): The value is compared to the number of items that met the
- `count.where` condition expression. A numeric
- [condition](../concepts/definition-structure.md#conditions) should be used.
-
-#### The current function
-
-The `current()` function is only available inside the `count.where` condition. It returns the value
-of the array member that is currently enumerated by the **count** expression evaluation.
-
-**Value count usage**
-
-- `current(<index name defined in count.name>)`. For example: `current('arrayMember')`.
-- `current()`. Allowed only when the **value count** expression isn't a child of another **count**
- expression. Returns the same value as above.
-
-If the value returned by the call is an object, property accessors are supported. For example:
-`current('objectArrayMember').property`.
-
-**Field count usage**
-
-- `current(<the array alias defined in count.field>)`. For example,
- `current('Microsoft.Test/resource/enumeratedArray[*]')`.
-- `current()`. Allowed only when the **field count** expression isn't a child of another **count**
- expression. Returns the same value as above.
-- `current(<alias of a property of the array member>)`. For example,
- `current('Microsoft.Test/resource/enumeratedArray[*].property')`.
-
-#### Field count examples
-
-Example 1: Check if an array is empty
-
-```json
-{
- "count": {
- "field": "Microsoft.Network/networkSecurityGroups/securityRules[*]"
- },
- "equals": 0
-}
-```
-
-Example 2: Check for only one array member to meet the condition expression
-
-```json
-{
- "count": {
- "field": "Microsoft.Network/networkSecurityGroups/securityRules[*]",
- "where": {
- "field": "Microsoft.Network/networkSecurityGroups/securityRules[*].description",
- "equals": "My unique description"
- }
- },
- "equals": 1
-}
-```
-
-Example 3: Check for at least one array member to meet the condition expression
-
-```json
-{
- "count": {
- "field": "Microsoft.Network/networkSecurityGroups/securityRules[*]",
- "where": {
- "field": "Microsoft.Network/networkSecurityGroups/securityRules[*].description",
- "equals": "My common description"
- }
- },
- "greaterOrEquals": 1
-}
-```
-
-Example 4: Check that all object array members meet the condition expression
-
-```json
-{
- "count": {
- "field": "Microsoft.Network/networkSecurityGroups/securityRules[*]",
- "where": {
- "field": "Microsoft.Network/networkSecurityGroups/securityRules[*].description",
- "equals": "description"
- }
- },
- "equals": "[length(field('Microsoft.Network/networkSecurityGroups/securityRules[*]'))]"
-}
-```
-
-Example 5: Check that at least one array member matches multiple properties in the condition
-expression
-
-```json
-{
- "count": {
- "field": "Microsoft.Network/networkSecurityGroups/securityRules[*]",
- "where": {
- "allOf": [
- {
- "field": "Microsoft.Network/networkSecurityGroups/securityRules[*].direction",
- "equals": "Inbound"
- },
- {
- "field": "Microsoft.Network/networkSecurityGroups/securityRules[*].access",
- "equals": "Allow"
- },
- {
- "field": "Microsoft.Network/networkSecurityGroups/securityRules[*].destinationPortRange",
- "equals": "3389"
- }
- ]
- }
- },
- "greater": 0
-}
-```
-
-Example 6: Use `current()` function inside the `where` conditions to access the value of the
-currently enumerated array member in a template function. This condition checks whether a virtual
-network contains an address prefix that isn't under the 10.0.0.0/24 CIDR range.
-
-```json
-{
- "count": {
- "field": "Microsoft.Network/virtualNetworks/addressSpace.addressPrefixes[*]",
- "where": {
- "value": "[ipRangeContains('10.0.0.0/24', current('Microsoft.Network/virtualNetworks/addressSpace.addressPrefixes[*]'))]",
- "equals": false
- }
- },
- "greater": 0
-}
-```
-
-Example 7: Use `field()` function inside the `where` conditions to access the value of the currently
-enumerated array member. This condition checks whether a virtual network contains an address prefix
-that isn't under the 10.0.0.0/24 CIDR range.
-
-```json
-{
- "count": {
- "field": "Microsoft.Network/virtualNetworks/addressSpace.addressPrefixes[*]",
- "where": {
-      "value": "[ipRangeContains('10.0.0.0/24', first(field('Microsoft.Network/virtualNetworks/addressSpace.addressPrefixes[*]')))]",
- "equals": false
- }
- },
- "greater": 0
-}
-```
-
-#### Value count examples
-
-Example 1: Check if resource name matches any of the given name patterns.
-
-```json
-{
- "count": {
- "value": [ "prefix1_*", "prefix2_*" ],
- "name": "pattern",
- "where": {
- "field": "name",
- "like": "[current('pattern')]"
- }
- },
- "greater": 0
-}
-```
-
-Example 2: Check if resource name matches any of the given name patterns. The `current()` function
-doesn't specify an index name. The outcome is the same as the previous example.
-
-```json
-{
- "count": {
- "value": [ "prefix1_*", "prefix2_*" ],
- "where": {
- "field": "name",
- "like": "[current()]"
- }
- },
- "greater": 0
-}
-```
-
-Example 3: Check if resource name matches any of the given name patterns provided by an array
-parameter.
-
-```json
-{
- "count": {
- "value": "[parameters('namePatterns')]",
- "name": "pattern",
- "where": {
- "field": "name",
- "like": "[current('pattern')]"
- }
- },
- "greater": 0
-}
-```
-
-Example 4: Check if any of the virtual network address prefixes isn't under the list of approved
-prefixes.
-
-```json
-{
- "count": {
- "field": "Microsoft.Network/virtualNetworks/addressSpace.addressPrefixes[*]",
- "where": {
- "count": {
- "value": "[parameters('approvedPrefixes')]",
- "name": "approvedPrefix",
- "where": {
- "value": "[ipRangeContains(current('approvedPrefix'), current('Microsoft.Network/virtualNetworks/addressSpace.addressPrefixes[*]'))]",
- "equals": true
-        }
-      },
- "equals": 0
- }
- },
- "greater": 0
-}
-```
-
-Example 5: Check that all the reserved NSG rules are defined in an NSG. The properties of the
-reserved NSG rules are defined in an array parameter containing objects.
-
-Parameter value:
-
-```json
-[
- {
- "priority": 101,
- "access": "deny",
- "direction": "inbound",
- "destinationPortRange": 22
- },
- {
- "priority": 102,
- "access": "deny",
- "direction": "inbound",
- "destinationPortRange": 3389
- }
-]
-```
-
-Policy:
-
-```json
-{
- "count": {
- "value": "[parameters('reservedNsgRules')]",
- "name": "reservedNsgRule",
- "where": {
- "count": {
- "field": "Microsoft.Network/networkSecurityGroups/securityRules[*]",
- "where": {
- "allOf": [
- {
- "field": "Microsoft.Network/networkSecurityGroups/securityRules[*].priority",
- "equals": "[current('reservedNsgRule').priority]"
- },
- {
- "field": "Microsoft.Network/networkSecurityGroups/securityRules[*].access",
- "equals": "[current('reservedNsgRule').access]"
- },
- {
- "field": "Microsoft.Network/networkSecurityGroups/securityRules[*].direction",
- "equals": "[current('reservedNsgRule').direction]"
- },
- {
- "field": "Microsoft.Network/networkSecurityGroups/securityRules[*].destinationPortRange",
- "equals": "[current('reservedNsgRule').destinationPortRange]"
- }
- ]
- }
- },
- "equals": 1
- }
- },
- "equals": "[length(parameters('reservedNsgRules'))]"
-}
-```
-
-### Policy functions
-
-Functions can be used to introduce additional logic into a policy rule. They are resolved within the [policy rule](#policy-rule) of a policy definition and within [parameter values assigned to policy definitions in an initiative](initiative-definition-structure.md#passing-a-parameter-value-to-a-policy-definition).
-
-All [Resource Manager template
-functions](../../../azure-resource-manager/templates/template-functions.md) are available to use
-within a policy rule, except the following functions and user-defined functions:
-
-- copyIndex()
-- dateTimeAdd()
-- dateTimeFromEpoch
-- dateTimeToEpoch
-- deployment()
-- environment()
-- extensionResourceId()
-- [lambda()](../../../azure-resource-manager/templates/template-functions-lambda.md)
-- listAccountSas()
-- listKeys()
-- listSecrets()
-- list*
-- managementGroup()
-- newGuid()
-- pickZones()
-- providers()
-- reference()
-- resourceId()
-- subscriptionResourceId()
-- tenantResourceId()
-- tenant()
-- variables()
-
-> [!NOTE]
-> These functions are still available within the `details.deployment.properties.template` portion of
-> the template deployment in a **deployIfNotExists** policy definition.
-
-The following function is available to use in a policy rule, but differs from use in an Azure
-Resource Manager template (ARM template):
-
-- `utcNow()` - Unlike in an ARM template, this function can be used outside _defaultValue_.
- - Returns a string that is set to the current date and time in Universal ISO 8601 DateTime format
- `yyyy-MM-ddTHH:mm:ss.fffffffZ`.
-
-The following functions are only available in policy rules:
-
-- `addDays(dateTime, numberOfDaysToAdd)`
- - **dateTime**: [Required] string - String in the Universal ISO 8601 DateTime format
- 'yyyy-MM-ddTHH:mm:ss.FFFFFFFZ'
- - **numberOfDaysToAdd**: [Required] integer - Number of days to add
-
-- `field(fieldName)`
- - **fieldName**: [Required] string - Name of the [field](#fields) to retrieve
- - Returns the value of that field from the resource that is being evaluated by the If condition.
- - `field` is primarily used with **AuditIfNotExists** and **DeployIfNotExists** to reference
- fields on the resource that are being evaluated. An example of this use can be seen in the
- [DeployIfNotExists example](effects.md#deployifnotexists-example).
-
-- `requestContext().apiVersion`
- - Returns the API version of the request that triggered policy evaluation (example: `2021-09-01`).
- This value is the API version that was used in the PUT/PATCH request for evaluations on resource
- creation/update. The latest API version is always used during compliance evaluation on existing
- resources.
-
-- `policy()`
- - Returns the following information about the policy that is being evaluated. Properties can be
- accessed from the returned object (example: `[policy().assignmentId]`).
-
- ```json
- {
- "assignmentId": "/subscriptions/ad404ddd-36a5-4ea8-b3e3-681e77487a63/providers/Microsoft.Authorization/policyAssignments/myAssignment",
- "definitionId": "/providers/Microsoft.Authorization/policyDefinitions/34c877ad-507e-4c82-993e-3452a6e0ad3c",
- "setDefinitionId": "/providers/Microsoft.Authorization/policySetDefinitions/42a694ed-f65e-42b2-aa9e-8052e9740a92",
- "definitionReferenceId": "StorageAccountNetworkACLs"
- }
- ```
-
-- `ipRangeContains(range, targetRange)`
- - **range**: [Required] string - String specifying a range of IP addresses to check if the
- _targetRange_ is within.
- - **targetRange**: [Required] string - String specifying a range of IP addresses to validate as
- included within the _range_.
- - Returns a _boolean_ for whether the _range_ IP address range contains the _targetRange_ IP
-    address range. Empty ranges and mixing of IP address families aren't allowed and result in
-    evaluation failure.
-
- Supported formats:
- - Single IP address (examples: `10.0.0.0`, `2001:0DB8::3:FFFE`)
- - CIDR range (examples: `10.0.0.0/24`, `2001:0DB8::/110`)
- - Range defined by start and end IP addresses (examples: `192.168.0.1-192.168.0.9`, `2001:0DB8::-2001:0DB8::3:FFFF`)
-
-- `current(indexName)`
- - Special function that may only be used inside [count expressions](#count).
-
-#### Policy function example
-
-This policy rule example uses the `resourceGroup` resource function to get the **name** property,
-combined with the `concat` array and object function to build a `like` condition that enforces the
-resource name to start with the resource group name.
-
-```json
-{
- "if": {
- "not": {
- "field": "name",
- "like": "[concat(resourceGroup().name,'*')]"
- }
- },
- "then": {
- "effect": "deny"
- }
-}
-```
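-
-As a second, illustrative sketch (the `expiryDate` tag is hypothetical and is assumed to hold a full ISO 8601 UTC timestamp), `utcNow()` and `addDays()` can be combined to audit resources whose tagged expiration date is within the next 30 days or has already passed:
-
-```json
-{
-  "if": {
-    "allOf": [
-      {
-        "field": "tags['expiryDate']",
-        "exists": "true"
-      },
-      {
-        "value": "[addDays(utcNow(), 30)]",
-        "greaterOrEquals": "[field('tags[expiryDate]')]"
-      }
-    ]
-  },
-  "then": {
-    "effect": "audit"
-  }
-}
-```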
-
-### Policy rule limits
-
-#### Limits enforced during authoring
-
-Limits to the structure of policy rules are enforced during the authoring or assignment of a policy.
-Attempts to create or assign policy definitions that exceed these limits will fail.
-
-| Limit | Value | Additional details |
-|:|:|:|
-| Condition expressions in the **if** condition | 4096 | |
-| Condition expressions in the **then** block | 128 | Applies to the **existenceCondition** of **AuditIfNotExists** and **DeployIfNotExists** policies |
-| Policy functions per policy rule | 2048 | |
-| Policy function number of parameters | 128 | Example: `[function('parameter1', 'parameter2', ...)]` |
-| Nested policy functions depth | 64 | Example: `[function(nested1(nested2(...)))]` |
-| Policy functions expression string length | 81920 | Example: the length of `"[function(....)]"` |
-| **Field count** expressions per array | 5 | |
-| **Value count** expressions per policy rule | 10 | |
-| **Value count** expression iteration count | 100 | For nested **Value count** expressions, this also includes the iteration count of the parent expression |
-
-#### Limits enforced during evaluation
-
-The following limits apply to the size of objects that are processed by policy functions during policy evaluation. These limits can't always be enforced during authoring because they depend on the evaluated content. For example:
-
-```json
-{
- "field": "name",
- "equals": "[concat(field('stringPropertyA'), field('stringPropertyB'))]"
-}
-```
-
-The length of the string created by the `concat()` function depends on the value of properties in the evaluated resource.
-
-| Limit | Value | Example |
-|:|:|:|
-| Length of string returned by a function | 131072 | `[concat(field('longString1'), field('longString2'))]`|
-| Depth of complex objects provided as a parameter to, or returned by a function | 128 | `[union(field('largeObject1'), field('largeObject2'))]` |
-| Number of nodes of complex objects provided as a parameter to, or returned by a function | 32768 | `[concat(field('largeArray1'), field('largeArray2'))]` |
-
-> [!WARNING]
-> Policies that exceed the above limits during evaluation effectively become a **deny** policy and can block incoming requests.
-> When writing policies with complex functions, be mindful of these limits and test your policies against resources that have the potential to exceed them.
-
-## Aliases
-
-You use property aliases to access specific properties for a resource type. Aliases enable you to
-restrict what values or conditions are allowed for a property on a resource. Each alias maps to
-paths in different API versions for a given resource type. During policy evaluation, the policy
-engine gets the property path for that API version.
-
-The list of aliases is always growing. To find what aliases are currently supported by Azure
-Policy, use one of the following methods:
-
-- Azure Policy extension for Visual Studio Code (recommended)
-
- Use the [Azure Policy extension for Visual Studio Code](../how-to/extension-for-vscode.md) to view
- and discover aliases for resource properties.
-
- :::image type="content" source="../media/extension-for-vscode/extension-hover-shows-property-alias.png" alt-text="Screenshot of the Azure Policy extension for Visual Studio Code hovering a property to display the alias names." border="false":::
-
-- Azure PowerShell
-
- ```azurepowershell-interactive
- # Login first with Connect-AzAccount if not using Cloud Shell
-
- # Use Get-AzPolicyAlias to list available providers
- Get-AzPolicyAlias -ListAvailable
-
- # Use Get-AzPolicyAlias to list aliases for a Namespace (such as Azure Compute -- Microsoft.Compute)
- (Get-AzPolicyAlias -NamespaceMatch 'compute').Aliases
- ```
-
- > [!NOTE]
- > To find aliases that can be used with the [modify](./effects.md#modify) effect, use the
- > following command in Azure PowerShell **4.6.0** or higher:
- >
- > ```azurepowershell-interactive
- > Get-AzPolicyAlias | Select-Object -ExpandProperty 'Aliases' | Where-Object { $_.DefaultMetadata.Attributes -eq 'Modifiable' }
- > ```
-
-- Azure CLI
-
- ```azurecli-interactive
- # Login first with az login if not using Cloud Shell
-
- # List namespaces
- az provider list --query [*].namespace
-
- # Get Azure Policy aliases for a specific Namespace (such as Azure Compute -- Microsoft.Compute)
- az provider show --namespace Microsoft.Compute --expand "resourceTypes/aliases" --query "resourceTypes[].aliases[].name"
- ```
-
-- REST API / ARMClient
-
- ```http
- GET https://management.azure.com/providers/?api-version=2019-10-01&$expand=resourceTypes/aliases
- ```
-
-### Understanding the [*] alias
-
-Several of the aliases that are available have a version that appears as a 'normal' name and another
-that has **\[\*\]** attached to it. For example:
-
-- `Microsoft.Storage/storageAccounts/networkAcls.ipRules`
-- `Microsoft.Storage/storageAccounts/networkAcls.ipRules[*]`
-
-The 'normal' alias represents the field as a single value. This field is for exact match comparison
-scenarios when the entire set of values must be exactly as defined, no more and no less.
-
-The **\[\*\]** alias represents a collection of values selected from the elements of an array
-resource property. For example:
-
-| Alias | Selected values |
-|:|:|
-| `Microsoft.Storage/storageAccounts/networkAcls.ipRules[*]` | The elements of the `ipRules` array. |
-| `Microsoft.Storage/storageAccounts/networkAcls.ipRules[*].action` | The values of the `action` property from each element of the `ipRules` array. |
-
-When used in a [field](#fields) condition, array aliases make it possible to compare each individual
-array element to a target value. When used with [count](#count) expression, it's possible to:
-
-- Check the size of an array
-- Check if all, any, or none of the array elements meet a complex condition
-- Check if exactly ***n*** array elements meet a complex condition
-
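-As a small sketch of the field condition usage mentioned above, the following condition evaluates each element of the `ipRules` array individually, with a logical **and** between elements, so it's true only when every rule's `action` is `Allow`:
-
-```json
-{
-  "field": "Microsoft.Storage/storageAccounts/networkAcls.ipRules[*].action",
-  "equals": "Allow"
-}
-```
-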
-For more information and examples, see
-[Referencing array resource properties](../how-to/author-policies-for-arrays.md#referencing-array-resource-properties).
-
-### Effect
-
-Azure Policy supports the following types of effect:
-
-- **Append**: adds the defined set of fields to the request
-- **Audit**: generates a warning event in activity log but doesn't fail the request
-- **AuditIfNotExists**: generates a warning event in activity log if a related resource doesn't
- exist
-- **Deny**: generates an event in the activity log and fails the request based on requested resource configuration
-- **DenyAction**: generates an event in the activity log and fails the request based on requested action
-- **DeployIfNotExists**: deploys a related resource if it doesn't already exist
-- **Disabled**: doesn't evaluate resources for compliance to the policy rule
-- **Modify**: adds, updates, or removes the defined set of fields in the request
-- **EnforceOPAConstraint** (deprecated): configures the Open Policy Agent admissions controller with
- Gatekeeper v3 for self-managed Kubernetes clusters on Azure
-- **EnforceRegoPolicy** (deprecated): configures the Open Policy Agent admissions controller with
- Gatekeeper v2 in Azure Kubernetes Service
-
-For complete details on each effect, order of evaluation, properties, and examples, see
-[Understanding Azure Policy Effects](effects.md).
-
-## Next steps
-
-- See the [initiative definition structure](./initiative-definition-structure.md)
-- Review examples at [Azure Policy samples](../samples/index.md).
-- Review [Understanding policy effects](effects.md).
-- Understand how to [programmatically create policies](../how-to/programmatically-create.md).
-- Learn how to [get compliance data](../how-to/get-compliance-data.md).
-- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
-- Review what a management group is with [Organize your resources with Azure management groups](../../management-groups/overview.md).
governance Effects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/effects.md
condition as non-compliant.
An append effect only has a **details** array, which is required. As **details** is an array, it can take either a single **field/value** pair or multiples. Refer to
-[definition structure](definition-structure.md#fields) for the list of acceptable fields.
+[definition structure](./definition-structure-policy-rule.md#fields) for the list of acceptable fields.
### Append examples Example 1: Single **field/value** pair using a non-`[*]`
-[alias](definition-structure.md#aliases) with an array **value** to set IP rules on a storage
+[alias](./definition-structure-alias.md) with an array **value** to set IP rules on a storage
account. When the non-`[*]` alias is an array, the effect appends the **value** as the entire array. If the array already exists, a deny event occurs from the conflict.
array. If the array already exists, a deny event occurs from the conflict.
} ```
-Example 2: Single **field/value** pair using an `[*]` [alias](definition-structure.md#aliases)
+Example 2: Single **field/value** pair using an `[*]` [alias](./definition-structure-alias.md)
with an array **value** to set IP rules on a storage account. When you use the `[*]` alias, the effect appends the **value** to a potentially pre-existing array. If the array doesn't exist yet, it's created.
to `Unknown`. The `Unknown` compliance state indicates that you must attest the
The following screenshot shows how a manual policy assignment with the `Unknown` state appears in the Azure portal:
-![Resource compliance table in the Azure portal showing an assigned manual policy with a compliance reason of 'unknown.'](./manual-policy-portal.png)
When a policy definition with `manual` effect is assigned, you can set the compliance states of targeted resources or scopes through custom [attestations](attestation-structure.md). Attestations also allow you to provide optional supplemental information through the form of metadata and links to **evidence** that accompany the chosen compliance state. The person assigning the manual policy can recommend a default storage location for evidence by specifying the `evidenceStorages` property of the [policy assignment's metadata](../concepts/assignment-structure.md#metadata).
needed for remediation and the **operations** used to add, update, or remove tag
_Remove_. _Add_ behaves similar to the [Append](#append) effect. - **field** (required) - The tag to add, replace, or remove. Tag names must adhere to the same naming convention for
- other [fields](./definition-structure.md#fields).
+ other [fields](./definition-structure-policy-rule.md#fields).
- **value** (optional) - The value to set the tag to. - This property is required if **operation** is _addOrReplace_ or _Add_.
governance Policy Applicability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/policy-applicability.md
# What is applicability in Azure Policy?
-When a policy definition is assigned to a scope, Azure Policy determines which resources in that scope should be considered for compliance evaluation. A resource will only be assessed for compliance if it's considered **applicable** to the given policy assignment.
+When a policy definition is assigned to a scope, Azure Policy determines which resources in that scope should be considered for compliance evaluation. A resource will only be assessed for compliance if it's considered **applicable** to the given policy assignment.
Applicability is determined by several factors:
-- **Conditions** in the `if` block of the [policy rule](../concepts/definition-structure.md#policy-rule).
+- **Conditions** in the `if` block of the [policy rule](../concepts/definition-structure-policy-rule.md#conditions).
- **Mode** of the policy definition.
-- **Excluded scopes** specified in the assignment.
-- **Resource selectors** specified in the assignment.
+- **Excluded scopes** specified in the assignment.
+- **Resource selectors** specified in the assignment.
- **Exemptions** of resources or resource hierarchies.

Condition(s) in the `if` block of the policy rule are evaluated for applicability in slightly different ways based on the effect.
Condition(s) in the `if` block of the policy rule are evaluated for applicabilit
> [!NOTE] > Applicability is different from compliance, and the logic used to determine each is different. If a resource is **applicable** that means it is relevant to the policy. If a resource is **compliant** that means it adheres to the policy. Sometimes only certain conditions from the policy rule impact applicability, while all conditions of the policy rule impact compliance state.
-## Resource manager modes
+## Resource Manager modes
### -IfNotExists policy effects
Azure Machine Learning component type:
Policies with mode `Microsoft.Network.Data` are applicable if the `type` and `name` conditions of the policy rule evaluate to true. The `type` refers to component type:
- Microsoft.Network/virtualNetworks
-## Not Applicable Resources
+## Not Applicable Resources
There could be situations in which resources are applicable to an assignment based on conditions or scope, but shouldn't be applicable for business reasons. In those cases, it's best to apply [exclusions](./assignment-structure.md#excluded-scopes) or [exemptions](./exemption-structure.md). To learn more about when to use either, review [scope comparison](./scope.md#scope-comparison).
-policy evaluation, except for subscriptions and resource groups.
+policy evaluation, except for subscriptions and resource groups.
## Next steps
hdinsight-aks Assign Kafka Topic Event Message To Azure Data Lake Storage Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/assign-kafka-topic-event-message-to-azure-data-lake-storage-gen2.md
Flink provides an Apache Kafka connector for reading data from and writing data
*abfsGen2.java* > [!Note]
-> Replace [Apache Kafka on HDInsight cluster](../../hdinsight/kafk) bootStrapServers with your own brokers for Kafka 2.4 or 3.2
+> Replace [Apache Kafka on HDInsight cluster](../../hdinsight/kafk) bootStrapServers with your own brokers for Kafka 3.2
``` java package contoso.example;
hdinsight-aks Change Data Capture Connectors For Apache Flink https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/change-data-capture-connectors-for-apache-flink.md
GO
``` ##### Maven source code on IdeaJ
-In the below snippet, we use Kafka 2.4.1. Based on your usage, update the version of Kafka on `<kafka.version>`.
+Based on your usage, update the version of Kafka on `<kafka.version>`.
**maven pom.xml**
hdinsight-aks Join Stream Kafka Table Filesystem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/join-stream-kafka-table-filesystem.md
We're creating a topic called `user_events`.
timestamp, ```
-**Kafka 2.4.1**
-```
-/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --create --replication-factor 2 --partitions 3 --topic user_events --zookeeper zk0-contos:2181
-/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --create --replication-factor 2 --partitions 3 --topic user_events_output --zookeeper zk0-contos:2181
-```
- **Kafka 3.2.0** ``` /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --create --replication-factor 2 --partitions 3 --topic user_events --bootstrap-server wn0-contsk:9092
In this step, we perform the following activities
<flink.version>1.17.0</flink.version> <java.version>1.8</java.version> <scala.binary.version>2.12</scala.binary.version>
- <kafka.version>3.2.0</kafka.version> //replace with 2.4.1 if you are using HDInsight Kafka 2.4.1
+ <kafka.version>3.2.0</kafka.version>
</properties> <dependencies> <dependency>
hdinsight-aks Sink Kafka To Kibana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/sink-kafka-to-kibana.md
Title: Use Elasticsearch along with Apache Flink® on HDInsight on AKS
-description: Learn how to use Elasticsearch along Apache Flink® on HDInsight on AKS
+description: Learn how to use Elasticsearch along Apache Flink® on HDInsight on AKS.
Previously updated : 10/27/2023 Last updated : 04/04/2024 # Using Elasticsearch with Apache Flink® on HDInsight on AKS
In this article, learn how to Use Elastic along Apache Flink® on HDInsight on A
## Elasticsearch and Kibana
-Elasticsearch is a distributed, free and open search and analytics engine for all types of data, including
+Elasticsearch is a distributed, free, and open search and analytics engine for all types of data, including:
+
+* Textual
+* Numerical
+* Geospatial
Elasticsearch is a distributed, free and open search and analytics engine for al
Kibana is a free and open frontend application that sits on top of the elastic stack, providing search and data visualization capabilities for data indexed in Elasticsearch.
-For more information, refer
+For more information, see:
* [Elasticsearch](https://www.elastic.co) * [Kibana](https://www.elastic.co/guide/en/kibana/current/https://docsupdatetracker.net/index.html) ## Prerequisites
-* [Create Flink 1.16.0 cluster](./flink-create-cluster-portal.md)
+* [Create Flink 1.17.0 cluster](./flink-create-cluster-portal.md)
* Elasticsearch-7.13.2 * Kibana-7.13.2
-* [HDInsight 5.0 - Kafka 2.4.1](../../hdinsight/kafk)
+* [HDInsight 5.0 - Kafka 3.2.0](../../hdinsight/kafk)
* IntelliJ IDEA for development on an Azure VM which in the same Vnet
sudo apt install elasticsearch
For installing and configuring Kibana Dashboard, we don’t need to add any other repository because the packages are available through the already added ElasticSearch.
-We use the following command to install Kibana
+We use the following command to install Kibana.
``` sudo apt install kibana
sudo apt install kibana
``` ### Access the Kibana Dashboard web interface
-In order to make Kibana accessible from output, need to set network.host to 0.0.0.0
+To make Kibana accessible externally, you need to set network.host to 0.0.0.0.
-configure /etc/kibana/kibana.yml on Ubuntu VM
+Configure `/etc/kibana/kibana.yml` on Ubuntu VM
> [!NOTE] > 10.0.1.4 is a local private IP, that we have used which can be accessed in maven project develop Windows VM. You're required to make modifications according to your network security requirements. We use the same IP later to demo for performing analytics on Kibana.
elasticsearch.hosts: ["http://10.0.1.4:9200"]
## Prepare Click Events on HDInsight Kafka
-We use python output as input to produce the streaming data
+We use python output as input to produce the streaming data.
``` sshuser@hn0-contsk:~$ python weblog.py | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --bootstrap-server wn0-contsk:9092 --topic click_events ```
-Now, lets check messages in this topic
+Now, let's check the messages in this topic.
``` sshuser@hn0-contsk:~$ /usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --bootstrap-server wn0-contsk:9092 --topic click_events
sshuser@hn0-contsk:~$ /usr/hdp/current/kafka-broker/bin/kafka-console-consumer.s
## Creating Kafka Sink to Elastic
-Let us write maven source code on the Windows VM
+Let us write maven source code on the Windows VM.
**Main: kafkaSinkToElastic.java** ``` java
public class kafkaSinkToElastic {
**Package the jar and submit to Flink to run on WebSSH**
-On [Secure Shell for Flink](./flink-web-ssh-on-portal-to-flink-sql.md), you can use the following commands
+On [Secure Shell for Flink](./flink-web-ssh-on-portal-to-flink-sql.md), you can use the following commands.
``` msdata@pod-0 [ ~ ]$ ls -l FlinkElasticSearch-1.0-SNAPSHOT.jar
Job has been submitted with JobID e0eba72d5143cea53bcf072335a4b1cb
## Validation on Apache Flink Job UI
-You can find the job in running state on your Flink Web UI
+You can find the job in running state on your Flink Web UI.
:::image type="content" source="./media/sink-kafka-to-kibana/flink-elastic-job.png" alt-text="Screenshot showing Kibana UI to start Elasticsearch and Kibana and perform analytics on Kibana." lightbox="./media/sink-kafka-to-kibana/flink-elastic-job.png":::
hdinsight-aks Use Apache Nifi With Datastream Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/use-apache-nifi-with-datastream-api.md
By combining the low latency streaming features of Apache Flink and the dataflow
For purposes of this demonstration, we're using a HDInsight Kafka Cluster. Let us prepare HDInsight Kafka topic for the demo. > [!NOTE]
-> Setup a HDInsight cluster with [Apache Kafka](../../hdinsight/kafk) and replace broker list with your own list before you get started for both Kafka 2.4 and 3.2.
+> Set up an HDInsight cluster with [Apache Kafka](../../hdinsight/kafk) and replace the broker list with your own brokers for Kafka 3.2 before you get started.
-**Kafka 2.4.1**
-```
-/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --create --replication-factor 2 --partitions 3 --topic click_events --zookeeper zk0-contsk:2181
-```
**Kafka 3.2.0** ```
public class ClickSource implements SourceFunction<Event> {
``` **Maven pom.xml**
-You can replace 2.4.1 with 3.2.0 in case you're using Kafka 3.2.0 on HDInsight, where applicable on the pom.xml.
``` xml <?xml version="1.0" encoding="UTF-8"?>
You can replace 2.4.1 with 3.2.0 in case you're using Kafka 3.2.0 on HDInsight,
<flink.version>1.17.0</flink.version> <java.version>1.8</java.version> <scala.binary.version>2.12</scala.binary.version>
- <kafka.version>3.2.0</kafka.version> > Replace 2.4.1 with 3.2.0 , in case you're using HDInsight Kafka 3.2.0
+ <kafka.version>3.2.0</kafka.version>
</properties> <dependencies> <!-- https://mvnrepository.com/artifact/org.apache.flink/flink-streaming-java -->
hdinsight-aks Use Flink To Sink Kafka Message Into Hbase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/use-flink-to-sink-kafka-message-into-hbase.md
hbase:002:0>
<java.version>1.8</java.version> <scala.binary.version>2.12</scala.binary.version> <hbase.version>2.4.11</hbase.version>
- <kafka.version>3.2.0</kafka.version> // Replace with 2.4.0 for HDInsight Kafka 2.4
+ <kafka.version>3.2.0</kafka.version>
</properties> <dependencies> <dependency>
hdinsight Hdinsight Overview Before You Start https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-overview-before-you-start.md
As part of the best practices, we recommend you keep your clusters updated on re
HDInsight releases happen every 30 to 60 days. It's always good to move to the latest release as early as possible. The recommended maximum duration for cluster upgrades is less than six months.
-For more information, see how to [Migrate HDInsight cluster to a newer version](./hdinsight-upgrade-cluster.md)
+For more information, see how to [Migrate HDInsight cluster to a newer version](./hdinsight-upgrade-cluster.md).
+
+## Integrating Third-party applications
+
+Microsoft will only support machines that are created by the HDInsight service (for example, HDInsight clusters, edge nodes, and worker nodes). We don't support third-party client machines or moving the HDInsight libraries from a supported machine to an external machine.
+
+While this third-party integration may work for some time, it is not recommended in production environments because the scenario isn't supported.
+
+When you open a support request for an unsupported scenario, you'll be asked to ***reproduce the problem in a supported scenario*** so we can investigate. Any fixes provided would be for the supported scenario only.
+
+### Supported ways to integrate third party applications
+
+* [Install HDInsight applications](hdinsight-apps-install-applications.md): Learn how to install an HDInsight application to your clusters.
+* [Install custom HDInsight applications](hdinsight-apps-install-custom-applications.md): Learn how to deploy an unpublished HDInsight application to HDInsight.
+* [Publish HDInsight applications](hdinsight-apps-publish-applications.md): Learn how to publish your custom HDInsight applications to Azure Marketplace.
## Next steps
healthcare-apis Fhir Rest Api Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-rest-api-capabilities.md
[!INCLUDE [retirement banner](../includes/healthcare-apis-azure-api-fhir-retirement.md)]
-In this article, we'll cover some of the nuances of the RESTful interactions of Azure API for FHIR.
+In this article, we cover some of the nuances of the RESTful interactions of Azure API for FHIR.
## Conditional create/update
Azure API for FHIR offers two delete types. There's [Delete](https://www.hl7.org
### Delete (Hard + Soft Delete)
-Delete defined by the FHIR specification requires that after deleting a resource, subsequent non-version specific reads of a resource returns a 410 HTTP status code. Therefore, the resource is no longer found through searching. Additionally, Azure API for FHIR enables you to fully delete (including all history) the resource. To fully delete the resource, you can pass a parameter settings `hardDelete` to true `(DELETE {{FHIR_URL}}/{resource}/{id}?hardDelete=true)`. If you don't pass this parameter or set `hardDelete` to false, the historic versions of the resource will still be available.
+Delete defined by the FHIR specification requires that after deleting a resource, subsequent nonversion specific reads of a resource return a 410 HTTP status code. Therefore, the resource is no longer found through searching. Additionally, Azure API for FHIR enables you to fully delete (including all history) the resource. To fully delete the resource, you can set the `hardDelete` parameter to true `(DELETE {{FHIR_URL}}/{resource}/{id}?hardDelete=true)`. If you don't pass this parameter or set `hardDelete` to false, the historic versions of the resource will still be available.
> [!NOTE] > If you only want to delete the history, Azure API for FHIR supports a custom operation called `$purge-history`. This operation allows you to delete the history off of a resource.
You can do the same search but include `hardDelete=true` to also delete all hist
`DELETE https://{{FHIR_URL}}/Patient?identifier=1032704&hardDelete=true`
-To delete multiple resources, include `_count=100` parameter. This parameter will delete up to 100 resources that match the search criteria.
+To delete multiple resources, include `_count=100` parameter. This parameter deletes up to 100 resources that match the search criteria.
`DELETE https://{{FHIR_URL}}/Patient?identifier=1032704&_count=100` ### Recovery of deleted files
-If you don't use the hard delete parameter, then the record(s) in Azure API for FHIR should still exist. The record(s) can be found by doing a history search on the resource and looking for the last version with data.
+If you don't use the hard delete parameter, then the records in Azure API for FHIR should still exist. The records can be found by doing a history search on the resource and looking for the last version with data.
If the ID of the resource that was deleted is known, use the following URL pattern:
For example: `https://myworkspace-myfhirserver.fhir.azurehealthcareapis.com/Pati
After you've found the record you want to restore, use the `PUT` operation to recreate the resource with the same ID, or use the `POST` operation to make a new resource with the same information. > [!NOTE]
-> There is no time-based expiration for history/soft delete data. The only way to remove history/soft deleted data is with a hard delete or the purge history operation.\
+> There is no time-based expiration for history/soft delete data. The only way to remove history/soft deleted data is with a hard delete or the purge history operation.
## Batch Bundles In FHIR, bundles can be considered as a container that holds multiple resources. Batch bundles enable users to submit a set of actions to be performed on a server in single HTTP request/response.
In the case of a batch, each entry is treated as an individual interaction or op
> [!NOTE] > For batch bundles there should be no interdependencies between different entries in FHIR bundle. The success or failure of one entry should not impact the success or failure of another entry.
-### Batch bundle parallel processing in public preview
+### Batch bundle parallel processing
Currently batch bundles are executed serially in FHIR service. To improve performance and throughput, we're enabling parallel processing of batch bundles in public preview. To use the capability of parallel batch bundle processing-
-* Set header “x-bundle-processing-logic” value to “parallel”.
-* Ensure there's no overlapping resource ID that is executing on DELETE, POST, PUT or PATCH operations in the same bundle.
-
-> [!IMPORTANT]
-> Bundle parallel processing is currently in public preview. Preview APIs and SDKs are provided without a service-level agreement. We recommend that you don't use them for production workloads. Some features might not be supported, or they might have constrained capabilities. For more information, review Supplemental Terms of Use for Microsoft Azure Previews
+* Set header 'x-bundle-processing-logic' value to 'parallel'.
+* Ensure there's no overlapping resource ID that is executing on DELETE, POST, PUT, or PATCH operations in the same bundle.
## Patch and Conditional Patch
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/release-notes.md
Azure API for FHIR provides a fully managed deployment of the Microsoft FHIR Server for Azure. The server is an implementation of the [FHIR](https://hl7.org/fhir) standard. This document provides details about the features and enhancements made to Azure API for FHIR.
+## **March 2024**
+**Batch-bundle parallelization**
+Batch bundles are executed serially in the FHIR service by default. To improve throughput with bundle calls, we enabled parallel processing of batch bundles.
+
+Learn more:
+- [Batch bundle parallelization](././../azure-api-for-fhir/fhir-rest-api-capabilities.md)
+
+**Bug Fixes**
+
+- **Fixed: Improve performance for bundle processing**. Updates are made to the task execution method, leading to bundle processing performance improvement. See [PR#3727](https://github.com/microsoft/fhir-server/pull/3727).
++
## **February 2024**

**Enables counting all versions (historical and soft deleted) of resources**

The query parameters `_summary=count` and `_count=0` can be added to the `_history` endpoint to get a count of all versioned resources. This count includes soft deleted resources. For more information, see [history management](././../azure-api-for-fhir/purge-history.md).
For more details, visit [#3222](https://github.com/microsoft/fhir-server/pull/32
**Fixed the Error generated when resource is updated using if-match header and PATCH**
-Bug is now fixed and Resource will be updated if matches the Etag header. For details , see [#2877](https://github.com/microsoft/fhir-server/issues/2877)|
+The bug is now fixed, and the resource is updated if it matches the ETag header. For details, see [#2877](https://github.com/microsoft/fhir-server/issues/2877).
## May 2022
Bug is now fixed and Resource will be updated if matches the Etag header. For de
|Enhancements |Related information |
| :-- | :-- |
|Added 429 retry and logging in BundleHandler |We sometimes encounter 429 errors when processing a bundle. If the FHIR service receives a 429 at the BundleHandler layer, we abort processing of the bundle and skip the remaining resources. We've added another retry (in addition to the retry present in the data store layer) that will execute one time per resource that encounters a 429. For more about this feature enhancement, see [PR #2400](https://github.com/microsoft/fhir-server/pull/2400).|
-|Billing for `$convert-data` and `$de-id` |Azure API for FHIR's data conversion and de-identified export features are now Generally Available. Billing for `$convert-data` and `$de-id` operations in Azure API for FHIR has been enabled. Billing meters were turned on March 1, 2022. |
+|Billing for `$convert-data` and `$de-id` |Azure API for FHIR's data conversion and deidentified export features are now Generally Available. Billing for `$convert-data` and `$de-id` operations in Azure API for FHIR has been enabled. Billing meters were turned on March 1, 2022. |
### **Bug fixes**
Bug is now fixed and Resource will be updated if matches the Etag header. For de
|Bug fixes |Related information |
| :-- | :-- |
-|Fixed 500 error when `SearchParameter` Code is null |Fixed an issue with `SearchParameter` if it had a null value for Code, the result would be a 500. Now it will result in an `InvalidResourceException` like the other values do. [#2343](https://github.com/microsoft/fhir-server/pull/2343) |
+|Fixed 500 error when `SearchParameter` Code is null |Fixed an issue where a `SearchParameter` with a null value for Code resulted in a 500 error. Now it results in an `InvalidResourceException` like the other values do. [#2343](https://github.com/microsoft/fhir-server/pull/2343) |
|Returned `BadRequestException` with valid message when input JSON body is invalid |For invalid JSON body requests, the FHIR server was returning a 500 error. Now we'll return a `BadRequestException` with a valid message instead of 500. [#2239](https://github.com/microsoft/fhir-server/pull/2239) |
|`_sort` can cause `ChainedSearch` to return incorrect results |Previously, the sort options from the chained search's `SearchOption` object weren't cleared, causing the sorting options to be passed through to the chained subsearch, which aren't valid. This could result in no results when there should be results. This bug is now fixed [#2347](https://github.com/microsoft/fhir-server/pull/2347). It addressed GitHub bug [#2344](https://github.com/microsoft/fhir-server/issues/2344). |
healthcare-apis Fhir Features Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-features-supported.md
All the supported operations that extend the REST API.
| [$patient-everything](patient-everything.md) | Yes | Yes | |
| [$purge-history](purge-history.md) | Yes | Yes | |
| [$import](import-data.md) |No |Yes | |
+| [$bulk-delete](fhir-bulk-delete.md)|Yes |Yes | |
## Role-based access control
healthcare-apis Release Notes 2024 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes-2024.md
Previously updated : 03/13/2024 Last updated : 04/02/2024
This article describes features, enhancements, and bug fixes released in 2024 for the FHIR&reg; service, DICOM&reg; service, and MedTech service in Azure Health Data Services.
+## April 2024
+
+### FHIR service
+
+#### The bulk delete operation is generally available
+
+The bulk delete operation allows deletion of FHIR resources across different levels, enabling healthcare organizations to comply with data retention policies while providing asynchronous processing capabilities. The benefits of the bulk delete operation are:
+
+- **Execute bulk delete at different levels**: The bulk delete operation allows you to delete resources from the FHIR server asynchronously. You can execute bulk delete at different levels:
+ - **System level**: Enables deletion of FHIR resources across all resource types.
+ - **Individual resource type**: Allows deletion of specific FHIR resources.
+- **Customizable**: Query parameters allow filtering of raw resources for targeted deletions.
+- **Async processing**: The operation is asynchronous, providing a polling endpoint to track progress.
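As a rough illustration only, the following Python sketch assumes the operation is invoked with an HTTP `DELETE` against the `$bulk-delete` path and polled through the URL returned in the `Content-Location` header; the endpoint URL, token, and filter are placeholders, and the linked article below is the authoritative reference for the request shape:

```python
import time
import requests

FHIR_URL = "https://<your-fhir-service>.fhir.azurehealthcareapis.com"
HEADERS = {"Authorization": "Bearer <access-token>"}

# Resource-type-level bulk delete, filtered by a search parameter.
kickoff = requests.delete(
    f"{FHIR_URL}/Patient/$bulk-delete",
    headers=HEADERS,
    params={"identifier": "1032704"},
)

# The async pattern returns a polling URL; poll until the job completes.
poll_url = kickoff.headers.get("Content-Location")
while poll_url:
    status = requests.get(poll_url, headers=HEADERS)
    if status.status_code != 202:  # 202 means the job is still running
        print(status.status_code, status.text)
        break
    time.sleep(10)
```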
+
+Learn more:
+- [Bulk delete in the FHIR service](./fhir/fhir-bulk-delete.md)
+
## March 2024

### DICOM service
Learn more:
- [Manage medical imaging data with the DICOM service and Azure Data Lake Storage](./dicom/dicom-data-lake.md)
- [Deploy the DICOM service with Azure Data Lake Storage](./dicom/deploy-dicom-services-in-azure-data-lake.md)
+### FHIR service
+
+#### Bundle parallelization (GA)
+Bundles are executed serially in the FHIR service by default. To improve throughput with bundle calls, we enabled parallel processing.
+
+Learn more:
+- [Bundle parallelization](./../healthcare-apis/fhir/fhir-rest-api-capabilities.md)
+
+#### Import operation accepts multiple resource types in single file
+
+Previously, the import operation allowed one resource type per input file in the request parameters. With this enhanced capability, you can pass multiple resource types in a single file.
+
+#### Bug Fixes
+
+- **Fixed: Import operation ingests resources with the same resource type and lastUpdated field value**. Before this change, resources executed in a batch with the same type and `lastUpdated` field value weren't ingested into the FHIR service. This bug fix addresses the issue. See [PR#3768](https://github.com/microsoft/fhir-server/pull/3768).
+
+- **Fixed: FHIR search with 3 or more custom search parameters**. Before this fix, FHIR search query at the root with three or more custom search parameters resulted in HTTP status code 504. See [PR#3701](https://github.com/microsoft/fhir-server/pull/3701).
+
+- **Fixed: Improve performance for bundle processing**. Updates are made to the task execution method, leading to bundle processing performance improvement. See [PR#3727](https://github.com/microsoft/fhir-server/pull/3727).
+
## February 2024

### FHIR service
iot-operations Howto Configure Data Lake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-configure-data-lake.md
- ignite-2023 Previously updated : 11/15/2023 Last updated : 04/02/2024 #CustomerIntent: As an operator, I want to understand how to configure Azure IoT MQ so that I can send data from Azure IoT MQ to Data Lake Storage.
You can use the data lake connector to send data from Azure IoT MQ Preview broke
| Delta format | Supported |
| Parquet format | Supported |
| JSON message payload | Supported |
-| Create new container if doesn't exist | Supported |
+| Create new container if it doesn't exist | Supported |
| Signed types support | Supported |
| Unsigned types support | Not Supported |
Configure a data lake connector to connect to Microsoft Fabric OneLake using man
1. Select **Contributor** as the role, then select **Add**.
1. Create a [DataLakeConnector](#datalakeconnector) resource that defines the configuration and endpoint settings for the connector. You can use the YAML provided as an example, but make sure to change the following fields:
- - `target.fabriceOneLake.names`: The names of the workspace and the lakehouse. Use either this field or `guids`, don't use both.
+ - `target.fabricOneLake.endpoint`: The endpoint of the Microsoft Fabric OneLake account. You can get the endpoint URL from Microsoft Fabric lakehouse under **Files** > **Properties**. The URL should look like `https://onelake.dfs.fabric.microsoft.com`.
+ - `target.fabricOneLake.names`: The names of the workspace and the lakehouse. Use either this field or `guids`. Don't use both.
- `workspaceName`: The name of the workspace.
- `lakehouseName`: The name of the lakehouse.
Configure a data lake connector to connect to Microsoft Fabric OneLake using man
databaseFormat: delta target: fabricOneLake:
- endpoint: https://msit-onelake.dfs.fabric.microsoft.com
+ # Example: https://onelake.dfs.fabric.microsoft.com
+ endpoint: <example-endpoint-url>
names: workspaceName: <example-workspace-name> lakehouseName: <example-lakehouse-name>
Configure a data lake connector to connect to Microsoft Fabric OneLake using man
- `dataLakeConnectorRef`: The name of the DataLakeConnector resource that you created earlier.
- `clientId`: A unique identifier for your MQTT client.
- `mqttSourceTopic`: The name of the MQTT topic that you want data to come from.
- - `table.tableName`: The name of the table that you want to append to in the lakehouse. If the table doesn't exist, it's created automatically.
+ - `table.tableName`: The name of the table that you want to append to in the lakehouse. The table is created automatically if it doesn't exist.
- `table.schema`: The schema of the Delta table that should match the format and fields of the JSON messages that you send to the MQTT topic.

1. Apply the DataLakeConnector and DataLakeConnectorTopicMap resources to your Kubernetes cluster using `kubectl apply -f datalake-connector.yaml`.
The spec field of a *DataLakeConnector* resource contains the following subfield
- `accessTokenSecretName`: The name of the Kubernetes secret for using shared access token authentication for the Data Lake Storage account. This field is required if the type is `accessToken`.
- `systemAssignedManagedIdentity`: For using system managed identity for authentication. It has one subfield:
  - `audience`: A string in the form of `https://<my-account-name>.blob.core.windows.net` for the managed identity token audience scoped to the account level or `https://storage.azure.com` for any storage account.
- - `fabriceOneLake`: Specifies the configuration and properties of the Microsoft Fabric OneLake. It has the following subfields:
+ - `fabricOneLake`: Specifies the configuration and properties of the Microsoft Fabric OneLake. It has the following subfields:
- `endpoint`: The URL of the Microsoft Fabric OneLake endpoint. It's usually `https://onelake.dfs.fabric.microsoft.com` because that's the OneLake global endpoint. If you're using a regional endpoint, it's in the form of `https://<region>-onelake.dfs.fabric.microsoft.com`. Don't include any trailing slash `/`. To learn more, see [Connecting to Microsoft OneLake](/fabric/onelake/onelake-access-api).
- - `names`: Specifies the names of the workspace and the lakehouse. Use either this field or `guids`, don't use both. It has the following subfields:
+ - `names`: Specifies the names of the workspace and the lakehouse. Use either this field or `guids`. Don't use both. It has the following subfields:
- `workspaceName`: The name of the workspace.
- `lakehouseName`: The name of the lakehouse.
- - `guids`: Specifies the GUIDs of the workspace and the lakehouse. Use either this field or `names`, don't use both. It has the following subfields:
+ - `guids`: Specifies the GUIDs of the workspace and the lakehouse. Use either this field or `names`. Don't use both. It has the following subfields:
- `workspaceGuid`: The GUID of the workspace.
- `lakehouseGuid`: The GUID of the lakehouse.
- - `fabricePath`: The location of the data in the Fabric workspace. It can be either `tables` or `files`. If it's `tables`, the data is stored in the Fabric OneLake as tables. If it's `files`, the data is stored in the Fabric OneLake as files. If it's `files`, the `databaseFormat` must be `parquet`.
+ - `fabricPath`: The location of the data in the Fabric workspace. It can be either `tables` or `files`. If it's `tables`, the data is stored in the Fabric OneLake as tables. If it's `files`, the data is stored in the Fabric OneLake as files. If it's `files`, the `databaseFormat` must be `parquet`.
- `authentication`: The authentication field specifies the type and credentials for accessing the Microsoft Fabric OneLake. It can only be `systemAssignedManagedIdentity` for now. It has one subfield:
  - `systemAssignedManagedIdentity`: For using system managed identity for authentication. It has one subfield:
    - `audience`: A string for the managed identity token audience and it must be `https://storage.azure.com`.
spec:
messagePayloadType: "json" maxMessagesPerBatch: 10 clientId: id
- mqttSourceTopic: "orders"
+ mqttSourceTopic: "azure-iot-operations/data/opc-ua-connector-de/thermostat-de"
qos: 1 table:
- tableName: "orders"
+ tableName: thermostat
schema:
- - name: "orderId"
- format: int32
- optional: false
- mapping: "data.orderId"
- - name: "item"
+ - name: externalAssetId
format: utf8 optional: false
- mapping: "data.item"
- - name: "clientId"
+ mapping: $property.externalAssetId
+ - name: assetName
format: utf8 optional: false
- mapping: "$client_id"
- - name: "mqttTopic"
- format: utf8
+ mapping: DataSetWriterName
+ - name: CurrentTemperature
+ format: float32
optional: false
- mapping: "$topic"
- - name: "timestamp"
+ mapping: Payload.temperature.Value
+ - name: Pressure
+ format: float32
+ optional: true
+ mapping: "Payload.Tag 10.Value"
+ - name: Timestamp
format: timestamp optional: false
- mapping: "$received_time"
+ mapping: $received_time
```
-Escaped JSON like `{"data": "{\"orderId\": 181, \"item\": \"item181\"}"}` isn't supported and causes the connector to throw a "convertor found a null value" error. An example message for the `orders` topic that works with this schema:
+Stringified JSON like `"{\"SequenceNumber\": 4697, \"Timestamp\": \"2024-04-02T22:36:03.1827681Z\", \"DataSetWriterName\": \"thermostat-de\", \"MessageType\": \"ua-deltaframe\", \"Payload\": {\"temperature\": {\"SourceTimestamp\": \"2024-04-02T22:36:02.6949717Z\", \"Value\": 5506}, \"Tag 10\": {\"SourceTimestamp\": \"2024-04-02T22:36:02.6949888Z\", \"Value\": 5506}}}"` isn't supported and causes the connector to throw a *convertor found a null value* error. An example message for the `dlc` topic that works with this schema:
```json
{
- "data": {
- "orderId": 181,
- "item": "item181"
+ "SequenceNumber": 4697,
+ "Timestamp": "2024-04-02T22:36:03.1827681Z",
+ "DataSetWriterName": "thermostat-de",
+ "MessageType": "ua-deltaframe",
+ "Payload": {
+ "temperature": {
+ "SourceTimestamp": "2024-04-02T22:36:02.6949717Z",
+ "Value": 5506
+ },
+ "Tag 10": {
+ "SourceTimestamp": "2024-04-02T22:36:02.6949888Z",
+ "Value": 5506
+ }
  }
}
```
Which maps to:
-| orderId | item | clientId | mqttTopic | timestamp |
-| - | - | -- | | |
-| 181 | item181 | id | orders | 2023-07-28T12:45:59.324310806Z |
+| externalAssetId | assetName | CurrentTemperature | Pressure | mqttTopic | timestamp |
+| -- | -- | -- | -- | -- | -- |
+| 59ad3b8b-c840-43b5-b79d-7804c6f42172 | thermostat-de | 5506 | 5506 | dlc | 2024-04-02T22:36:03.1827681Z |
> [!IMPORTANT] > If the data schema is updated, for example a data type is changed or a name is changed, transformation of incoming data might stop working. You need to change the data table name if a schema change occurs.
logic-apps Logic Apps Enterprise Integration Rosettanet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-rosettanet.md
Last updated 01/31/2024
[!INCLUDE [logic-apps-sku-consumption](../../includes/logic-apps-sku-consumption.md)]
-To send and receive RosettaNet messages in workflows that you create using Azure Logic Apps, you can use the RosettaNet connector, which provides actions that manage and support communication that follows RosettaNet standards. RosettaNet is a non-profit consortium that has established standard processes for sharing business information. These standards are commonly used for supply chain processes and are widespread in the semiconductor, electronics, and logistics industries. The RosettaNet consortium creates and maintains Partner Interface Processes (PIPs), which provide common business process definitions for all RosettaNet message exchanges. RosettaNet is based on XML and defines message guidelines, interfaces for business processes, and implementation frameworks for communication between companies. For more information, visit the [RosettaNet site](https://resources.gs1us.org).
+To send and receive RosettaNet messages in workflows that you create using Azure Logic Apps, you can use the RosettaNet connector, which provides actions that manage and support communication that follows RosettaNet standards. RosettaNet is a non-profit consortium that has established standard processes for sharing business information. These standards are commonly used for supply chain processes and are widespread in the semiconductor, electronics, and logistics industries. The RosettaNet consortium creates and maintains Partner Interface Processes (PIPs), which provide common business process definitions for all RosettaNet message exchanges. RosettaNet is based on XML and defines message guidelines, interfaces for business processes, and implementation frameworks for communication between companies. For more information, visit the [RosettaNet site](https://www.gs1us.org/resources/rosettanet).
The connector is based on the RosettaNet Implementation Framework (RNIF) version 2.0.01 and supports all PIPs defined by this version. RNIF is an open network application framework that enables business partners to collaboratively run RosettaNet PIPs. This framework defines the message structure, the need for acknowledgments, Multipurpose Internet Mail Extensions (MIME) encoding, and the digital signature. Communication with the partner can be synchronous or asynchronous. The connector provides the following capabilities:
The following concepts and terms are unique to the RosettaNet specification and
The RosettaNet organization creates and maintains PIPs, which provide common business process definitions for all RosettaNet message exchanges. Each PIP specification provides a document type definition (DTD) file and a message guideline document. The DTD file defines the service-content message structure. The message guideline document, which is a human-readable HTML file, specifies element-level constraints. Together, these files provide a complete definition of the business process.
- PIPs are categorized by a high-level business function, or cluster, and a subfunction, or segment. For example, "3A4" is the PIP for Purchase Order, while "3" is the Order Management function, and "3A" is the Quote & Order Entry subfunction. For more information, visit the [RosettaNet site](https://resources.gs1us.org).
+ PIPs are categorized by a high-level business function, or cluster, and a subfunction, or segment. For example, "3A4" is the PIP for Purchase Order, while "3" is the Order Management function, and "3A" is the Quote & Order Entry subfunction. For more information, visit the [RosettaNet site](https://www.gs1us.org/resources/rosettanet).
* **Action**
machine-learning Concept Automl Forecasting At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automl-forecasting-at-scale.md
Last updated 08/01/2023
show_latex: true
-# Forecasting at scale: many models and distributed training (preview)
-
+# Forecasting at scale: many models and distributed training
This article is about training forecasting models on large quantities of historical data. Instructions and examples for training forecasting models in AutoML can be found in our [set up AutoML for time series forecasting](./how-to-auto-train-forecast.md) article.
The many models [components](concept-component.md) in AutoML enable you to train
:::image type="content" source="./media/how-to-auto-train-forecast/many-models.svg" alt-text="Diagram showing the AutoML many models workflow.":::
-The many models training component applies AutoML's [model sweeping and selection](concept-automl-forecasting-sweeping.md) independently to each store in this example. This model independence aids scalability and can benefit model accuracy especially when the stores have diverging sales dynamics. However, a single model approach may yield more accurate forecasts when there are common sales dynamics. See the [distributed DNN training](#distributed-dnn-training) section for more details on that case.
+The many models training component applies AutoML's [model sweeping and selection](concept-automl-forecasting-sweeping.md) independently to each store in this example. This model independence aids scalability and can benefit model accuracy especially when the stores have diverging sales dynamics. However, a single model approach may yield more accurate forecasts when there are common sales dynamics. See the [distributed DNN training](#distributed-dnn-training-preview) section for more details on that case.
You can configure the data partitioning, the [AutoML settings](how-to-auto-train-forecast.md#configure-experiment) for the models, and the degree of parallelism for many models training jobs. For examples, see our guide section on [many models components](how-to-auto-train-forecast.md#forecasting-at-scale-many-models).
AutoML supports the following features for hierarchical time series (HTS):
HTS components in AutoML are built on top of [many models](#many-models), so HTS shares the scalable properties of many models. For examples, see our guide section on [HTS components](how-to-auto-train-forecast.md#forecasting-at-scale-hierarchical-time-series).
-## Distributed DNN training
+## Distributed DNN training (preview)
+ Data scenarios featuring large amounts of historical observations and/or large numbers of related time series may benefit from a scalable, single model approach. Accordingly, **AutoML supports distributed training and model search on temporal convolutional network (TCN) models**, which are a type of deep neural network (DNN) for time series data. For more information on AutoML's TCN model class, see our [DNN article](concept-automl-forecasting-deep-learning.md).
machine-learning How To Auto Train Forecast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-forecast.md
Also see the [demand forecasting with hierarchical time series notebook](https:/
## Forecasting at scale: distributed DNN training
-* To learn how distributed training works for forecasting tasks, see our [forecasting at scale article](concept-automl-forecasting-at-scale.md#distributed-dnn-training).
+* To learn how distributed training works for forecasting tasks, see our [forecasting at scale article](concept-automl-forecasting-at-scale.md#distributed-dnn-training-preview).
* See our [setup distributed training for tabular data](how-to-configure-auto-train.md#automl-at-scale-distributed-training) article section for code samples.

## Example notebooks
machine-learning How To Configure Auto Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-auto-train.md
limits:
### Distributed training for forecasting
-To learn how distributed training works for forecasting tasks, see our [forecasting at scale](concept-automl-forecasting-at-scale.md#distributed-dnn-training) article. To use distributed training for forecasting, you need to set the `training_mode`, `enable_dnn_training`, `max_nodes`, and optionally the `max_concurrent_trials` properties of the job object.
+To learn how distributed training works for forecasting tasks, see our [forecasting at scale](concept-automl-forecasting-at-scale.md#distributed-dnn-training-preview) article. To use distributed training for forecasting, you need to set the `training_mode`, `enable_dnn_training`, `max_nodes`, and optionally the `max_concurrent_trials` properties of the job object.
Property | Description
-- | --
machine-learning How To Create Component Pipeline Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-component-pipeline-python.md
You need to prepare the input data for this image classification pipeline.
Fashion-MNIST is a dataset of fashion images divided into 10 classes. Each image is a 28x28 grayscale image and there are 60,000 training and 10,000 test images. As an image classification problem, Fashion-MNIST is harder than the classic MNIST handwritten digit database. It's distributed in the same compressed binary form as the original [handwritten digit database](http://yann.lecun.com/exdb/mnist/).
-To define the input data of a job that references the Web-based data, run:
--
-[!notebook-python[] (~/azureml-examples-main/sdk/python/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=define-input)]
-
+Import all the required Azure Machine Learning libraries that you need.
By defining an `Input`, you create a reference to the data source location. The data remains in its existing location, so no extra storage cost is incurred.
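For example, a minimal sketch of defining such an `Input` with the Azure Machine Learning Python SDK v2 (`azure-ai-ml`); the storage path is a placeholder for wherever the Fashion-MNIST files are published:

```python
from azure.ai.ml import Input
from azure.ai.ml.constants import AssetTypes

# Reference the web-hosted Fashion-MNIST files without copying them;
# the path below is a placeholder for the published data location.
fashion_ds = Input(
    type=AssetTypes.URI_FOLDER,
    path="https://<storage-account>.blob.core.windows.net/<container>/fashion-mnist/",
)
```

You can then pass `fashion_ds` as a pipeline or component input; the data stays at its source location.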
machine-learning How To Deploy Models Cohere Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-models-cohere-command.md
+
+ Title: How to deploy Cohere Command models with Azure Machine Learning studio
+
+description: Learn how to deploy Cohere Command models with Azure Machine Learning studio.
++++ Last updated : 04/02/2024+++++
+#This functionality is also available in Azure AI Studio: /azure/ai-studio/how-to/deploy-models-cohere.md
+
+# How to deploy Cohere Command models with Azure Machine Learning studio
+Cohere offers two Command models in Azure Machine Learning studio. These models are available with pay-as-you-go token based billing with Models as a Service.
+
+* Cohere Command R
+* Cohere Command R+
+
+You can browse the Cohere family of models in the model catalog by filtering on the Cohere collection.
+
+## Models
+
+In this article, you learn how to use Azure Machine Learning studio to deploy the Cohere Command models as a service with pay-as-you-go billing.
+
+### Cohere Command R
+Command R is a highly performant generative large language model, optimized for a variety of use cases including reasoning, summarization, and question answering.
++
+*Model Architecture:* This is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model uses supervised fine-tuning (SFT) and preference training to align model behavior to human preferences for helpfulness and safety.
+
+*Languages covered:* The model is optimized to perform well in the following languages: English, French, Spanish, Italian, German, Brazilian Portuguese, Japanese, Korean, Simplified Chinese, and Arabic.
+
+Pre-training data additionally included the following 13 languages: Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, Persian.
+
+*Context length:* Command R supports a context length of 128K.
+
+*Input:* Models input text only.
+
+*Output:* Models generate text only.
+
+
+### Cohere Command R+
+Command R+ is a highly performant generative large language model, optimized for a variety of use cases including reasoning, summarization, and question answering.
++
+*Model Architecture:* This is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model uses supervised fine-tuning (SFT) and preference training to align model behavior to human preferences for helpfulness and safety.
+
+*Languages covered:* The model is optimized to perform well in the following languages: English, French, Spanish, Italian, German, Brazilian Portuguese, Japanese, Korean, Simplified Chinese, and Arabic.
+
+Pre-training data additionally included the following 13 languages: Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, Persian.
+
+*Context length:* Command R+ supports a context length of 128K.
+
+*Input:* Models input text only.
+
+*Output:* Models generate text only.
++
+## Deploy with pay-as-you-go
+
+Certain models in the model catalog can be deployed as a service with pay-as-you-go billing, providing a way to consume them as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need. This deployment option doesn't require quota from your subscription.
+
+The previously mentioned Cohere models can be deployed as a service with pay-as-you-go, and are offered by Cohere through the Microsoft Azure Marketplace. Cohere can change or update the terms of use and pricing of this model.
+
+### Prerequisites
+
+- An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.
+- An Azure Machine Learning workspace. If you don't have one, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create it.
+
+ > [!IMPORTANT]
+ > Pay-as-you-go model deployment offering is only available in workspaces created in EastUS, EastUS2 or Sweden Central regions.
+
+- Azure role-based access controls (Azure RBAC) are used to grant access to operations. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the Resource Group.
+
+ For more information on permissions, see [Manage access to an Azure Machine Learning workspace](how-to-assign-roles.md).
+
+### Create a new deployment
+
+To create a deployment:
+
+1. Go to [Azure Machine Learning studio](https://ml.azure.com/home).
+1. Select the workspace in which you want to deploy your models. To use the pay-as-you-go model deployment offering, your workspace must belong to the EastUS, EastUS2 or Sweden Central region.
+1. Choose the model you want to deploy from the [model catalog](https://ml.azure.com/model/catalog).
+
+ Alternatively, you can initiate deployment by going to your workspace and selecting **Endpoints** > **Serverless endpoints** > **Create**.
+
+1. On the model's overview page in the model catalog, select **Deploy** and then **Pay-as-you-go**.
+
+ :::image type="content" source="media/how-to-deploy-models-cohere-command/command-r-deploy-pay-as-you-go.png" alt-text="A screenshot showing how to deploy a model with the pay-as-you-go option." lightbox="media/how-to-deploy-models-cohere-command/command-r-deploy-pay-as-you-go.png":::
+
+1. In the deployment wizard, select the link to **Azure Marketplace Terms** to learn more about the terms of use.
+1. You can also select the **Marketplace offer details** tab to learn about pricing for the selected model.
+1. If this is your first time deploying the model in the workspace, you have to subscribe your workspace for the particular offering of the model. This step requires that your account has the **Azure AI Developer role** permissions on the Resource Group, as listed in the prerequisites. Each workspace has its own subscription to the particular Azure Marketplace offering, which allows you to control and monitor spending. Select **Subscribe and Deploy**. Currently you can have only one deployment for each model within a workspace.
+
+ :::image type="content" source="media/how-to-deploy-models-cohere-command/command-r-marketplace-terms.png" alt-text="A screenshot showing the terms and conditions of a given model." lightbox="media/how-to-deploy-models-cohere-command/command-r-marketplace-terms.png":::
+
+1. Once you subscribe the workspace for the particular Azure Marketplace offering, subsequent deployments of the _same_ offering in the _same_ workspace don't require subscribing again. If this scenario applies to you, there's a **Continue to deploy** option to select.
+
+ :::image type="content" source="media/how-to-deploy-models-cohere-command/command-r-existing-subscription.png" alt-text="A screenshot showing a project that is already subscribed to the offering." lightbox="media/how-to-deploy-models-cohere-command/command-r-existing-subscription.png":::
+
+1. Give the deployment a name. This name becomes part of the deployment API URL. This URL must be unique in each Azure region.
+
+ :::image type="content" source="media/how-to-deploy-models-cohere-command/command-r-deployment-name.png" alt-text="A screenshot showing how to indicate the name of the deployment you want to create." lightbox="media/how-to-deploy-models-cohere-command/command-r-deployment-name.png":::
+
+1. Select **Deploy**. Wait until the deployment is finished and you're redirected to the serverless endpoints page.
+1. Select the endpoint to open its Details page.
+1. Select the **Test** tab to start interacting with the model.
+1. You can always find the endpoint's details, URL, and access keys by navigating to **Workspace** > **Endpoints** > **Serverless endpoints**.
+1. Take note of the **Target** URL and the **Secret Key**. For more information on using the APIs, see the [reference](#chat-api-reference-for-cohere-command-models-deployed-as-a-service) section.
+
+To learn about billing for models deployed with pay-as-you-go, see [Cost and quota considerations for Cohere models deployed as a service](#cost-and-quota-considerations-for-models-deployed-as-a-service).
+
+### Consume the models as a service
+
+The previously mentioned Cohere models can be consumed using the chat API.
+
+1. In the **workspace**, select **Endpoints** > **Serverless endpoints**.
+1. Find and select the deployment you created.
+1. Copy the **Target** URL and the **Key** token values.
+1. Cohere exposes two routes for inference with the Command R and Command R+ models. `v1/chat/completions` adheres to the Azure AI Generative Messages API schema, and `v1/chat` supports Cohere's native API schema.
+
+For more information on using the APIs, see the [reference](#chat-api-reference-for-cohere-command-models-deployed-as-a-service) section.
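As an illustrative sketch (not the only supported client), you can call the `v1/chat/completions` route with Python and the `requests` package, using the Target URL and Key noted earlier; the endpoint and key values are placeholders:

```python
import requests

ENDPOINT = "<your-target-url>"   # Target URL from the serverless endpoint's details page
KEY = "<your-key>"               # Key from the same page

payload = {
    "messages": [
        {"role": "user", "content": "Give me three talking points about serverless endpoints."}
    ],
    "max_tokens": 256,
    "temperature": 0.3,
}

response = requests.post(
    f"{ENDPOINT}/v1/chat/completions",
    headers={"Authorization": f"Bearer {KEY}", "Content-Type": "application/json"},
    json=payload,
)
print(response.json()["choices"][0]["message"]["content"])
```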
+
+## Chat API reference for Cohere Command models deployed as a service
+
+### v1/chat/completions
+#### Request
+
+```
+ POST /v1/chat/completions HTTP/1.1
+ Host: <DEPLOYMENT_URI>
+ Authorization: Bearer <TOKEN>
+ Content-type: application/json
+```
+
+#### v1/chat/completions request schema
+
+Cohere Command R and Command R+ accept the following parameters for a `v1/chat/completions` response inference call:
+
+| Property | Type | Default | Description |
+| | | | |
+| `messages` | `array` | `None` | Text input for the model to respond to. |
+| `max_tokens` | `integer` | `None` | The maximum number of tokens the model generates as part of the response. Note: Setting a low value might result in incomplete generations. If not specified, tokens are generated until end of sequence. |
+| `stop` | `array of strings` | `None` | The generated text is cut at the end of the earliest occurrence of a stop sequence. The sequence is included in the text.|
+| `stream` | `boolean` | `False` | When `true`, the response is a JSON stream of events. The final event contains the complete response, and has an `event_type` of `"stream-end"`. Streaming is beneficial for user interfaces that render the contents of the response piece by piece, as it gets generated. |
+| `temperature` | `float` | `0.3` |Use a lower value to decrease randomness in the response. Randomness can be further maximized by increasing the value of the `p` parameter. Min value is 0, and max is 2. |
+| `top_p` | `float` |`0.75` |Use a lower value to ignore less probable options. Set to 0 or 1.0 to disable. If both p and k are enabled, p acts after k. min value of 0.01, max value of 0.99.|
+| `frequency_penalty` | `float` | `0` |Used to reduce repetitiveness of generated tokens. The higher the value, the stronger a penalty is applied to previously present tokens, proportional to how many times they have already appeared in the prompt or prior generation. Min value of 0.0, max value of 1.0.|
+| `presence_penalty` | `float` |`0` |Used to reduce repetitiveness of generated tokens. Similar to `frequency_penalty`, except that this penalty is applied equally to all tokens that have already appeared, regardless of their exact frequencies. Min value of 0.0, max value of 1.0.|
+| `seed` | `integer` |`None` |If specified, the backend makes a best effort to sample tokens deterministically, such that repeated requests with the same seed and parameters should return the same result. However, determinism can't be guaranteed.|
+| `tools` | `list[Tool]` | `None` | A list of available tools (functions) that the model might suggest invoking before producing a text response. |
+
+`response_format` and `tool_choice` aren't yet supported parameters for the Command R and Command R+ models.
+
+<br/>
+
+A System or User Message supports the following properties:
+
+| Property | Type | Default | Description |
+| | | | |
+| `role` | `enum` | Required | `role=system` or `role=user`. |
+|`content` |`string` |Required |Text input for the model to respond to. |
+
+An Assistant Message supports the following properties:
+
+| Property | Type | Default | Description |
+| | | | |
+| `role` | `enum` | Required | `role=assistant`|
+|`content` |`string` |Required |The contents of the assistant message. |
+|`tool_calls` |`array` |None |The tool calls generated by the model, such as function calls. |
+
+A Tool Message supports the following properties:
+
+| Property | Type | Default | Description |
+| | | | |
+| `role` | `enum` | Required | `role=tool`|
+|`content` |`string` |Required |The contents of the tool message. |
+|`tool_call_id` |`string` |None |Tool call that this message is responding to. |
+
+<br/>
+
+#### v1/chat/completions response schema
+
+The response payload is a dictionary with the following fields:
+
+| Key | Type | Description |
+| | | |
+| `id` | `string` | A unique identifier for the completion. |
+| `choices` | `array` | The list of completion choices the model generated for the input messages. |
+| `created` | `integer` | The Unix timestamp (in seconds) of when the completion was created. |
+| `model` | `string` | The model_id used for completion. |
+| `object` | `string` | chat.completion. |
+| `usage` | `object` | Usage statistics for the completion request. |
+
+The `choices` object is a dictionary with the following fields:
+
+| Key | Type | Description |
+| | | |
+| `index` | `integer` | Choice index. |
+| `messages` or `delta` | `string` | Chat completion result in messages object. When streaming mode is used, delta key is used. |
+| `finish_reason` | `string` | The reason the model stopped generating tokens. |
+
+The `usage` object is a dictionary with the following fields:
+
+| Key | Type | Description |
+| | | |
+| `prompt_tokens` | `integer` | Number of tokens in the prompt. |
+| `completion_tokens` | `integer` | Number of tokens generated in the completion. |
+| `total_tokens` | `integer` | Total tokens. |
++
+#### Examples
+
+Request:
+
+```json
+ "messages": [
+ {
+ "role": "user",
+ "content": "What is the weather like in Boston?"
+ },
+ {
+ "role": "assistant",
+ "tool_calls": [
+ {
+ "id": "call_ceRrx0tP7bYPTClugKrOgvh4",
+ "type": "function",
+ "function": {
+ "name": "get_current_weather",
+ "arguments": "{\"location\":\"Boston\"}"
+ }
+ }
+ ]
+ },
+ {
+ "role": "tool",
+ "content": "{\"temperature\":30}",
+ "tool_call_id": "call_ceRrx0tP7bYPTClugKrOgvh4"
+ }
+ ]
+```
+
+Response:
+
+```json
+ {
+ "id": "df23b9f7-e6bd-493f-9437-443c65d428a1",
+ "choices": [
+ {
+ "index": 0,
+ "finish_reason": "stop",
+ "message": {
+ "role": "assistant",
+ "content": "Right now, the weather in Boston is cool, with temperatures of around 30┬░F. Stay warm!"
+ }
+ }
+ ],
+ "created": 1711734274,
+ "model": "command-r",
+ "object": "chat.completion",
+ "usage": {
+ "prompt_tokens": 744,
+ "completion_tokens": 23,
+ "total_tokens": 767
+ }
+ }
+```
+
+### v1/chat
+#### Request
+
+```
+ POST /v1/chat HTTP/1.1
+ Host: <DEPLOYMENT_URI>
+ Authorization: Bearer <TOKEN>
+ Content-type: application/json
+```
+
+#### v1/chat request schema
+
+Cohere Command R and Command R+ accept the following parameters for a `v1/chat` response inference call:
+
+|Key |Type |Default |Description |
+|||||
+|`message` |`string` |Required |Text input for the model to respond to. |
+|`chat_history` |`array of messages` |`None` |A list of previous messages between the user and the model, meant to give the model conversational context for responding to the user's message. |
+|`documents` |`array` |`None ` |A list of relevant documents that the model can cite to generate a more accurate reply. Each document is a string-string dictionary. Keys and values from each document are serialized to a string and passed to the model. The resulting generation includes citations that reference some of these documents. Some suggested keys are "text", "author", and "date". For better generation quality, keep the total word count of the strings in the dictionary to under 300 words. An `_excludes` field (array of strings) can be optionally supplied to omit some key-value pairs from being shown to the model. The omitted fields still show up in the citation object. The "_excludes" field isn't passed to the model. See [Document Mode](https://docs.cohere.com/docs/retrieval-augmented-generation-rag#document-mode) guide from Cohere docs. |
+|`search_queries_only` |`boolean` |`false` |When `true`, the response only contains a list of generated search queries, but no search takes place, and no reply from the model to the user's `message` is generated.|
+|`stream` |`boolean` |`false` |When `true`, the response is a JSON stream of events. The final event contains the complete response, and has an `event_type` of `"stream-end"`. Streaming is beneficial for user interfaces that render the contents of the response piece by piece, as it gets generated.|
+|`max_tokens` |`integer` |None |The maximum number of tokens the model generates as part of the response. Note: Setting a low value might result in incomplete generations. If not specified, generates tokens until end of sequence.|
+|`temperature` |`float` |`0.3` |Use a lower value to decrease randomness in the response. Randomness can be further maximized by increasing the value of the `p` parameter. Min value is 0, and max is 2. |
+|`p` |`float` |`0.75` |Use a lower value to ignore less probable options. Set to 0 or 1.0 to disable. If both p and k are enabled, p acts after k. min value of 0.01, max value of 0.99.|
+|`k` |`float` |`0` |Specify the number of token choices the model uses to generate the next token. If both p and k are enabled, p acts after k. Min value is 0, max value is 500.|
+|`prompt_truncation` |`enum string` |`OFF` |Accepts `AUTO_PRESERVE_ORDER`, `AUTO`, `OFF`. Dictates how the prompt is constructed. With `prompt_truncation` set to `AUTO_PRESERVE_ORDER`, some elements from `chat_history` and `documents` are dropped to construct a prompt that fits within the model's context length limit. During this process the order of the documents and chat history is preserved. With `prompt_truncation` set to "OFF", no elements are dropped.|
+|`stop_sequences` |`array of strings` |`None` |The generated text is cut at the end of the earliest occurrence of a stop sequence. The sequence is included in the text. |
+|`frequency_penalty` |`float` |`0` |Used to reduce repetitiveness of generated tokens. The higher the value, the stronger a penalty is applied to previously present tokens, proportional to how many times they have already appeared in the prompt or prior generation. Min value of 0.0, max value of 1.0.|
+|`presence_penalty` |`float` |`0` |Used to reduce repetitiveness of generated tokens. Similar to `frequency_penalty`, except that this penalty is applied equally to all tokens that have already appeared, regardless of their exact frequencies. Min value of 0.0, max value of 1.0.|
+|`seed` |`integer` |`None` |If specified, the backend makes a best effort to sample tokens deterministically, such that repeated requests with the same seed and parameters should return the same result. However, determinism can't be guaranteed.|
+|`return_prompt` |`boolean ` |`false ` |Returns the full prompt that was sent to the model when `true`. |
+|`tools` |`array of objects` |`None` |_Field is subject to changes._ A list of available tools (functions) that the model might suggest invoking before producing a text response. When `tools` is passed (without `tool_results`), the `text` field in the response is `""` and the `tool_calls` field in the response is populated with a list of tool calls that need to be made. If no calls need to be made, the `tool_calls` array is empty.|
+|`tool_results` |`array of objects` |`None` |_Field is subject to changes._ A list of results from invoking tools recommended by the model in the previous chat turn. Results are used to produce a text response and are referenced in citations. When using `tool_results`, `tools` must be passed as well. Each tool_result contains information about how it was invoked, and a list of outputs in the form of dictionaries. Cohere's unique fine-grained citation logic requires the output to be a list. In case the output is just one item, for example, `{"status": 200}`, still wrap it inside a list. |
+
+The `chat_history` object requires the following fields:
+
+|Key |Type |Description |
+||||
+|`role` |`enum string` |Takes `USER`, `SYSTEM`, or `CHATBOT`. |
+|`message` |`string` |Text contents of the message. |
+
+The `documents` object has the following optional fields:
+
+|Key |Type |Default| Description |
+|||||
+|`id` |`string` |`None` |Can be supplied to identify the document in the citations. This field isn't passed to the model. |
+|`_excludes` |`array of strings` |`None`| Can be optionally supplied to omit some key-value pairs from being shown to the model. The omitted fields still show up in the citation object. The `_excludes` field isn't passed to the model. |
+
+#### v1/chat response schema
+
+Response fields are fully documented on [Cohere's Chat API reference](https://docs.cohere.com/reference/chat). The response object always contains:
+
+|Key |Type |Description |
+||||
+|`response_id` |`string` |Unique identifier for chat completion. |
+|`generation_id` |`string` |Unique identifier for chat completion, used with Feedback endpoint on Cohere's platform. |
+|`text` |`string` |Model's response to chat message input. |
+|`finish_reason` |`enum string` |Why the generation was completed. Can be any of the following values: `COMPLETE`, `ERROR`, `ERROR_TOXIC`, `ERROR_LIMIT`, `USER_CANCEL` or `MAX_TOKENS` |
+|`token_count` |`integer` |Count of tokens used. |
+|`meta` |`string` |API usage data, including current version and billable tokens. |
+
+<br/>
+
+#### Documents
+If `documents` are specified in the request, there are two other fields in the response:
+
+|Key |Type |Description |
+||||
+|`documents` |`array of objects` |Lists the documents that were cited in the response. |
+|`citations` |`array of objects` |Specifies which part of the answer was found in a given document. |
+
+`citations` is an array of objects with the following required fields:
+
+|Key |Type |Description |
+||||
+|`start` |`integer` |The index of text that the citation starts at, counting from zero. For example, a generation of `Hello, world!` with a citation on `world` would have a start value of `7`. This is because the citation starts at `w`, which is the seventh character. |
+|`end` |`integer` |The index of text that the citation ends after, counting from zero. For example, a generation of `Hello, world!` with a citation on `world` would have an end value of `11`. This is because the citation ends after `d`, which is the eleventh character. |
+|`text` |`string` |The text of the citation. For example, a generation of `Hello, world!` with a citation of `world` would have a text value of `world`. |
+|`document_ids` |`array of strings` |Identifiers of documents cited by this section of the generated reply. |
+
+#### Tools
+If `tools` are specified and invoked by the model, there's another field in the response:
+
+|Key |Type |Description |
+||||
+|`tool_calls ` |`array of objects` |Contains the tool calls generated by the model. Use it to invoke your tools. |
+
+`tool_calls` is an array of objects with the following fields:
+
+|Key |Type |Description |
+||||
+|`name` |`string` |Name of the tool to call. |
+|`parameters` |`object` |The name and value of the parameters to use when invoking a tool. |
+
+#### Search_queries_only
+If `search_queries_only=TRUE` is specified in the request, there are two other fields in the response:
+
+|Key |Type |Description |
+||||
+|`is_search_required` |`boolean` |Instructs the model to generate a search query. |
+|`search_queries` |`array of objects` |Object that contains a list of search queries. |
+
+`search_queries` is an array of objects with the following fields:
+
+|Key |Type |Description |
+||||
+|`text` |`string` |The text of the search query. |
+|`generation_id` |`string` |Unique identifier for the generated search query. Useful for submitting feedback. |
+
+#### Examples
+
+##### Chat - Completions
+The following text is a sample request call to get chat completions from the Cohere Command model. Use when generating a chat completion.
+
+Request:
+
+```json
+ {
+ "chat_history": [
+ {"role":"USER", "message": "What is an interesting new role in AI if I don't have an ML background"},
+ {"role":"CHATBOT", "message": "You could explore being a prompt engineer!"}
+ ],
+ "message": "What are some skills I should have"
+ }
+```
+
+Response:
+
+```json
+ {
+ "response_id": "09613f65-c603-41e6-94b3-a7484571ac30",
+ "text": "Writing skills are very important for prompt engineering. Some other key skills are:\n- Creativity\n- Awareness of biases\n- Knowledge of how NLP models work\n- Debugging skills\n\nYou can also have some fun with it and try to create some interesting, innovative prompts to train an AI model that can then be used to create various applications.",
+ "generation_id": "6d31a57f-4d94-4b05-874d-36d0d78c9549",
+ "finish_reason": "COMPLETE",
+ "token_count": {
+ "prompt_tokens": 99,
+ "response_tokens": 70,
+ "total_tokens": 169,
+ "billed_tokens": 151
+ },
+ "meta": {
+ "api_version": {
+ "version": "1"
+ },
+ "billed_units": {
+ "input_tokens": 81,
+ "output_tokens": 70
+ }
+ }
+ }
+```
+
+##### Chat - Grounded generation and RAG capabilities
+
+Command R and Command R+ are trained for RAG via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. We introduce that prompt template via the `documents` parameter. The document snippets should be chunks, rather than long documents, typically around 100-400 words per chunk. Document snippets consist of key-value pairs. The keys should be short descriptive strings. The values can be text or semi-structured.
+
+Request:
+
+```json
+ {
+ "message": "Where do the tallest penguins live?",
+ "documents": [
+ {
+ "title": "Tall penguins",
+ "snippet": "Emperor penguins are the tallest."
+ },
+ {
+ "title": "Penguin habitats",
+ "snippet": "Emperor penguins only live in Antarctica."
+ }
+ ]
+ }
+```
+
+Response:
+
+```json
+ {
+ "response_id": "d7e72d2e-06c0-469f-8072-a3aa6bd2e3b2",
+ "text": "Emperor penguins are the tallest species of penguin and they live in Antarctica.",
+ "generation_id": "b5685d8d-00b4-48f1-b32f-baebabb563d8",
+ "finish_reason": "COMPLETE",
+ "token_count": {
+ "prompt_tokens": 615,
+ "response_tokens": 15,
+ "total_tokens": 630,
+ "billed_tokens": 22
+ },
+ "meta": {
+ "api_version": {
+ "version": "1"
+ },
+ "billed_units": {
+ "input_tokens": 7,
+ "output_tokens": 15
+ }
+ },
+ "citations": [
+ {
+ "start": 0,
+ "end": 16,
+ "text": "Emperor penguins",
+ "document_ids": [
+ "doc_0"
+ ]
+ },
+ {
+ "start": 69,
+ "end": 80,
+ "text": "Antarctica.",
+ "document_ids": [
+ "doc_1"
+ ]
+ }
+ ],
+ "documents": [
+ {
+ "id": "doc_0",
+ "snippet": "Emperor penguins are the tallest.",
+ "title": "Tall penguins"
+ },
+ {
+ "id": "doc_1",
+ "snippet": "Emperor penguins only live in Antarctica.",
+ "title": "Penguin habitats"
+ }
+ ]
+ }
+```
+
+##### Chat - Tool use
+
+If invoking tools or generating a response based on tool results, use the following parameters.
+
+Request:
+
+```json
+ {
+ "message":"I'd like 4 apples and a fish please",
+ "tools":[
+ {
+ "name":"personal_shopper",
+ "description":"Returns items and requested volumes to purchase",
+ "parameter_definitions":{
+ "item":{
+ "description":"the item requested to be purchased, in all caps eg. Bananas should be BANANAS",
+ "type": "str",
+ "required": true
+ },
+ "quantity":{
+ "description": "how many of the items should be purchased",
+ "type": "int",
+ "required": true
+ }
+ }
+ }
+ ],
+
+ "tool_results": [
+ {
+ "call": {
+ "name": "personal_shopper",
+ "parameters": {
+ "item": "Apples",
+ "quantity": 4
+ },
+ "generation_id": "cb3a6e8b-6448-4642-b3cd-b1cc08f7360d"
+ },
+ "outputs": [
+ {
+ "response": "Sale completed"
+ }
+ ]
+ },
+ {
+ "call": {
+ "name": "personal_shopper",
+ "parameters": {
+ "item": "Fish",
+ "quantity": 1
+ },
+ "generation_id": "cb3a6e8b-6448-4642-b3cd-b1cc08f7360d"
+ },
+ "outputs": [
+ {
+ "response": "Sale not completed"
+ }
+ ]
+ }
+ ]
+ }
+```
+
+Response:
+
+```json
+ {
+ "response_id": "fa634da2-ccd1-4b56-8308-058a35daa100",
+ "text": "I've completed the sale for 4 apples. \n\nHowever, there was an error regarding the fish; it appears that there is currently no stock.",
+ "generation_id": "f567e78c-9172-4cfa-beba-ee3c330f781a",
+ "chat_history": [
+ {
+ "message": "I'd like 4 apples and a fish please",
+ "response_id": "fa634da2-ccd1-4b56-8308-058a35daa100",
+ "generation_id": "a4c5da95-b370-47a4-9ad3-cbf304749c04",
+ "role": "User"
+ },
+ {
+ "message": "I've completed the sale for 4 apples. \n\nHowever, there was an error regarding the fish; it appears that there is currently no stock.",
+ "response_id": "fa634da2-ccd1-4b56-8308-058a35daa100",
+ "generation_id": "f567e78c-9172-4cfa-beba-ee3c330f781a",
+ "role": "Chatbot"
+ }
+ ],
+ "finish_reason": "COMPLETE",
+ "token_count": {
+ "prompt_tokens": 644,
+ "response_tokens": 31,
+ "total_tokens": 675,
+ "billed_tokens": 41
+ },
+ "meta": {
+ "api_version": {
+ "version": "1"
+ },
+ "billed_units": {
+ "input_tokens": 10,
+ "output_tokens": 31
+ }
+ },
+ "citations": [
+ {
+ "start": 5,
+ "end": 23,
+ "text": "completed the sale",
+ "document_ids": [
+ ""
+ ]
+ },
+ {
+ "start": 113,
+ "end": 132,
+ "text": "currently no stock.",
+ "document_ids": [
+ ""
+ ]
+ }
+ ],
+ "documents": [
+ {
+ "response": "Sale completed"
+ }
+ ]
+ }
+```
+
+Once you run your function and receive the tool outputs, you can pass them back to the model to generate a response for the user.
+
+Request:
+
+```json
+ {
+ "message":"I'd like 4 apples and a fish please",
+ "tools":[
+ {
+ "name":"personal_shopper",
+ "description":"Returns items and requested volumes to purchase",
+ "parameter_definitions":{
+ "item":{
+ "description":"the item requested to be purchased, in all caps eg. Bananas should be BANANAS",
+ "type": "str",
+ "required": true
+ },
+ "quantity":{
+ "description": "how many of the items should be purchased",
+ "type": "int",
+ "required": true
+ }
+ }
+ }
+ ],
+
+ "tool_results": [
+ {
+ "call": {
+ "name": "personal_shopper",
+ "parameters": {
+ "item": "Apples",
+ "quantity": 4
+ },
+ "generation_id": "cb3a6e8b-6448-4642-b3cd-b1cc08f7360d"
+ },
+ "outputs": [
+ {
+ "response": "Sale completed"
+ }
+ ]
+ },
+ {
+ "call": {
+ "name": "personal_shopper",
+ "parameters": {
+ "item": "Fish",
+ "quantity": 1
+ },
+ "generation_id": "cb3a6e8b-6448-4642-b3cd-b1cc08f7360d"
+ },
+ "outputs": [
+ {
+ "response": "Sale not completed"
+ }
+ ]
+ }
+ ]
+ }
+```
+
+Response:
+
+```json
+ {
+ "response_id": "fa634da2-ccd1-4b56-8308-058a35daa100",
+ "text": "I've completed the sale for 4 apples. \n\nHowever, there was an error regarding the fish; it appears that there is currently no stock.",
+ "generation_id": "f567e78c-9172-4cfa-beba-ee3c330f781a",
+ "chat_history": [
+ {
+ "message": "I'd like 4 apples and a fish please",
+ "response_id": "fa634da2-ccd1-4b56-8308-058a35daa100",
+ "generation_id": "a4c5da95-b370-47a4-9ad3-cbf304749c04",
+ "role": "User"
+ },
+ {
+ "message": "I've completed the sale for 4 apples. \n\nHowever, there was an error regarding the fish; it appears that there is currently no stock.",
+ "response_id": "fa634da2-ccd1-4b56-8308-058a35daa100",
+ "generation_id": "f567e78c-9172-4cfa-beba-ee3c330f781a",
+ "role": "Chatbot"
+ }
+ ],
+ "finish_reason": "COMPLETE",
+ "token_count": {
+ "prompt_tokens": 644,
+ "response_tokens": 31,
+ "total_tokens": 675,
+ "billed_tokens": 41
+ },
+ "meta": {
+ "api_version": {
+ "version": "1"
+ },
+ "billed_units": {
+ "input_tokens": 10,
+ "output_tokens": 31
+ }
+ },
+ "citations": [
+ {
+ "start": 5,
+ "end": 23,
+ "text": "completed the sale",
+ "document_ids": [
+ ""
+ ]
+ },
+ {
+ "start": 113,
+ "end": 132,
+ "text": "currently no stock.",
+ "document_ids": [
+ ""
+ ]
+ }
+ ],
+ "documents": [
+ {
+ "response": "Sale completed"
+ }
+ ]
+ }
+```
+
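+The payloads in these examples can be posted directly to the deployment's chat route with any HTTP client. The following Python snippet is a minimal sketch of that round trip using the `requests` library; the environment variable names and the `v1/chat` route are assumptions for illustration, so check your deployment's details page for the actual target URL.
+
+```python
+# Minimal sketch (not an official sample): send the tool-use payload to a Cohere
+# Command serverless deployment and print the grounded reply. The environment
+# variable names and the v1/chat route are assumptions for illustration.
+import os
+import requests
+
+endpoint = os.environ["AZUREML_ENDPOINT_URL"].rstrip("/")
+api_key = os.environ["AZUREML_ENDPOINT_KEY"]
+
+payload = {
+    "message": "I'd like 4 apples and a fish please",
+    "tools": [
+        {
+            "name": "personal_shopper",
+            "description": "Returns items and requested volumes to purchase",
+            "parameter_definitions": {
+                "item": {"description": "the item requested, in all caps", "type": "str", "required": True},
+                "quantity": {"description": "how many of the items should be purchased", "type": "int", "required": True},
+            },
+        }
+    ],
+    "tool_results": [
+        {
+            "call": {"name": "personal_shopper", "parameters": {"item": "Apples", "quantity": 4}},
+            "outputs": [{"response": "Sale completed"}],
+        }
+    ],
+}
+
+response = requests.post(
+    f"{endpoint}/v1/chat",
+    headers={"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"},
+    json=payload,
+)
+response.raise_for_status()
+print(response.json()["text"])  # for example: "I've completed the sale for 4 apples. ..."
+```
+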
+##### Chat - Search queries
+
+If you're building a RAG agent, you can also use Cohere's Chat API to get search queries from Command. Specify `"search_queries_only": true` in your request.
++
+Request:
+
+```json
+ {
+ "message": "Which lego set has the greatest number of pieces?",
+ "search_queries_only": true
+ }
+```
+
+Response:
+
+```json
+ {
+ "response_id": "5e795fe5-24b7-47b4-a8bc-b58a68c7c676",
+ "text": "",
+ "finish_reason": "COMPLETE",
+ "meta": {
+ "api_version": {
+ "version": "1"
+ }
+ },
+ "is_search_required": true,
+ "search_queries": [
+ {
+ "text": "lego set with most pieces",
+ "generation_id": "a086696b-ad8e-4d15-92e2-1c57a3526e1c"
+ }
+ ]
+ }
+```
+
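+As a sketch of how this step fits into a retrieval loop, the following Python snippet posts the request above, then hands each generated query to a retriever of your own. The `retrieve_documents` helper and the environment variable names are hypothetical placeholders, not part of the product.
+
+```python
+# Minimal sketch (not an official sample): ask Command for search queries only,
+# then pass each query to your own retriever. retrieve_documents and the
+# environment variable names are hypothetical placeholders.
+import os
+import requests
+
+endpoint = os.environ["AZUREML_ENDPOINT_URL"].rstrip("/")
+api_key = os.environ["AZUREML_ENDPOINT_KEY"]
+
+def retrieve_documents(query: str) -> list[dict]:
+    """Placeholder for your own search index or vector store lookup."""
+    return [{"title": "example", "snippet": f"results for: {query}"}]
+
+resp = requests.post(
+    f"{endpoint}/v1/chat",
+    headers={"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"},
+    json={"message": "Which lego set has the greatest number of pieces?", "search_queries_only": True},
+)
+resp.raise_for_status()
+
+documents = []
+for query in resp.json().get("search_queries", []):
+    documents.extend(retrieve_documents(query["text"]))
+
+# Pass the retrieved documents back in a follow-up chat request to get a grounded answer.
+```
+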
+##### Additional inference examples
+
+| **Sample Type** | **Sample Notebook** |
+|-|-|
+| CLI using CURL and Python web requests - Command R | [command-r.ipynb](https://aka.ms/samples/cohere-command-r/webrequests)|
+| CLI using CURL and Python web requests - Command R+ | [command-r-plus.ipynb](https://aka.ms/samples/cohere-command-r-plus/webrequests)|
+| OpenAI SDK (experimental) | [openaisdk.ipynb](https://aka.ms/samples/cohere-command/openaisdk) |
+| LangChain | [langchain.ipynb](https://aka.ms/samples/cohere/langchain) |
+| Cohere SDK | [cohere-sdk.ipynb](https://aka.ms/samples/cohere-python-sdk) |
+
+## Cost and quotas
+
+### Cost and quota considerations for models deployed as a service
+
+Cohere models deployed as a service are offered by Cohere through Azure Marketplace and integrated with Azure Machine Learning studio for use. You can find Azure Marketplace pricing when deploying the models.
+
+Each time a workspace subscribes to a given model offering from Azure Marketplace, a new resource is created to track the costs associated with its consumption. The same resource is used to track costs associated with inference; however, multiple meters are available to track each scenario independently.
+
+For more information on how to track costs, see [Monitor costs for models offered through the Azure Marketplace](../ai-studio/how-to/costs-plan-manage.md#monitor-costs-for-models-offered-through-the-azure-marketplace).
+
+Quota is managed per deployment. Each deployment has a rate limit of 200,000 tokens per minute and 1,000 API requests per minute. However, we currently limit one deployment per model per project. Contact Microsoft Azure Support if the current rate limits aren't sufficient for your scenarios.
+
+## Content filtering
+
+Models deployed as a service with pay-as-you-go are protected by Azure AI content safety. With Azure AI content safety enabled, both the prompt and completion pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions. Learn more about [Azure AI Content Safety](/azure/ai-services/content-safety/overview).
+
+## Related content
+
+- [Model Catalog and Collections](concept-model-catalog.md)
+- [Deploy and score a machine learning model by using an online endpoint](how-to-deploy-online-endpoints.md)
+- [Plan and manage costs for Azure AI Studio](concept-plan-manage-cost.md)
machine-learning How To Deploy Models Cohere Embed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-models-cohere-embed.md
+
+ Title: How to deploy Cohere Embed models with Azure Machine Learning studio
+
+description: Learn how to deploy Cohere Embed models with Azure Machine Learning studio.
++++ Last updated : 04/02/2024+++++
+#This functionality is also available in Azure AI Studio: /azure/ai-studio/how-to/deploy-models-cohere.md
++
+# How to deploy Cohere Embed models with Azure Machine Learning studio
+Cohere offers two Embed models in Azure Machine Learning studio. These models are available with pay-as-you-go token-based billing with Models as a Service.
+
+* Cohere Embed v3 - English
+* Cohere Embed v3 - Multilingual
+
+You can browse the Cohere family of models in the model catalog by filtering on the Cohere collection.
+
+## Models
+
+In this article, you learn how to use Azure Machine Learning studio to deploy the Cohere models as a service with pay-as-you-go billing.
+
+### Cohere Embed v3 - English
+Cohere Embed English is the market's leading text representation model used for semantic search, retrieval-augmented generation (RAG), classification, and clustering. Embed English has top performance on the HuggingFace MTEB benchmark and performs well across various industries, such as finance and legal, and on general-purpose corpora.
+
+* Embed English has 1,024 dimensions.
+* Context window of the model is 512 tokens.
+
+### Cohere Embed v3 - Multilingual
+Cohere Embed Multilingual is the market's leading text representation model used for semantic search, retrieval-augmented generation (RAG), classification, and clustering. Embed Multilingual supports 100+ languages and can be used to search within a language (for example, search with a French query on French documents) and across languages (for example, search with an English query on Chinese documents). Embed Multilingual has state-of-the-art (SOTA) performance on multilingual benchmarks such as MIRACL.
+
+* Embed Multilingual has 1,024 dimensions.
+* Context window of the model is 512 tokens.
++
+## Deploy with pay-as-you-go
+
+Certain models in the model catalog can be deployed as a service with pay-as-you-go, providing a way to consume them as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need. This deployment option doesn't require quota from your subscription.
+
+The previously mentioned Cohere models can be deployed as a service with pay-as-you-go, and are offered by Cohere through the Microsoft Azure Marketplace. Cohere can change or update the terms of use and pricing of this model.
+
+### Prerequisites
+
+- An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.
+- An Azure Machine Learning workspace. If you don't have one, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create one.
+
+ > [!IMPORTANT]
+ > Pay-as-you-go model deployment offering is only available in workspaces created in EastUS, EastUS2 or Sweden Central regions.
+
+- Azure role-based access control (Azure RBAC) is used to grant access to operations. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the Resource Group.
+
+ For more information on permissions, see [Manage access to an Azure Machine Learning workspace](how-to-assign-roles.md).
+
+### Create a new deployment
+
+To create a deployment:
+
+1. Go to [Azure Machine Learning studio](https://ml.azure.com/home).
+1. Select the workspace in which you want to deploy your models. To use the pay-as-you-go model deployment offering, your workspace must belong to the EastUS, EastUS2 or Sweden Central region.
+1. Choose the model you want to deploy from the [model catalog](https://ml.azure.com/model/catalog).
+
+ Alternatively, you can initiate deployment by going to your workspace and selecting **Endpoints** > **Serverless endpoints** > **Create**.
+
+1. On the model's overview page in the model catalog, select **Deploy** and then **Pay-as-you-go**.
+
+ :::image type="content" source="media/how-to-deploy-models-cohere-embed/embed-english-deploy-pay-as-you-go.png" alt-text="A screenshot showing how to deploy a model with the pay-as-you-go option." lightbox="media/how-to-deploy-models-cohere-embed/embed-english-deploy-pay-as-you-go.png":::
+
+1. In the deployment wizard, select the link to **Azure Marketplace Terms** to learn more about the terms of use.
+1. You can also select the **Marketplace offer details** tab to learn about pricing for the selected model.
+1. If this is your first time deploying the model in the workspace, you have to subscribe your workspace for the particular offering of the model. This step requires that your account has the **Azure AI Developer role** permissions on the Resource Group, as listed in the prerequisites. Each workspace has its own subscription to the particular Azure Marketplace offering, which allows you to control and monitor spending. Select **Subscribe and Deploy**. Currently you can have only one deployment for each model within a workspace.
+
+ :::image type="content" source="media/how-to-deploy-models-cohere-embed/embed-english-marketplace-terms.png" alt-text="A screenshot showing the terms and conditions of a given model." lightbox="media/how-to-deploy-models-cohere-embed/embed-english-marketplace-terms.png":::
+
+1. Once you subscribe the workspace to the particular Azure Marketplace offering, subsequent deployments of the _same_ offering in the _same_ workspace don't require subscribing again. If this scenario applies to you, there's a **Continue to deploy** option to select.
+
+ :::image type="content" source="media/how-to-deploy-models-cohere-embed/embed-english-existing-deployment.png" alt-text="A screenshot showing a project that is already subscribed to the offering." lightbox="media/how-to-deploy-models-cohere-embed/embed-english-existing-deployment.png":::
+
+1. Give the deployment a name. This name becomes part of the deployment API URL. This URL must be unique in each Azure region.
+
+ :::image type="content" source="media/how-to-deploy-models-cohere-embed/embed-english-deployment-name.png" alt-text="A screenshot showing how to indicate the name of the deployment you want to create." lightbox="media/how-to-deploy-models-cohere-embed/embed-english-deployment-name.png":::
+
+1. Select **Deploy**. Wait until the deployment is finished and you're redirected to the serverless endpoints page.
+1. Select the endpoint to open its Details page.
+1. Select the **Test** tab to start interacting with the model.
+1. You can always find the endpoint's details, URL, and access keys by navigating to **Workspace** > **Endpoints** > **Serverless endpoints**.
+1. Take note of the **Target** URL and the **Secret Key**. For more information on using the APIs, see the [reference](#embed-api-reference-for-cohere-embed-models-deployed-as-a-service) section.
+
+To learn about billing for models deployed with pay-as-you-go, see [Cost and quota considerations for Cohere models deployed as a service](#cost-and-quota-considerations-for-models-deployed-as-a-service).
+
+### Consume the models as a service
+
+The previously mentioned Cohere models can be consumed using the embed API.
+
+1. In the **workspace**, select **Endpoints** > **Serverless endpoints**.
+1. Find and select the deployment you created.
+1. Copy the **Target** URL and the **Key** token values.
+1. Cohere exposes two routes for inference with the Embed v3 - English and Embed v3 - Multilingual models. `v1/embeddings` adheres to the Azure AI Generative Messages API schema, and `v1/embed` supports Cohere's native API schema.
+
+For more information on using the APIs, see the [reference](#embed-api-reference-for-cohere-embed-models-deployed-as-a-service) section.
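+
+For example, a minimal Python sketch of calling the `v1/embeddings` route with the `requests` library might look like the following. The environment variable names are assumptions; use the **Target** URL and **Key** values you copied from the deployment.
+
+```python
+# Minimal sketch (not an official sample): call the v1/embeddings route of a
+# Cohere Embed serverless deployment. Environment variable names are assumptions;
+# supply the Target URL and Key from the deployment's details page.
+import os
+import requests
+
+target_url = os.environ["AZUREML_ENDPOINT_URL"].rstrip("/")
+api_key = os.environ["AZUREML_ENDPOINT_KEY"]
+
+resp = requests.post(
+    f"{target_url}/v1/embeddings",
+    headers={"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"},
+    json={"input": ["The food was delicious", "The service was slow"]},
+)
+resp.raise_for_status()
+
+vectors = [item["embedding"] for item in resp.json()["data"]]
+print(len(vectors), "embeddings of length", len(vectors[0]))  # expect 1,024-dimensional vectors
+```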
+
+## Embed API reference for Cohere Embed models deployed as a service
+
+### v1/embeddings
+#### Request
+
+```
+ POST /v1/embeddings HTTP/1.1
+ Host: <DEPLOYMENT_URI>
+ Authorization: Bearer <TOKEN>
+ Content-type: application/json
+```
+
+#### v1/embeddings request schema
+
+Cohere Embed v3 - English and Embed v3 - Multilingual accept the following parameters for a `v1/embeddings` API call:
+
+| Property | Type | Default | Description |
+| | | | |
+|`input` |`array of strings` |Required |An array of strings for the model to embed. Maximum number of texts per call is 96. We recommend reducing the length of each text to be under 512 tokens for optimal quality. |
+
+#### v1/embeddings response schema
+
+The response payload is a dictionary with the following fields:
+
+| Key | Type | Description |
+| | | |
+| `id` | `string` | A unique identifier for the completion. |
+| `object` | `enum` | The object type, which is always `list`. |
+| `data` | `array` | A list of embedding objects, one for each input string. |
+| `model` | `string` | The model_id used for creating the embeddings. |
+| `usage` | `object` | Usage statistics for the completion request. |
+
+The `data` object is a dictionary with the following fields:
+
+| Key | Type | Description |
+| | | |
+| `index` | `integer` |The index of the embedding in the list of embeddings. |
+| `object` | `enum` | The object type, which is always `embedding`. |
+| `embedding` | `array` | The embedding vector, which is a list of floats. |
+
+The `usage` object is a dictionary with the following fields:
+
+| Key | Type | Description |
+| | | |
+| `prompt_tokens` | `integer` | Number of tokens in the prompt. |
+| `completion_tokens` | `integer` | Number of tokens generated in the completion. |
+| `total_tokens` | `integer` | Total tokens. |
++
+### v1/embeddings examples
+
+Request:
+
+```json
+ {
+ "input": ["hi"]
+ }
+```
+
+Response:
+
+```json
+ {
+ "id": "87cb11c5-2316-4c88-af3c-4b2b77ed58f3",
+ "object": "list",
+ "data": [
+ {
+ "index": 0,
+ "object": "embedding",
+ "embedding": [
+ 1.1513672,
+ 1.7060547,
+ ...
+ ]
+ }
+ ],
+ "model": "tmp",
+ "usage": {
+ "prompt_tokens": 1,
+ "completion_tokens": 0,
+ "total_tokens": 1
+ }
+ }
+```
+
+### v1/embed
+#### Request
+
+```
+ POST /v1/embed HTTP/1.1
+ Host: <DEPLOYMENT_URI>
+ Authorization: Bearer <TOKEN>
+ Content-type: application/json
+```
+
+#### v1/embed request schema
+
+Cohere Embed v3 - English and Embed v3 - Multilingual accept the following parameters for a `v1/embed` API call:
+
+|Key |Type |Default |Description |
+|||||
+|`texts` |`array of strings` |Required |An array of strings for the model to embed. Maximum number of texts per call is 96. We recommend reducing the length of each text to be under 512 tokens for optimal quality. |
+|`input_type` |`enum string` |Required |Prepends special tokens to differentiate each type from one another. You shouldn't mix different types together, except when mixing types for search and retrieval. In this case, embed your corpus with the `search_document` type and embed queries with the `search_query` type. <br/> `search_document`: In search use cases, use `search_document` when you encode documents for embeddings that you store in a vector database. <br/> `search_query`: Use `search_query` when querying your vector database to find relevant documents. <br/> `classification`: Use `classification` when using embeddings as an input to a text classifier. <br/> `clustering`: Use `clustering` to cluster the embeddings.|
+|`truncate` |`enum string` |`NONE` |`NONE`: Returns an error when the input exceeds the maximum input token length. <br/> `START`: Discards the start of the input. <br/> `END`: Discards the end of the input. |
+|`embedding_types` |`array of strings` |`float` |Specifies the types of embeddings you want to get back. Can be one or more of the following types: `float`, `int8`, `uint8`, `binary`, `ubinary`. |
+
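+To illustrate how `input_type` is used in a search scenario, here's a short Python sketch that embeds a small corpus with `search_document`, embeds a query with `search_query`, and ranks the documents by cosine similarity. The endpoint environment variable names are assumptions, and `numpy` is used only for the ranking step.
+
+```python
+# Minimal sketch (not an official sample): embed documents and a query with the
+# appropriate input_type values via the native v1/embed route, then rank the
+# documents by cosine similarity. Environment variable names are assumptions.
+import os
+import requests
+import numpy as np
+
+target_url = os.environ["AZUREML_ENDPOINT_URL"].rstrip("/")
+api_key = os.environ["AZUREML_ENDPOINT_KEY"]
+headers = {"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"}
+
+def embed(texts, input_type):
+    resp = requests.post(
+        f"{target_url}/v1/embed",
+        headers=headers,
+        json={"texts": texts, "input_type": input_type, "truncate": "END"},
+    )
+    resp.raise_for_status()
+    return np.array(resp.json()["embeddings"])  # embeddings_floats response
+
+docs = ["Emperor penguins only live in Antarctica.", "The Eiffel Tower is in Paris."]
+doc_vectors = embed(docs, "search_document")                                   # corpus side
+query_vector = embed(["Where do emperor penguins live?"], "search_query")[0]   # query side
+
+# Cosine similarity between the query and each document.
+scores = doc_vectors @ query_vector / (
+    np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(query_vector)
+)
+print(docs[int(np.argmax(scores))])
+```
+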
+#### v1/embed response schema
+
+Cohere Embed v3 - English and Embed v3 - Multilingual include the following fields in the response:
+
+|Key |Type |Description |
+||||
+|`response_type` |`enum` |The response type. Returns `embeddings_floats` when `embedding_types` isn't specified, or returns `embeddings_by_type` when `embedding_types` is specified. |
+|`id` |`string` |An identifier for the response. |
+|`embeddings` |`array` or `array of objects` |An array of embeddings, where each embedding is an array of floats with 1,024 elements. The length of the embeddings array is the same as the length of the original texts array.|
+|`texts` |`array of strings` |The text entries for which embeddings were returned. |
+|`meta` |`object` |API usage data, including current version and billable tokens. |
+
+For more information, see [https://docs.cohere.com/reference/embed](https://docs.cohere.com/reference/embed).
+
+### v1/embed examples
+
+#### Embeddings_floats response
+
+Request:
+
+```json
+ {
+ "input_type": "clustering",
+ "truncate": "START",
+ "texts":["hi", "hello"]
+ }
+```
+
+Response:
+
+```json
+ {
+ "id": "da7a104c-e504-4349-bcd4-4d69dfa02077",
+ "texts": [
+ "hi",
+ "hello"
+ ],
+ "embeddings": [
+ [
+ ...
+ ],
+ [
+ ...
+ ]
+ ],
+ "meta": {
+ "api_version": {
+ "version": "1"
+ },
+ "billed_units": {
+ "input_tokens": 2
+ }
+ },
+ "response_type": "embeddings_floats"
+ }
+```
+
+#### Embeddings_by_type response
+
+Request:
+
+```json
+ {
+ "input_type": "clustering",
+ "embedding_types": ["int8", "binary"],
+ "truncate": "START",
+ "texts":["hi", "hello"]
+ }
+```
+
+Response:
+
+```json
+ {
+ "id": "b604881a-a5e1-4283-8c0d-acbd715bf144",
+ "texts": [
+ "hi",
+ "hello"
+ ],
+ "embeddings": {
+ "binary": [
+ [
+ ...
+ ],
+ [
+ ...
+ ]
+ ],
+ "int8": [
+ [
+ ...
+ ],
+ [
+ ...
+ ]
+ ]
+ },
+ "meta": {
+ "api_version": {
+ "version": "1"
+ },
+ "billed_units": {
+ "input_tokens": 2
+ }
+ },
+ "response_type": "embeddings_by_type"
+ }
+```
+
+#### Additional inference examples
+
+| **Sample Type** | **Sample Notebook** |
+|-|-|
+| CLI using CURL and Python web requests | [cohere-embed.ipynb](https://aka.ms/samples/embed-v3/webrequests)|
+| OpenAI SDK (experimental) | [openaisdk.ipynb](https://aka.ms/samples/cohere-embed/openaisdk) |
+| LangChain | [langchain.ipynb](https://aka.ms/samples/cohere-embed/langchain) |
+| Cohere SDK | [cohere-sdk.ipynb](https://aka.ms/samples/cohere-embed/cohere-python-sdk) |
+
+## Cost and quotas
+
+### Cost and quota considerations for models deployed as a service
+
+Cohere models deployed as a service are offered by Cohere through Azure Marketplace and integrated with Azure Machine Learning studio for use. You can find Azure Marketplace pricing when deploying the models.
+
+Each time a workspace subscribes to a given model offering from Azure Marketplace, a new resource is created to track the costs associated with its consumption. The same resource is used to track costs associated with inference; however, multiple meters are available to track each scenario independently.
+
+For more information on how to track costs, see [Monitor costs for models offered through the Azure Marketplace](../ai-studio/how-to/costs-plan-manage.md#monitor-costs-for-models-offered-through-the-azure-marketplace).
+
+Quota is managed per deployment. Each deployment has a rate limit of 200,000 tokens per minute and 1,000 API requests per minute. However, we currently limit one deployment per model per project. Contact Microsoft Azure Support if the current rate limits aren't sufficient for your scenarios.
+
+## Content filtering
+
+Models deployed as a service with pay-as-you-go are protected by Azure AI content safety. With Azure AI content safety enabled, both the prompt and completion pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions. Learn more about [Azure AI Content Safety](/azure/ai-services/content-safety/overview).
+
+## Related content
+
+- [Model Catalog and Collections](concept-model-catalog.md)
+- [Deploy and score a machine learning model by using an online endpoint](how-to-deploy-online-endpoints.md)
+- [Plan and manage costs for Azure AI Studio](concept-plan-manage-cost.md)
machine-learning How To Migrate From Estimators To Scriptrunconfig https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-migrate-from-estimators-to-scriptrunconfig.md
- Title: Migrate from Estimators to ScriptRunConfig-
-description: Migration guide for migrating from Estimators to ScriptRunConfig for configuring training jobs.
------ Previously updated : 09/14/2022----
-# Migrating from Estimators to ScriptRunConfig
--
-Up until now, there have been multiple methods for configuring a training job in Azure Machine Learning via the SDK, including Estimators, ScriptRunConfig, and the lower-level RunConfiguration. To address this ambiguity and inconsistency, we are simplifying the job configuration process in Azure Machine Learning. You should now use ScriptRunConfig as the recommended option for configuring training jobs.
-
-Estimators are deprecated with the 1.19 release of the Python SDK. You should also generally avoid explicitly instantiating a RunConfiguration object yourself, and instead configure your job using the ScriptRunConfig class.
-
-This article covers common considerations when migrating from Estimators to ScriptRunConfig.
-
-> [!IMPORTANT]
-> To migrate to ScriptRunConfig from Estimators, make sure you're using version 1.15.0 or later of the Python SDK.
-
-## ScriptRunConfig documentation and samples
-Azure Machine Learning documentation and samples have been updated to use [ScriptRunConfig](/python/api/azureml-core/azureml.core.script_run_config.scriptrunconfig) for job configuration and submission.
-
-For information on using ScriptRunConfig, refer to the following documentation:
-* [Configure and submit training jobs](how-to-set-up-training-targets.md)
-* [Configuring PyTorch training jobs](how-to-train-pytorch.md)
-* [Configuring TensorFlow training jobs](how-to-train-tensorflow.md)
-* [Configuring scikit-learn training jobs](how-to-train-scikit-learn.md)
-
-In addition, refer to the following samples & tutorials:
-* [Azure/MachineLearningNotebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/ml-frameworks)
-* [Azure/azureml-examples](https://github.com/Azure/azureml-examples)
-
-## Defining the training environment
-While the various framework estimators have preconfigured environments that are backed by Docker images, the Dockerfiles for these images are private. Therefore you do not have a lot of transparency into what these environments contain. In addition, the estimators take in environment-related configurations as individual parameters (such as `pip_packages`, `custom_docker_image`) on their respective constructors.
-
-When using ScriptRunConfig, all environment-related configurations are encapsulated in the `Environment` object that gets passed into the `environment` parameter of the ScriptRunConfig constructor. To configure a training job, provide an environment that has all the dependencies required for your training script. If no environment is provided, Azure Machine Learning will use one of the Azure Machine Learning base images, specifically the one defined by `azureml.core.environment.DEFAULT_CPU_IMAGE`, as the default environment. There are a couple of ways to provide an environment:
-
-* [Use a curated environment](../how-to-use-environments.md#use-a-curated-environment) - curated environments are predefined environments available in your workspace by default. There is a corresponding curated environment for each of the preconfigured framework/version Docker images that backed each framework estimator.
-* [Define your own custom environment](how-to-use-environments.md)
-
-Here is an example of using the curated environment for training:
-
-```python
-from azureml.core import Workspace, ScriptRunConfig, Environment
-
-curated_env_name = '<add Pytorch curated environment name here>'
-pytorch_env = Environment.get(workspace=ws, name=curated_env_name)
-
-compute_target = ws.compute_targets['my-cluster']
-src = ScriptRunConfig(source_directory='.',
- script='train.py',
- compute_target=compute_target,
- environment=pytorch_env)
-```
-
-> [!TIP]
-> For a list of curated environments, see [curated environments](../resource-curated-environments.md).
-
-If you want to specify **environment variables** that will get set on the process where the training script is executed, use the Environment object:
-```python
-myenv.environment_variables = {"MESSAGE":"Hello from Azure Machine Learning"}
-```
-
-For information on configuring and managing Azure Machine Learning environments, see:
-* [How to use environments](how-to-use-environments.md)
-* [Curated environments](../resource-curated-environments.md)
-* [Train with a custom Docker image](how-to-train-with-custom-image.md)
-
-## Using data for training
-### Datasets
-If you are using an Azure Machine Learning dataset for training, pass the dataset as an argument to your script using the `arguments` parameter. By doing so, you will get the data path (mounting point or download path) in your training script via arguments.
-
-The following example configures a training job where the FileDataset, `mnist_ds`, will get mounted on the remote compute.
-```python
-src = ScriptRunConfig(source_directory='.',
- script='train.py',
- arguments=['--data-folder', mnist_ds.as_mount()], # or mnist_ds.as_download() to download
- compute_target=compute_target,
- environment=pytorch_env)
-```
-
-### DataReference (old)
-While we recommend using Azure Machine Learning Datasets over the old DataReference way, if you are still using DataReferences for any reason, you must configure your job as follows:
-```python
-# if you want to pass a DataReference object, such as the below:
-datastore = ws.get_default_datastore()
-data_ref = datastore.path('./foo').as_mount()
-
-src = ScriptRunConfig(source_directory='.',
- script='train.py',
- arguments=['--data-folder', str(data_ref)], # cast the DataReference object to str
- compute_target=compute_target,
- environment=pytorch_env)
-src.run_config.data_references = {data_ref.data_reference_name: data_ref.to_config()} # set a dict of the DataReference(s) you want to the `data_references` attribute of the ScriptRunConfig's underlying RunConfiguration object.
-```
-
-For more information on using data for training, see:
-* [Train with datasets in Azure Machine Learning](how-to-train-with-datasets.md)
-
-## Distributed training
-If you need to configure a distributed job for training, do so by specifying the `distributed_job_config` parameter in the ScriptRunConfig constructor. Pass in an [MpiConfiguration](/python/api/azureml-core/azureml.core.runconfig.mpiconfiguration), [PyTorchConfiguration](/python/api/azureml-core/azureml.core.runconfig.pytorchconfiguration), or [TensorflowConfiguration](/python/api/azureml-core/azureml.core.runconfig.tensorflowconfiguration) for distributed jobs of the respective types.
-
-The following example configures a PyTorch training job to use distributed training with MPI/Horovod:
-```python
-from azureml.core.runconfig import MpiConfiguration
-
-src = ScriptRunConfig(source_directory='.',
- script='train.py',
- compute_target=compute_target,
- environment=pytorch_env,
- distributed_job_config=MpiConfiguration(node_count=2, process_count_per_node=2))
-```
-
-For more information, see:
-* [Distributed training with PyTorch](how-to-train-pytorch.md#distributed-training)
-* [Distributed training with TensorFlow](how-to-train-tensorflow.md#distributed-training)
-
-## Miscellaneous
-If you need to access the underlying RunConfiguration object for a ScriptRunConfig for any reason, you can do so as follows:
-```python
-src.run_config
-```
-
-## Next steps
-
-* [Configure and submit training jobs](how-to-set-up-training-targets.md)
machine-learning Tutorial 1St Experiment Bring Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-1st-experiment-bring-data.md
- Title: "Tutorial: Upload data and train a model (SDK v1)"-
-description: How to upload and use your own data in a remote training job, with SDK v1. This is part 3 of a three-part getting-started series.
------- Previously updated : 07/29/2022---
-# Tutorial: Upload data and train a model (SDK v1, part 3 of 3)
---
-This tutorial shows you how to upload and use your own data to train machine learning models in Azure Machine Learning. This tutorial is *part 3 of a three-part tutorial series*.
-
-In [Part 2: Train a model](tutorial-1st-experiment-sdk-train.md), you trained a model in the cloud, using sample data from `PyTorch`. You also downloaded that data through the `torchvision.datasets.CIFAR10` method in the PyTorch API. In this tutorial, you'll use the downloaded data to learn the workflow for working with your own data in Azure Machine Learning.
-
-In this tutorial, you:
-
-> [!div class="checklist"]
-> * Upload data to Azure.
-> * Create a control script.
-> * Understand the new Azure Machine Learning concepts (passing parameters, datasets, datastores).
-> * Submit and run your training script.
-> * View your code output in the cloud.
-
-## Prerequisites
-
-You'll need the data that was downloaded in the previous tutorial. Make sure you have completed these steps:
-
-1. [Create the training script](tutorial-1st-experiment-sdk-train.md#create-training-scripts).
-1. [Test locally](tutorial-1st-experiment-sdk-train.md#test-locally).
-
-## Adjust the training script
-
-By now you have your training script (get-started/src/train.py) running in Azure Machine Learning, and you can monitor the model performance. Let's parameterize the training script by introducing arguments. Using arguments will allow you to easily compare different hyperparameters.
-
-Our training script is currently set to download the CIFAR10 dataset on each run. The following Python code has been adjusted to read the data from a directory.
-
->[!NOTE]
-> The use of `argparse` parameterizes the script.
-
-1. Open *train.py* and replace it with this code:
-
- ```python
- import os
- import argparse
- import torch
- import torch.optim as optim
- import torchvision
- import torchvision.transforms as transforms
- from model import Net
- from azureml.core import Run
- run = Run.get_context()
- if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument(
- '--data_path',
- type=str,
- help='Path to the training data'
- )
- parser.add_argument(
- '--learning_rate',
- type=float,
- default=0.001,
- help='Learning rate for SGD'
- )
- parser.add_argument(
- '--momentum',
- type=float,
- default=0.9,
- help='Momentum for SGD'
- )
- args = parser.parse_args()
- print("===== DATA =====")
- print("DATA PATH: " + args.data_path)
- print("LIST FILES IN DATA PATH...")
- print(os.listdir(args.data_path))
- print("================")
- # prepare DataLoader for CIFAR10 data
- transform = transforms.Compose([
- transforms.ToTensor(),
- transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
- ])
- trainset = torchvision.datasets.CIFAR10(
- root=args.data_path,
- train=True,
- download=False,
- transform=transform,
- )
- trainloader = torch.utils.data.DataLoader(
- trainset,
- batch_size=4,
- shuffle=True,
- num_workers=2
- )
- # define convolutional network
- net = Net()
- # set up pytorch loss / optimizer
- criterion = torch.nn.CrossEntropyLoss()
- optimizer = optim.SGD(
- net.parameters(),
- lr=args.learning_rate,
- momentum=args.momentum,
- )
- # train the network
- for epoch in range(2):
- running_loss = 0.0
- for i, data in enumerate(trainloader, 0):
- # unpack the data
- inputs, labels = data
- # zero the parameter gradients
- optimizer.zero_grad()
- # forward + backward + optimize
- outputs = net(inputs)
- loss = criterion(outputs, labels)
- loss.backward()
- optimizer.step()
- # print statistics
- running_loss += loss.item()
- if i % 2000 == 1999:
- loss = running_loss / 2000
- run.log('loss', loss) # log loss metric to AML
- print(f'epoch={epoch + 1}, batch={i + 1:5}: loss {loss:.2f}')
- running_loss = 0.0
- print('Finished Training')
- ```
-
-1. **Save** the file. Close the tab if you wish.
-
-### Understanding the code changes
-
-The code in `train.py` has used the `argparse` library to set up `data_path`, `learning_rate`, and `momentum`.
-
-```python
-# .... other code
-parser = argparse.ArgumentParser()
-parser.add_argument('--data_path', type=str, help='Path to the training data')
-parser.add_argument('--learning_rate', type=float, default=0.001, help='Learning rate for SGD')
-parser.add_argument('--momentum', type=float, default=0.9, help='Momentum for SGD')
-args = parser.parse_args()
-# ... other code
-```
-
-Also, the `train.py` script was adapted to update the optimizer to use the user-defined parameters:
-
-```python
-optimizer = optim.SGD(
- net.parameters(),
- lr=args.learning_rate, # get learning rate from command-line argument
- momentum=args.momentum, # get momentum from command-line argument
-)
-```
--
-## Upload the data to Azure
-
-To run this script in Azure Machine Learning, you need to make your training data available in Azure. Your Azure Machine Learning workspace comes equipped with a _default_ datastore. This is an Azure Blob Storage account where you can store your training data.
-
->[!NOTE]
-> Azure Machine Learning allows you to connect other cloud-based datastores that store your data. For more details, see the [datastores documentation](./concept-data.md).
-
-1. Create a new Python control script in the **get-started** folder (make sure it is in **get-started**, *not* in the **/src** folder). Name the script *upload-data.py* and copy this code into the file:
-
- ```python
- # upload-data.py
- from azureml.core import Workspace
- from azureml.core import Dataset
- from azureml.data.datapath import DataPath
-
- ws = Workspace.from_config()
- datastore = ws.get_default_datastore()
- Dataset.File.upload_directory(src_dir='data',
- target=DataPath(datastore, "datasets/cifar10")
- )
- ```
-
- The `target` value specifies the path on the datastore where the CIFAR10 data will be uploaded.
-
- >[!TIP]
- > While you're using Azure Machine Learning to upload the data, you can use [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/) to upload ad hoc files. If you need an ETL tool, you can use [Azure Data Factory](../../data-factory/introduction.md) to ingest your data into Azure.
-
-2. Select **Save and run script in terminal** to run the *upload-data.py* script.
-
- You should see the following standard output:
-
- ```txt
- Uploading ./data\cifar-10-batches-py\data_batch_2
- Uploaded ./data\cifar-10-batches-py\data_batch_2, 4 files out of an estimated total of 9
- .
- .
- Uploading ./data\cifar-10-batches-py\data_batch_5
- Uploaded ./data\cifar-10-batches-py\data_batch_5, 9 files out of an estimated total of 9
- Uploaded 9 files
- ```
-
-## Create a control script
-
-As you've done previously, create a new Python control script called *run-pytorch-data.py* in the **get-started** folder:
-
-```python
-# run-pytorch-data.py
-from azureml.core import Workspace
-from azureml.core import Experiment
-from azureml.core import Environment
-from azureml.core import ScriptRunConfig
-from azureml.core import Dataset
-
-if __name__ == "__main__":
- ws = Workspace.from_config()
- datastore = ws.get_default_datastore()
- dataset = Dataset.File.from_files(path=(datastore, 'datasets/cifar10'))
-
- experiment = Experiment(workspace=ws, name='day1-experiment-data')
-
- config = ScriptRunConfig(
- source_directory='./src',
- script='train.py',
- compute_target='cpu-cluster',
- arguments=[
- '--data_path', dataset.as_named_input('input').as_mount(),
- '--learning_rate', 0.003,
- '--momentum', 0.92],
- )
-
- # set up pytorch environment
- env = Environment.from_conda_specification(
- name='pytorch-env',
- file_path='pytorch-env.yml'
- )
- config.run_config.environment = env
-
- run = experiment.submit(config)
- aml_url = run.get_portal_url()
- print("Submitted to compute cluster. Click link below")
- print("")
- print(aml_url)
-```
-
-> [!TIP]
-> If you used a different name when you created your compute cluster, make sure to adjust the name in the code `compute_target='cpu-cluster'` as well.
-
-### Understand the code changes
-
-The control script is similar to the one from [part 2 of this series](tutorial-1st-experiment-sdk-train.md), with the following new lines:
-
- :::column span="":::
- `dataset = Dataset.File.from_files( ... )`
- :::column-end:::
- :::column span="2":::
- A [dataset](/python/api/azureml-core/azureml.core.dataset.dataset) is used to reference the data you uploaded to Azure Blob Storage. Datasets are an abstraction layer on top of your data that are designed to improve reliability and trustworthiness.
- :::column-end:::
- :::column span="":::
- `config = ScriptRunConfig(...)`
- :::column-end:::
- :::column span="2":::
- [ScriptRunConfig](/python/api/azureml-core/azureml.core.scriptrunconfig) is modified to include a list of arguments that will be passed into `train.py`. The `dataset.as_named_input('input').as_mount()` argument means the specified directory will be _mounted_ to the compute target.
- :::column-end:::
-
-## Submit the run to Azure Machine Learning
-
-Select **Save and run script in terminal** to run the *run-pytorch-data.py* script. This run will train the model on the compute cluster using the data you uploaded.
-
-This code will print a URL to the experiment in the Azure Machine Learning studio. If you go to that link, you'll be able to see your code running.
-
-> [!NOTE]
-> You may see some warnings starting with *Failure while loading azureml_run_type_providers...*. You can ignore these warnings. Use the link at the bottom of these warnings to view your output.
--
-### Inspect the log file
-
-In the studio, go to the experiment job (by selecting the previous URL output) followed by **Outputs + logs**. Select the `std_log.txt` file. Scroll down through the log file until you see the following output:
-
-```txt
-Processing 'input'.
-Processing dataset FileDataset
-{
- "source": [
- "('workspaceblobstore', 'datasets/cifar10')"
- ],
- "definition": [
- "GetDatastoreFiles"
- ],
- "registration": {
- "id": "XXXXX",
- "name": null,
- "version": null,
- "workspace": "Workspace.create(name='XXXX', subscription_id='XXXX', resource_group='X')"
- }
-}
-Mounting input to /tmp/tmp9kituvp3.
-Mounted input to /tmp/tmp9kituvp3 as folder.
-Exit __enter__ of DatasetContextManager
-Entering Job History Context Manager.
-Current directory: /mnt/batch/tasks/shared/LS_root/jobs/dsvm-aml/azureml/tutorial-session-3_1600171983_763c5381/mounts/workspaceblobstore/azureml/tutorial-session-3_1600171983_763c5381
-Preparing to call script [ train.py ] with arguments: ['--data_path', '$input', '--learning_rate', '0.003', '--momentum', '0.92']
-After variable expansion, calling script [ train.py ] with arguments: ['--data_path', '/tmp/tmp9kituvp3', '--learning_rate', '0.003', '--momentum', '0.92']
-
-Script type = None
-===== DATA =====
-DATA PATH: /tmp/tmp9kituvp3
-LIST FILES IN DATA PATH...
-['cifar-10-batches-py', 'cifar-10-python.tar.gz']
-```
-
-Notice:
-
-- Azure Machine Learning has mounted Blob Storage to the compute cluster automatically for you.
-- The ``dataset.as_named_input('input').as_mount()`` used in the control script resolves to the mount point.
-
-## Clean up resources
-
-If you plan to continue now to another tutorial, or to start your own training jobs, skip to [Next steps](#next-steps).
-
-### Stop compute instance
-
-If you're not going to use it now, stop the compute instance:
-
-1. In the studio, on the left, select **Compute**.
-1. In the top tabs, select **Compute instances**
-1. Select the compute instance in the list.
-1. On the top toolbar, select **Stop**.
--
-### Delete all resources
--
-You can also keep the resource group but delete a single workspace. Display the workspace properties and select **Delete**.
-
-## Next steps
-
-In this tutorial, we saw how to upload data to Azure by using `Datastore`. The datastore served as cloud storage for your workspace, giving you a persistent and flexible place to keep your data.
-
-You saw how to modify your training script to accept a data path via the command line. By using `Dataset`, you were able to mount a directory to the remote job.
-
-Now that you have a model, learn:
-
-> [!div class="nextstepaction"]
-> [How to deploy MLflow models](how-to-deploy-mlflow-models.md).
machine-learning Tutorial 1St Experiment Hello World https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-1st-experiment-hello-world.md
Title: 'Tutorial: Get started with a Python script (v1)'
-description: Get started with your first Python script in Azure Machine Learning, with SDK v1. This is part 1 of a three-part getting-started series.
+description: Get started with your first Python script in Azure Machine Learning, with SDK v1. This is part 1 of a two-part getting-started series.
Previously updated : 07/29/2022 Last updated : 04/03/2024
[!INCLUDE [sdk v1](../includes/machine-learning-sdk-v1.md)]
-In this tutorial, you run your first Python script in the cloud with Azure Machine Learning. This tutorial is *part 1 of a three-part tutorial series*.
+In this tutorial, you run your first Python script in the cloud with Azure Machine Learning. This tutorial is *part 1 of a two-part tutorial series*.
-This tutorial avoids the complexity of training a machine learning model. You will run a "Hello World" Python script in the cloud. You will learn how a control script is used to configure and create a run in Azure Machine Learning.
+This tutorial avoids the complexity of training a machine learning model. You'll run a "Hello World" Python script in the cloud. You'll learn how a control script is used to configure and create a run in Azure Machine Learning.
In this tutorial, you will:
In this tutorial, you will:
## Create and run a Python script
-This tutorial will use the compute instance as your development computer. First create a few folders and the script:
+This tutorial uses the compute instance as your development computer. First create a few folders and the script:
1. Sign in to the [Azure Machine Learning studio](https://ml.azure.com) and select your workspace if prompted. 1. On the left, select **Notebooks** 1. In the **Files** toolbar, select **+**, then select **Create new folder**.
- :::image type="content" source="../media/tutorial-1st-experiment-hello-world/create-folder.png" alt-text="Screenshot shows create a new folder tool in toolbar.":::
+ :::image type="content" source="./media/tutorial-1st-experiment-hello-world/create-folder.png" alt-text="Screenshot shows create a new folder tool in toolbar.":::
1. Name the folder **get-started**. 1. To the right of the folder name, use the **...** to create another folder under **get-started**.
- :::image type="content" source="../media/tutorial-1st-experiment-hello-world/create-sub-folder.png" alt-text="Screenshot shows create a subfolder menu.":::
-1. Name the new folder **src**. Use the **Edit location** link if the file location is not correct.
+ :::image type="content" source="./media/tutorial-1st-experiment-hello-world/create-sub-folder.png" alt-text="Screenshot shows create a subfolder menu.":::
+1. Name the new folder **src**. Use the **Edit location** link if the file location isn't correct.
1. To the right of the **src** folder, use the **...** to create a new file in the **src** folder.
-1. Name your file *hello.py*. Switch the **File type** to *Python (*.py)*.
+1. Name your file *hello.py*. Switch the **File type** to *Python (*.py)*.
Copy this code into your file:
print("Hello world!")
Your project folder structure will now look like:

### Test your script
-You can run your code locally, which in this case means on the compute instance. Running code locally has the benefit of interactive debugging of code.
+You can run your code locally, which in this case means on the compute instance. Running code locally has the benefit of interactive debugging of code.
If you have previously stopped your compute instance, start it now with the **Start compute** tool to the right of the compute dropdown. Wait about a minute for state to change to *Running*. Select **Save and run script in terminal** to run the script.
-You'll see the output of the script in the terminal window that opens. Close the tab and select **Terminate** to close the session.
+You see the output of the script in the terminal window that opens. Close the tab and select **Terminate** to close the session.
## Create a control script
-A *control script* allows you to run your `hello.py` script on different compute resources. You use the control script to control how and where your machine learning code is run.
+A *control script* allows you to run your `hello.py` script on different compute resources. You use the control script to control how and where your machine learning code is run.
-Select the **...** at the end of **get-started** folder to create a new file. Create a Python file called *run-hello.py* and copy/paste the following code into that file:
+Select the **...** at the end of **get-started** folder to create a new file. Create a Python file called *run-hello.py* and copy/paste the following code into that file:
```python # get-started/run-hello.py
Here's a description of how the control script works:
`config = ScriptRunConfig( ... )` :::column-end::: :::column span="2":::
- [ScriptRunConfig](/python/api/azureml-core/azureml.core.scriptrunconfig) wraps your `hello.py` code and passes it to your workspace. As the name suggests, you can use this class to _configure_ how you want your _script_ to _run_ in Azure Machine Learning. It also specifies what compute target the script will run on. In this code, the target is the compute cluster that you created in the [setup tutorial](../quickstart-create-resources.md).
+ [ScriptRunConfig](/python/api/azureml-core/azureml.core.scriptrunconfig) wraps your `hello.py` code and passes it to your workspace. As the name suggests, you can use this class to _configure_ how you want your _script_ to _run_ in Azure Machine Learning. It also specifies what compute target the script runs on. In this code, the target is the compute cluster that you created in the [setup tutorial](../quickstart-create-resources.md).
:::column-end::: :::row-end::: :::row:::
Here's a description of how the control script works:
`run = experiment.submit(config)` :::column-end::: :::column span="2":::
- Submits your script. This submission is called a [run](/python/api/azureml-core/azureml.core.run%28class%29). In v2, it has been renamed to a job. A run/job encapsulates a single execution of your code. Use a job to monitor the script progress, capture the output, analyze the results, visualize metrics, and more.
+ Submits your script. This submission is called a [run](/python/api/azureml-core/azureml.core.run%28class%29). In v2, it has been renamed to a job. A run/job encapsulates a single execution of your code. Use a job to monitor the script progress, capture the output, analyze the results, visualize metrics, and more.
:::column-end::: :::row-end::: :::row:::
Here's a description of how the control script works:
`aml_url = run.get_portal_url()` :::column-end::: :::column span="2":::
- The `run` object provides a handle on the execution of your code. Monitor its progress from the Azure Machine Learning studio with the URL that's printed from the Python script.
+ The `run` object provides a handle on the execution of your code. Monitor its progress from the Azure Machine Learning studio with the URL that prints from the Python script.
:::column-end::: :::row-end:::
Here's a description of how the control script works:
1. Select **Save and run script in terminal** to run your control script, which in turn runs `hello.py` on the compute cluster that you created in the [setup tutorial](../quickstart-create-resources.md).
-1. In the terminal, you may be asked to sign in to authenticate. Copy the code and follow the link to complete this step.
+1. In the terminal, you may be asked to sign in to authenticate. Copy the code and follow the link to complete this step.
-1. Once you're authenticated, you'll see a link in the terminal. Select the link to view the job.
-
- > [!NOTE]
- > You may see some warnings starting with *Failure while loading azureml_run_type_providers...*. You can ignore these warnings. Use the link at the bottom of these warnings to view your output.
-
-## View the output
-
-1. In the page that opens, you'll see the job status.
-1. When the status of the job is **Completed**, select **Output + logs** at the top of the page.
-1. Select **std_log.txt** to view the output of your job.
+1. Once you're authenticated, you see a link in the terminal. Select the link to view the job.
## Monitor your code in the cloud in the studio
-The output from your script will contain a link to the studio that looks something like this:
+The output from your script contains a link to the studio that looks something like this:
`https://ml.azure.com/experiments/hello-world/runs/<run-id>?wsid=/subscriptions/<subscription-id>/resourcegroups/<resource-group>/workspaces/<workspace-name>`.
-Follow the link. At first, you'll see a status of **Queued** or **Preparing**. The very first run will take 5-10 minutes to complete. This is because the following occurs:
+Follow the link. At first, you see a status of **Queued** or **Preparing**. The first run takes 5-10 minutes to complete. This is because the following occurs:
* A docker image is built in the cloud * The compute cluster is resized from 0 to 1 node * The docker image is downloaded to the compute.
-Subsequent jobs are much quicker (~15 seconds) as the docker image is cached on the compute. You can test this by resubmitting the code below after the first job has completed.
-
-Wait about 10 minutes. You'll see a message that the job has completed. Then use **Refresh** to see the status change to *Completed*. Once the job completes, go to the **Outputs + logs** tab. There you can see a `std_log.txt` file that looks like this:
-
-```txt
- 1: [2020-08-04T22:15:44.407305] Entering context manager injector.
- 2: [context_manager_injector.py] Command line Options: Namespace(inject=['ProjectPythonPath:context_managers.ProjectPythonPath', 'RunHistory:context_managers.RunHistory', 'TrackUserError:context_managers.TrackUserError', 'UserExceptions:context_managers.UserExceptions'], invocation=['hello.py'])
- 3: Starting the daemon thread to refresh tokens in background for process with pid = 31263
- 4: Entering Job History Context Manager.
- 5: Preparing to call script [ hello.py ] with arguments: []
- 6: After variable expansion, calling script [ hello.py ] with arguments: []
- 7:
- 8: Hello world!
- 9: Starting the daemon thread to refresh tokens in background for process with pid = 31263
-10:
-11:
-12: The experiment completed successfully. Finalizing job...
-13: Logging experiment finalizing status in history service.
-14: [2020-08-04T22:15:46.541334] TimeoutHandler __init__
-15: [2020-08-04T22:15:46.541396] TimeoutHandler __enter__
-16: Cleaning up all outstanding Job operations, waiting 300.0 seconds
-17: 1 items cleaning up...
-18: Cleanup took 0.1812913417816162 seconds
-19: [2020-08-04T22:15:47.040203] TimeoutHandler __exit__
-```
-
-On line 8, you see the "Hello world!" output.
+Subsequent jobs are quicker (~15 seconds) as the docker image is cached on the compute. You can test this by resubmitting the code below after the first job has completed.
-The `70_driver_log.txt` file contains the standard output from a job. This file can be useful when you're debugging remote jobs in the cloud.
+Wait about 10 minutes. You see a message that the job has completed. Then use **Refresh** to see the status change to *Completed*. Once the job completes, go to the **Outputs + logs** tab. There you can see a `std_log.txt` file in the `user_logs` folder. The output of your script is in this file.
+The `azureml-logs` and `system-logs` folders contain files that can be useful when you're debugging remote jobs in the cloud.
-## Next steps
+## Next step
In this tutorial, you took a simple "Hello world!" script and ran it on Azure. You saw how to connect to your Azure Machine Learning workspace, create an experiment, and submit your `hello.py` code to the cloud.
In the next tutorial, you build on these learnings by running something more int
> [Tutorial: Train a model](tutorial-1st-experiment-sdk-train.md) >[!NOTE]
-> If you want to finish the tutorial series here and not progress to the next step, remember to [clean up your resources](tutorial-1st-experiment-bring-data.md#clean-up-resources).
+> If you want to finish the tutorial series here and not progress to the next step, remember to [clean up your resources](tutorial-1st-experiment-sdk-train.md#clean-up-resources).
machine-learning Tutorial 1St Experiment Sdk Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-1st-experiment-sdk-train.md
Title: "Tutorial: Train a first Python machine learning model (SDK v1)"
-description: How to train a machine learning model in Azure Machine Learning, with SDK v1. This is part 2 of a three-part getting-started series.
+description: How to train a machine learning model in Azure Machine Learning, with SDK v1. This is part 2 of a two-part getting-started series.
Previously updated : 07/29/2022 Last updated : 04/03/2024
[!INCLUDE [sdk v1](../includes/machine-learning-sdk-v1.md)]
-This tutorial shows you how to train a machine learning model in Azure Machine Learning. This tutorial is *part 2 of a three-part tutorial series*.
+This tutorial shows you how to train a machine learning model in Azure Machine Learning. This tutorial is *part 2 of a two-part tutorial series*.
- In [Part 1: Run "Hello world!"](tutorial-1st-experiment-hello-world.md) of the series, you learned how to use a control script to run a job in the cloud.
+ In [Part 1: Run "Hello world!"](tutorial-1st-experiment-hello-world.md) of the series, you learned how to use a control script to run a job in the cloud.
-In this tutorial, you take the next step by submitting a script that trains a machine learning model. This example will help you understand how Azure Machine Learning eases consistent behavior between local debugging and remote runs.
+In this tutorial, you take the next step by submitting a script that trains a machine learning model. This example helps you understand how Azure Machine Learning eases consistent behavior between local debugging and remote runs.
In this tutorial, you:
In this tutorial, you:
## Create training scripts
-First you define the neural network architecture in a *model.py* file. All your training code will go into the `src` subdirectory, including *model.py*.
+First you define the neural network architecture in a *model.py* file. All your training code goes into the `src` subdirectory, including *model.py*.
-The training code is taken from [this introductory example](https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html) from PyTorch. Note that the Azure Machine Learning concepts apply to any machine learning code, not just PyTorch.
+The training code is taken from [this introductory example](https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html) from PyTorch. The Azure Machine Learning concepts apply to any machine learning code, not just PyTorch.
1. Create a *model.py* file in the **src** subfolder. Copy this code into the file:
The training code is taken from [this introductory example](https://pytorch.org/
x = self.fc3(x) return x ```
-1. On the toolbar, select **Save** to save the file. Close the tab if you wish.
+1. On the toolbar, select **Save** to save the file. Close the tab if you wish.
1. Next, define the training script, also in the **src** subfolder. This script downloads the CIFAR10 dataset by using PyTorch `torchvision.datasets` APIs, sets up the network defined in *model.py*, and trains it for two epochs by using standard SGD and cross-entropy loss. (A condensed sketch of such a script follows the folder-structure step below.)
The training code is taken from [this introductory example](https://pytorch.org/
1. You now have the following folder structure:
- :::image type="content" source="../media/tutorial-1st-experiment-sdk-train/directory-structure.png" alt-text="Directory structure shows train.py in src subdirectory":::
+ :::image type="content" source="./media/tutorial-1st-experiment-sdk-train/directory-structure.png" alt-text="Directory structure shows train.py in src subdirectory":::
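For readers of this change log who don't open the linked tutorial, here is a condensed, hypothetical sketch of what such a training script typically looks like, based on the PyTorch introductory example cited above. The `Net` class name, batch size, and logging cadence are assumptions, not the verbatim tutorial file:

```python
# Condensed sketch of a CIFAR-10 training script (illustrative, not the exact tutorial file).
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms

from model import Net  # the network defined in model.py (class name assumed)

if __name__ == "__main__":
    # Download CIFAR-10 and wrap it in a DataLoader.
    transform = transforms.Compose(
        [transforms.ToTensor(),
         transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
    trainset = torchvision.datasets.CIFAR10(
        root="../data", train=True, download=True, transform=transform)
    trainloader = torch.utils.data.DataLoader(
        trainset, batch_size=4, shuffle=True, num_workers=2)

    net = Net()
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

    # Train for two epochs with standard SGD and cross-entropy loss.
    for epoch in range(2):
        running_loss = 0.0
        for i, (inputs, labels) in enumerate(trainloader):
            optimizer.zero_grad()
            loss = criterion(net(inputs), labels)
            loss.backward()
            optimizer.step()

            running_loss += loss.item()
            if i % 2000 == 1999:
                print(f"epoch={epoch + 1}, batch={i + 1}: loss {running_loss / 2000:.3f}")
                running_loss = 0.0

    print("Finished Training")
```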
## Test locally Select **Save and run script in terminal** to run the *train.py* script directly on the compute instance.
-After the script completes, select **Refresh** above the file folders. You'll see the new data folder called **get-started/data** Expand this folder to view the downloaded data.
+After the script completes, select **Refresh** above the file folders. You see the new data folder called **get-started/data**. Expand this folder to view the downloaded data.
## Create a Python environment
-Azure Machine Learning provides the concept of an [environment](/python/api/azureml-core/azureml.core.environment.environment) to represent a reproducible, versioned Python environment for running experiments. It's easy to create an environment from a local Conda or pip environment.
+Azure Machine Learning provides the concept of an [environment](/python/api/azureml-core/azureml.core.environment.environment) to represent a reproducible, versioned Python environment for running experiments. It's easy to create an environment from a local Conda or pip environment.
-First you'll create a file with the package dependencies.
+First you create a file with the package dependencies.
1. Create a new file in the **get-started** folder called `pytorch-env.yml`:
First you'll create a file with the package dependencies.
- pytorch - torchvision ```
-1. On the toolbar, select **Save** to save the file. Close the tab if you wish.
+1. On the toolbar, select **Save** to save the file. Close the tab if you wish.
## Create the control script
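The body of the control script is elided in this diff. As a rough orientation only, an SDK v1 control script for this step typically follows the `ScriptRunConfig` pattern sketched below; the experiment name and compute target name are assumptions carried over from part 1 of the series:

```python
# run-pytorch.py - minimal sketch of an SDK v1 control script (names are illustrative).
from azureml.core import Environment, Experiment, ScriptRunConfig, Workspace

if __name__ == "__main__":
    ws = Workspace.from_config()  # reads the config.json created earlier in the series
    experiment = Experiment(workspace=ws, name="day1-experiment-train")

    # Build the environment from the Conda specification file created above.
    env = Environment.from_conda_specification(
        name="pytorch-env", file_path="pytorch-env.yml")

    config = ScriptRunConfig(
        source_directory="./src",      # folder that contains train.py and model.py
        script="train.py",
        compute_target="cpu-cluster",  # assumed compute target name
        environment=env,
    )

    run = experiment.submit(config)
    print(run.get_portal_url())  # link to the job in the studio
```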
if __name__ == "__main__":
1. Select **Save and run script in terminal** to run the *run-pytorch.py* script.
-1. You'll see a link in the terminal window that opens. Select the link to view the job.
+1. You see a link in the terminal window that opens. Select the link to view the job.
> [!NOTE] > You may see some warnings starting with *Failure while loading azureml_run_type_providers...*. You can ignore these warnings. Use the link at the bottom of these warnings to view your output. ### View the output
-1. In the page that opens, you'll see the job status. The first time you run this script, Azure Machine Learning will build a new Docker image from your PyTorch environment. The whole job might take around 10 minutes to complete. This image will be reused in future jobs to make them run much quicker.
-1. You can see view Docker build logs in the Azure Machine Learning studio. Select the **Outputs + logs** tab, and then select **20_image_build_log.txt**.
+1. On the page that opens, you see the job status. The first time you run this script, Azure Machine Learning builds a new Docker image from your PyTorch environment. The whole job might take around 10 minutes to complete. This image is reused in future jobs to make them run much quicker.
+1. You can view Docker build logs in the Azure Machine Learning studio. To view the build logs:
+ 1. Select the **Outputs + logs** tab.
+ 1. Select the **azureml-logs** folder.
+ 1. Select **20_image_build_log.txt**.
1. When the status of the job is **Completed**, select **Output + logs**.
-1. Select **std_log.txt** to view the output of your job.
+1. Select **user_logs**, then **std_log.txt** to view the output of your job.
```txt Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to ../data/cifar-10-python.tar.gz
Finished Training
If you see an error `Your total snapshot size exceeds the limit`, the **data** folder is located in the `source_directory` value used in `ScriptRunConfig`.
-Select the **...** at the end of the folder, then select **Move** to move **data** to the **get-started** folder.
+Select the **...** at the end of the folder, then select **Move** to move **data** to the **get-started** folder.
## Log training metrics
Make sure you save this file before you submit the run.
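The edits this section asks for follow the standard SDK v1 logging pattern. As a minimal sketch (the metric name and the exact place it's called inside *train.py* are assumptions):

```python
# Sketch of the metric-logging addition for train.py (SDK v1); not the verbatim tutorial diff.
from azureml.core import Run

run = Run.get_context()  # resolves to the submitted run when executed in the cloud


def log_batch_loss(loss_value: float) -> None:
    """Stream a loss value to the job's Metrics tab in the studio."""
    run.log("loss", loss_value)
```

Because the training script now imports `azureml.core`, the job environment typically also needs an `azureml` pip dependency (for example, `azureml-core`) so the import resolves remotely, which is why the submit step below reminds you to save your changes to `pytorch-env.yml` first.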
### Submit the run to Azure Machine Learning
-Select the tab for the *run-pytorch.py* script, then select **Save and run script in terminal** to re-run the *run-pytorch.py* script. Make sure you've saved your changes to `pytorch-env.yml` first.
+Select the tab for the *run-pytorch.py* script, then select **Save and run script in terminal** to rerun the *run-pytorch.py* script. Make sure you save your changes to `pytorch-env.yml` first.
-This time when you visit the studio, go to the **Metrics** tab where you can now see live updates on the model training loss! It may take a 1 to 2 minutes before the training begins.
+This time when you visit the studio, go to the **Metrics** tab where you can now see live updates on the model training loss! It may take 1 to 2 minutes before the training begins.
-## Next steps
+## Clean up resources
-In this session, you upgraded from a basic "Hello world!" script to a more realistic training script that required a specific Python environment to run. You saw how to use curated Azure Machine Learning environments. Finally, you saw how in a few lines of code you can log metrics to Azure Machine Learning.
+If you plan to continue now to another tutorial, or to start your own training jobs, skip to [Related resources](#related-resources).
+
+### Stop compute instance
+
+If you're not going to use it now, stop the compute instance:
+
+1. In the studio, on the left, select **Compute**.
+1. In the top tabs, select **Compute instances**.
+1. Select the compute instance in the list.
+1. On the top toolbar, select **Stop**.
-There are other ways to create Azure Machine Learning environments, including [from a pip requirements.txt](/python/api/azureml-core/azureml.core.environment.environment#from-pip-requirements-name--file-path-) file or [from an existing local Conda environment](/python/api/azureml-core/azureml.core.environment.environment#from-existing-conda-environment-name--conda-environment-name-).
-In the next session, you'll see how to work with data in Azure Machine Learning by uploading the CIFAR10 dataset to Azure.
+### Delete all resources
-> [!div class="nextstepaction"]
-> [Tutorial: Bring your own data](tutorial-1st-experiment-bring-data.md)
->[!NOTE]
-> If you want to finish the tutorial series here and not progress to the next step, remember to [clean up your resources](tutorial-1st-experiment-bring-data.md#clean-up-resources).
+You can also keep the resource group but delete a single workspace. Display the workspace properties and select **Delete**.
+
+## Related resources
+
+In this session, you upgraded from a basic "Hello world!" script to a more realistic training script that required a specific Python environment to run. You saw how to use curated Azure Machine Learning environments. Finally, you saw how in a few lines of code you can log metrics to Azure Machine Learning.
+
+There are other ways to create Azure Machine Learning environments, including [from a pip requirements.txt](/python/api/azureml-core/azureml.core.environment.environment#from-pip-requirements-name--file-path-) file or [from an existing local Conda environment](/python/api/azureml-core/azureml.core.environment.environment#from-existing-conda-environment-name--conda-environment-name-).
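Both alternatives are single classmethod calls in SDK v1; a minimal sketch with illustrative file and environment names:

```python
# Two alternative ways to define an SDK v1 Environment (names are illustrative).
from azureml.core import Environment

# From a pip requirements.txt file.
pip_env = Environment.from_pip_requirements(
    name="pip-env", file_path="requirements.txt")

# From a Conda environment that already exists on the local machine.
conda_env = Environment.from_existing_conda_environment(
    name="local-conda-env", conda_environment_name="my-local-env")
```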
migrate Concepts Azure Spring Apps Assessment Calculation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-azure-spring-apps-assessment-calculation.md
Previously updated : 09/05/2023 Last updated : 04/01/2024
The Azure Migrate: Discovery and assessment tool supports the following four typ
| **Azure VM** | Assessments to migrate your on-premises servers to Azure virtual machines. <br/><br/> You can assess your on-premises servers in [VMware environment](how-to-set-up-appliance-vmware.md), [Hyper-V environment](how-to-set-up-appliance-hyper-v.md), and [physical servers](how-to-set-up-appliance-physical.md) for migration to Azure VMs using this assessment type. **Azure SQL** | Assessments to migrate your on-premises SQL servers from your VMware environment to Azure SQL Database or Azure SQL Managed Instance.
-**Web apps on Azure** | Assessments to migrate your on-premises Spring Boot apps to Azure Spring Apps or ASP.NET web apps to Azure App Service.
+**Web apps on Azure** | Assessments to migrate your on-premises Spring Boot apps to Azure Spring Apps or ASP.NET/Java web apps to Azure App Service.
**Azure VMware Solution (AVS)** | Assessments to migrate your on-premises [VMware VMs](how-to-set-up-appliance-vmware.md) to [Azure VMware Solution (AVS)](../azure-vmware/introduction.md). [Learn more](concepts-azure-vmware-solution-assessment-calculation.md). An Azure Spring Apps assessment provides the following sizing criteria:
migrate Concepts Azure Vmware Solution Assessment Calculation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-azure-vmware-solution-assessment-calculation.md
Assessments you create with Azure Migrate are a point-in-time snapshot of data.
| **Azure VM** | Assessments to migrate your on-premises servers to Azure virtual machines. You can assess your on-premises servers in [VMware vSphere](how-to-set-up-appliance-vmware.md) and [Hyper-V](how-to-set-up-appliance-hyper-v.md) environment, and [physical servers](how-to-set-up-appliance-physical.md) for migration to Azure VMs using this assessment type. **Azure SQL** | Assessments to migrate your on-premises SQL servers from your VMware environment to Azure SQL Database or Azure SQL Managed Instance.
-**Azure App Service** | Assessments to migrate your on-premises ASP.NET web apps, running on IIS web servers, from your VMware vSphere environment to Azure App Service.
+**Azure App Service** | Assessments to migrate your on-premises ASP.NET web apps, running on IIS web servers, or Java web applications, running on Tomcat servers, from your VMware vSphere environment to Azure App Service.
**Azure VMware Solution (AVS)** | Assessments to migrate your on-premises vSphere servers to [Azure VMware Solution](../azure-vmware/introduction.md). You can assess your on-premises [VMware vSphere VMs](how-to-set-up-appliance-vmware.md) for migration to Azure VMware Solution using this assessment type. [Learn more](concepts-azure-vmware-solution-assessment-calculation.md) > [!NOTE]
migrate Concepts Azure Webapps Assessment Calculation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-azure-webapps-assessment-calculation.md
Previously updated : 08/02/2023 Last updated : 04/01/2024 # Assessment overview (migrate to Azure App Service)
-This article provides an overview of assessments for migrating on-premises ASP.NET web apps to Azure App Service using the [Azure Migrate: Discovery and assessment tool](./migrate-services-overview.md#azure-migrate-discovery-and-assessment-tool).
+This article provides an overview of assessments for migrating on-premises ASP.NET/Java web apps to Azure App Service using the [Azure Migrate: Discovery and assessment tool](./migrate-services-overview.md#azure-migrate-discovery-and-assessment-tool).
## What's an assessment? An assessment with the Discovery and assessment tool is a point in time snapshot of data and measures the readiness and provides cost details to host on-premises servers, databases, and web apps to Azure.
There are four types of assessments you can create using the Azure Migrate: Disc
| **Azure VM** | Assessments to migrate your on-premises servers to Azure virtual machines. <br/><br/> You can assess your on-premises servers in [VMware](how-to-set-up-appliance-vmware.md) and [Hyper-V](how-to-set-up-appliance-hyper-v.md) environment, and [physical servers](how-to-set-up-appliance-physical.md) for migration to Azure VMs using this assessment type. **Azure SQL** | Assessments to migrate your on-premises SQL servers from your VMware environment to Azure SQL Database or Azure SQL Managed Instance.
-**Azure App Service** | Assessments to migrate your on-premises ASP.NET web apps running on IIS web servers to Azure App Service.
+**Azure App Service** | Assessments to migrate your on-premises ASP.NET web apps running on IIS web servers or Java web apps running on Tomcat servers to Azure App Service.
**Azure VMware Solution (AVS)** | Assessments to migrate your on-premises servers to [Azure VMware Solution (AVS)](../azure-vmware/introduction.md). <br/><br/> You can assess your on-premises [VMware VMs](how-to-set-up-appliance-vmware.md) for migration to Azure VMware Solution (AVS) using this assessment type. [Learn more](concepts-azure-vmware-solution-assessment-calculation.md) An Azure App Service assessment provides one sizing criteria:
An Azure App Service assessment provides one sizing criteria:
| | **Configuration-based** | Assessments that make recommendations based on collected configuration data | The Azure App Service assessment takes only configuration data in to consideration for assessment calculation. Performance data for web apps isn't collected.
-## How do I assess my on-premises ASP.NET web apps?
+## How do I assess my on-premises ASP.NET/Java web apps?
You can assess your on-premises web apps by using the configuration data collected by a lightweight Azure Migrate appliance. The appliance discovers on-premises web apps and sends the configuration data to Azure Migrate. [Learn More](how-to-set-up-appliance-vmware.md)
migrate Concepts Business Case Calculation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-business-case-calculation.md
ms. Previously updated : 12/12/2023 Last updated : 04/01/2024
Currently, you can create a Business case with the two discovery sources:
**Discovery Source** | **Details** | **Migration strategies that can be used to build a business case** | |
- Use more accurate data insights collected via **Azure Migrate appliance** | You need to set up an Azure Migrate appliance for [VMware](how-to-set-up-appliance-vmware.md) or [Hyper-V](how-to-set-up-appliance-hyper-v.md) or [Physical/Bare-metal or other clouds](how-to-set-up-appliance-physical.md). The appliance discovers servers, SQL Server instance and databases, and ASP.NET webapps and sends metadata and performance (resource utilization) data to Azure Migrate. [Learn more](migrate-appliance.md). | Azure recommended to minimize cost, Migrate to all IaaS (Infrastructure as a Service), Modernize to PaaS (Platform as a Service)
+ Use more accurate data insights collected via **Azure Migrate appliance** | You need to set up an Azure Migrate appliance for [VMware](how-to-set-up-appliance-vmware.md) or [Hyper-V](how-to-set-up-appliance-hyper-v.md) or [Physical/Bare-metal or other clouds](how-to-set-up-appliance-physical.md). The appliance discovers servers, SQL Server instance and databases, and ASP.NET/Java webapps and sends metadata and performance (resource utilization) data to Azure Migrate. [Learn more](migrate-appliance.md). | Azure recommended to minimize cost, Migrate to all IaaS (Infrastructure as a Service), Modernize to PaaS (Platform as a Service)
Build a quick business case using the **servers imported via a .csv file** | You need to provide the server inventory in a [.CSV file and import in Azure Migrate](tutorial-discover-import.md) to get a quick business case based on the provided inputs. You don't need to set up the Azure Migrate appliance to discover servers for this option. | Migrate to all IaaS (Infrastructure as a Service) ## How do I use the appliance?
migrate How To Create Azure App Service Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-create-azure-app-service-assessment.md
Previously updated : 03/03/2023 Last updated : 04/01/2024 # Create an Azure App Service assessment As part of your migration journey to Azure, you assess your on-premises workloads to measure cloud readiness, identify risks, and estimate costs and complexity.
-This article shows you how to assess discovered ASP.NET web apps for migration to Azure App Service, using the Azure Migrate: Discovery and assessment tool.
+This article shows you how to assess discovered ASP.NET/Java web apps for migration to Azure App Service, using the Azure Migrate: Discovery and assessment tool.
> [!Note]
-> Discovery and assessment of ASP.NET web apps is now in preview. If you want to try out this feature in an existing project, please ensure that you have completed the [prerequisites](how-to-discover-sql-existing-project.md) in this article.
+> Discovery and assessment of ASP.NET/Java web apps is now in preview. If you want to try out this feature in an existing project, please ensure that you have completed the [prerequisites](how-to-discover-sql-existing-project.md) in this article.
## Before you start - Make sure you've [created](./create-manage-projects.md) an Azure Migrate project and have the Azure Migrate: Discovery and assessment tool added.-- To create an assessment, you need to set up an Azure Migrate appliance. The [appliance](migrate-appliance.md) discovers on-premises servers, and sends metadata and performance data to Azure Migrate. The same appliance discovers ASP.NET web apps running in your environment.
+- To create an assessment, you need to set up an Azure Migrate appliance. The [appliance](migrate-appliance.md) discovers on-premises servers, and sends metadata and performance data to Azure Migrate. The same appliance discovers ASP.NET/Java web apps running in your environment.
## Azure App Service assessment overview
migrate How To Create Azure Vmware Solution Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-create-azure-vmware-solution-assessment.md
ms. Previously updated : 04/06/2022 Last updated : 04/01/2024
There are three types of assessments you can create using Azure Migrate: Discove
| **Azure VM** | Assessments to migrate your on-premises servers to Azure virtual machines. You can assess your on-premises VMs in [VMware vSphere](how-to-set-up-appliance-vmware.md) and [Hyper-V](how-to-set-up-appliance-hyper-v.md) environment, and [physical servers](how-to-set-up-appliance-physical.md) for migration to Azure VMs using this assessment type. **Azure SQL** | Assessments to migrate your on-premises SQL servers from your VMware environment to Azure SQL Database or Azure SQL Managed Instance.
-**Azure App Service** | Assessments to migrate your on-premises ASP.NET web apps, running on IIS web servers, from your VMware vSphere environment to Azure App Service.
+**Azure App Service** | Assessments to migrate your on-premises ASP.NET/Java web apps, running on IIS web servers, from your VMware vSphere environment to Azure App Service.
**Azure VMware Solution (AVS)** | Assessments to migrate your on-premises servers to [Azure VMware Solution (AVS)](../azure-vmware/introduction.md). You can assess your on-premises VMs in [VMware vSphere environment](how-to-set-up-appliance-vmware.md) for migration to Azure VMware Solution (AVS) using this assessment type. [Learn more](concepts-azure-vmware-solution-assessment-calculation.md) > [!NOTE]
migrate Migrate Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix.md
Migration and modernization | N/A | Migrate [VMware VMs](tutorial-migrate-vmware
[DMA](/sql/dma/dma-overview) | Assess SQL Server databases. | N/A [DMS](../dms/dms-overview.md) | N/A | Migrate SQL Server, Oracle, MySQL, PostgreSQL, MongoDB. [Lakeside](https://go.microsoft.com/fwlink/?linkid=2104908) | Assess virtual desktop infrastructure (VDI) | N/A
-[Movere](https://www.movere.io/) | Assess VMware VMs, Hyper-V VMs, Xen VMs, physical servers, workstations (including VDI) and other cloud workloads. | N/A
+[Movere](/movere/overview) | Assess VMware VMs, Hyper-V VMs, Xen VMs, physical servers, workstations (including VDI) and other cloud workloads. | N/A
[RackWare](https://go.microsoft.com/fwlink/?linkid=2102735) | N/A | Migrate VMware VMs, Hyper-V VMs, Xen VMs, KVM VMs, physical servers, and other cloud workloads [Turbonomic](https://go.microsoft.com/fwlink/?linkid=2094295) | Assess VMware VMs, Hyper-V VMs, physical servers, and other cloud workloads. | N/A [UnifyCloud](https://go.microsoft.com/fwlink/?linkid=2097195) | Assess VMware VMs, Hyper-V VMs, physical servers and other cloud workloads, and SQL Server databases. | N/A
migrate Troubleshoot Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-assessment.md
Support for certain Operating System versions has been deprecated by VMware and
## Common web apps discovery errors
-Azure Migrate provides options to assess discovered ASP.NET web apps for migration to Azure App Service by using the Azure Migrate: Discovery and assessment tool. Refer to the [assessment](tutorial-assess-webapps.md) tutorial to get started.
+Azure Migrate provides options to assess discovered ASP.NET/Java web apps for migration to Azure App Service and Azure Kubernetes Service (AKS) by using the Azure Migrate: Discovery and assessment tool. Refer to the [assessment](tutorial-assess-webapps.md) tutorial to get started.
Here, typical App Service assessment errors are summarized.
migrate Tutorial Assess Aspnet Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-aspnet-aks.md
Title: Assess ASP.NET web apps for migration to Azure Kubernetes Service
+ Title: Assess ASP.NET/Java web apps for migration to Azure Kubernetes Service
description: Assessments of ASP.NET web apps to Azure Kubernetes Service using Azure Migrate Previously updated : 08/10/2023 Last updated : 04/01/2024
+zone_pivot_groups: web-apps-assessment-aks
-# Assess ASP.NET web apps for migration to Azure Kubernetes Service (preview)
+# Assess web apps for migration to Azure Kubernetes Service (preview)
+ This article shows you how to assess ASP.NET web apps for migration to [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md) using Azure Migrate. Creating an assessment for your ASP.NET web app provides key insights such as **app-readiness**, **target right-sizing** and **cost** to host and run these apps month over month. ++
+This article shows you how to assess Java web apps for migration to [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md) using Azure Migrate. Creating an assessment for your Java web app provides key insights such as **app-readiness**, **target right-sizing** and **cost** to host and run these apps month over month.
++ In this tutorial, you'll learn how to: > [!div class="checklist"] > * Choose a set of discovered ASP.NET web apps to assess for migration to AKS. > * Provide assessment configurations such as Azure Reserved Instances, target region etc. > * Get insights about the migration readiness of their assessed apps. > * Get insights on the AKS Node SKUs that can optimally host and run these apps. > * Get the estimated cost of running these apps on AKS.+
+> [!div class="checklist"]
+> * Choose a set of discovered Java web apps to assess for migration to AKS.
+> * Provide assessment configurations such as Azure Reserved Instances, target region etc.
+> * Get insights about the migration readiness of your assessed apps.
+> * Get insights on the AKS Node SKUs that can optimally host and run these apps.
+> * Get the estimated cost of running these apps on AKS.
> [!NOTE] > Tutorials show you the simplest deployment path for a scenario so that you can quickly set up a proof-of-concept. Tutorials use default options where possible and don't show all possible settings and paths.
In this tutorial, you'll learn how to:
- Deploy and configure the Azure Migrate appliance in your [VMware](./tutorial-discover-vmware.md), [Hyper-V](./tutorial-discover-hyper-v.md) or [physical environment](./tutorial-discover-physical.md). - Check the [appliance requirements](./migrate-appliance.md#appliancevmware) and [URL access](./migrate-appliance.md#url-access) to be provided. - Follow [these steps](./how-to-discover-sql-existing-project.md) to discover ASP.NET web apps running on your environment.
+- Follow [these steps](./how-to-discover-sql-existing-project.md) to discover Java web apps running on your environment.
## Create an assessment
-1. On the **Servers, databases and web apps** page, select **Assess** and then select **Web apps on Azure**.
+1. Sign in to the [Azure portal](https://ms.portal.azure.com/#home) and search for Azure Migrate.
+1. On the **Azure Migrate** page, under **Migration goals**, select **Servers, databases and web apps**.
+1. On the **Servers, databases and web apps** page, under **Assessment tools**, select **Web apps on Azure** from the **Assess** dropdown menu.
:::image type="content" source="./media/tutorial-assess-aspnet-aks/hub-assess-webapps.png" alt-text="Screenshot of selecting web app assessments.":::
-2. On the **Basics** tab, select the **Scenario** dropdown and select **Web apps to AKS**.
+1. On the **Create assessment** page, under the **Basics** tab, do the following:
+ 1. **Scenario**: Select **Web apps to AKS**.
:::image type="content" source="./media/tutorial-assess-aspnet-aks/create-basics-scenario.png" alt-text="Screenshot of selecting the scenario for web app assessment.":::
-3. On the same tab, select **Edit** to modify assessment settings. See the table below to update the various assessment settings.
+ 2. Select **Edit** to modify assessment settings. See the table below to update the various assessment settings.
:::image type="content" source="./media/tutorial-assess-aspnet-aks/create-basics-settings.png" alt-text="Screenshot of changing the target settings for web app assessment."::: | Setting | Possible Values | Comments | | | | | | Target Location | All locations supported by AKS | Used to generate regional cost for AKS. |
- | Environment Type | Production <br> Dev/Test | Allows you to toggle between Pay-As-You-Go and Pay-As-You-Go Dev/Test [offers](https://azure.microsoft.com/support/legal/offer-details/). |
- | Offer/Licensing program | Pay-As-You-Go <br> Enterprise Agreement | Allows you to toggle between Pay-As-You-Go and Enterprise Agreement [offers](https://azure.microsoft.com/support/legal/offer-details/). |
+ | Environment Type | Production <br> Dev/Test | Allows you to toggle between pay-as-you-go and pay-as-you-go Dev/Test [offers](https://azure.microsoft.com/support/legal/offer-details/). |
+ | Offer/Licensing program | Pay-as-you-go <br> Enterprise Agreement | Allows you to toggle between pay-as-you-go and Enterprise Agreement [offers](https://azure.microsoft.com/support/legal/offer-details/). |
| Currency | All common currencies such as USD, INR, GBP, Euro | We generate the cost in the currency selected here. | | Discount Percentage | Numeric decimal value | Use this to factor in any custom discount agreements with Microsoft. This is disabled if Savings options are selected. | | EA subscription | Subscription ID | Select the subscription ID for which you have an Enterprise Agreement. |
- | Savings options | 1 year reserved <br> 3 years reserved <br> 1 year savings plan <br> 3 years savings plan <br> None | Select a savings option if you have opted for [Reserved Instances](../cost-management-billing/reservations/save-compute-costs-reservations.md) or [Savings Plan](https://azure.microsoft.com/pricing/offers/savings-plan-compute/). |
- | Category | All <br> Compute optimized <br> General purpose <br> GPU <br> High performance compute <br> Isolated <br> Memory optimized <br> Storage optimized | Selecting a particular SKU category will ensure we recommend the best AKS Node SKUs from that category. |
+ | Savings options | 1 year reserved <br> 3 years reserved <br> 1 year savings plan <br> 3 years savings plan <br> None | Select a savings option if you've opted for [Reserved Instances](../cost-management-billing/reservations/save-compute-costs-reservations.md) or [Savings Plan](https://azure.microsoft.com/pricing/offers/savings-plan-compute/). |
+ | Category | All <br> Compute optimized <br> General purpose <br> GPU <br> High performance compute <br> Isolated <br> Memory optimized <br> Storage optimized | Selecting a particular SKU category ensures we recommend the best AKS Node SKUs from that category. |
| AKS pricing tier | Standard | Pricing tier for AKS |
-4. After reviewing the assessment settings, select **Next**.
+1. After reviewing the assessment settings, select **Next: Select servers to assess**.
-5. Select the list of servers which host the web apps to be assessed. Provide a name to this group of servers as well as the assessment. You can also filter web apps discovered by a specific appliance, in case your project has more than one.
+1. Under the **Select servers to assess** tab, do the following:
+ - **Assessment name**: Specify a name for the assessment.
+ - **Select or create a group**: Select **Create New** and specify a group name. You can also use an existing group.
+ - **Appliance name**: Select the appliance.
+ ::: zone pivot="asp-net"
+ - **Web app type**: Select **ASP.NET**.
+ ::: zone-end
+ ::: zone pivot="java"
+ - **Web app type**: Select **Java**.
+ ::: zone-end
+ - Select the servers that host the web apps to be assessed from the table.
+ - Select **Next** to review the high-level assessment details.
- :::image type="content" source="./media/tutorial-assess-aspnet-aks/create-server-selection.png" alt-text="Screenshot of selecting servers containing the web apps to be assessed.":::
+ :::image type="content" source="./media/tutorial-assess-aspnet-aks/create-server-selection.png" alt-text="Screenshot of selecting servers containing the web apps to be assessed.":::
-6. Select **Next** to review the high-level assessment details. Select **Create assessment**.
+1. Under **Review + create assessment** tab, review the assessment details, and select **Create assessment** to create the group and run the assessment.
:::image type="content" source="./media/tutorial-assess-aspnet-aks/create-review.png" alt-text="Screenshot of reviewing the high-level assessment details before creation.":::
In this tutorial, you'll learn how to:
The assessment can take around 10 minutes to complete.
-1. On the **Servers, databases and web apps** page, select the hyperlink next to **Web apps on Azure**.
+1. On the **Azure Migrate** page, under **Migration goals**, select **Servers, databases and web apps**.
+1. On the **Servers, databases and web apps** page, under **Assessment tools** > **Assessments**, select the number next to the Web apps on Azure assessment.
+1. On the **Assessments** page, select a desired assessment name to view from the list of assessments.
:::image type="content" source="./media/tutorial-assess-aspnet-aks/hub-view-assessments.png" alt-text="Screenshot of clicking the hyperlink to see the list of web app assessments.":::
-2. On the **Assessments** page, use the search bar to filter for your assessment. It should be in the **Ready** state.
+2. Use the search bar to filter for your assessment. It should be in the **Ready** state.
:::image type="content" source="./media/tutorial-assess-aspnet-aks/assessment-list.png" alt-text="Screenshot of filtering for the created assessment.":::
The assessment can take around 10 minutes to complete.
### Assessment overview :::image type="content" source="./media/tutorial-assess-aspnet-aks/assessment-overview.png" alt-text="Screenshot of the assessment overview.":::+ On the **Overview** page, you're provided with the following details:
For each issue or warning, you're provided the description, cause and mitigation
:::image type="content" source="./media/tutorial-assess-aspnet-aks/assessment-readiness-errors.png" alt-text="Screenshot of the readiness errors and warnings for a web app.":::
-Selecting the recommended cluster for the app opens the **Cluster details** page. This page surfaces details such as the number of system and user node pools, the SKU for each node pool as well as the web apps recommended for this cluster. Typically, an assessment will only generate a single cluster. The number of clusters increases when the web apps in the assessment start hitting AKS cluster limits.
+Selecting the recommended cluster for the app opens the **Cluster details** page. This page surfaces details such as the number of system and user node pools, the SKU for each node pool and the web apps recommended for this cluster. Typically, an assessment will only generate a single cluster. The number of clusters increases when the web apps in the assessment start hitting AKS cluster limits.
:::image type="content" source="./media/tutorial-assess-aspnet-aks/assessment-cluster.png" alt-text="Screenshot of the recommended cluster page.":::
For each node pool, you see the associated node SKU, node count and the number o
- [Modernize](./tutorial-modernize-asp-net-aks.md) your ASP.NET web apps at-scale to Azure Kubernetes Service. - Optimize [Windows Dockerfiles](/virtualization/windowscontainers/manage-docker/optimize-windows-dockerfile?context=/azure/aks/context/aks-context).-- [Review and implement best practices](../aks/best-practices.md) to build and manage apps on AKS.
+- [Review and implement best practices](../aks/best-practices.md) to build and manage apps on AKS.
migrate Tutorial Assess Webapps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-webapps.md
Last updated 08/24/2023
+zone_pivot_groups: web-apps-assessment-app-service
-# Tutorial: Assess ASP.NET web apps for migration to Azure App Service
+# Tutorial: Assess web apps for migration to Azure App Service
+++
+As part of your migration journey to Azure, assess your on-premises workloads to measure cloud readiness, identify risks, and estimate costs and complexity.
-As part of your migration journey to Azure, you assess your on-premises workloads to measure cloud readiness, identify risks, and estimate costs and complexity.
This article shows you how to assess discovered ASP.NET web apps running on IIS web servers in preparation for migration to Azure App Service Code and Azure App Service Containers, using the Azure Migrate: Discovery and assessment tool. [Learn more](../app-service/overview.md) about Azure App Service. ++
+As part of your migration journey to Azure, assess your on-premises workloads to measure cloud readiness, identify risks, and estimate costs and complexity.
+
+This article shows you how to assess discovered Java web apps running on Tomcat servers in preparation for migration to Azure App Service Code and Azure App Service Containers, using the Azure Migrate: Discovery and assessment tool. [Learn more](../app-service/overview.md) about Azure App Service.
++ In this tutorial, you learn how to: > [!div class="checklist"] > * Run an assessment based on web apps configuration data.
-> * Review an Azure App Service assessment
+> * Review an Azure App Service assessment.
> [!NOTE] > Tutorials show the quickest path for trying out a scenario and use default options where possible.
In this tutorial, you learn how to:
To run an assessment, follow these steps:
-1. On the **Get started** page > **Servers, databases and web apps**, select **Discover, assess and migrate**.
-2. On **Azure Migrate: Discovery and assessment**, select **Assess** and choose the assessment type as **Web apps on Azure**.
+1. Sign in to the [Azure portal](https://ms.portal.azure.com/#home) and search for Azure Migrate.
+1. On the **Azure Migrate** page, under **Migration goals**, select **Servers, databases and web apps**.
+2. On the **Servers, databases and web apps** page, under **Assessment tools**, select **Web apps on Azure** from the **Assess** dropdown menu.
:::image type="content" source="./media/tutorial-assess-webapps/assess-web-apps.png" alt-text="Screenshot of Overview page for Azure Migrate.":::
-3. In **Create assessment**, the assessment type is pre-selected as **Web apps on Azure** and the discovery source defaulted to **Servers discovered from Azure Migrate appliance**. Select the **Scenario** as **Web apps to App Service**.
+3. On the **Create assessment** page, under the **Basics** tab, do the following:
+ 1. The assessment type is pre-selected as **Web apps on Azure** and the discovery source defaulted to **Servers discovered from Azure Migrate appliance**. Select the **Scenario** as **Web apps to App Service**.
- :::image type="content" source="./media/tutorial-assess-webapps/create-assess-scenario.png" alt-text="Screenshot of Create assessment page for Azure Migrate.":::
+ :::image type="content" source="./media/tutorial-assess-webapps/create-assess-scenario.png" alt-text="Screenshot of Create assessment page for Azure Migrate.":::
-4. Select **Edit** to review the assessment properties.
+ 1. Select **Edit** to review the assessment properties.
- The following are included in Azure App Service assessment properties:
+ The following are included in Azure App Service assessment properties:
- :::image type="content" source="./media/tutorial-assess-webapps/settings.png" alt-text="Screenshot of assessment settings for Azure Migrate.":::
+ :::image type="content" source="./media/tutorial-assess-webapps/settings.png" alt-text="Screenshot of assessment settings for Azure Migrate.":::
- **Property** | **Details**
- |
- **Target location** | The Azure region to which you want to migrate. Azure App Service configuration and cost recommendations are based on the location that you specify.
- **Environment type** | Type of environment in which it's running.
- **Offer** | The [Azure offer](https://azure.microsoft.com/support/legal/offer-details/) in which you're enrolled. The assessment estimates the cost for that offer.
- **Currency** | The billing currency for your account.
- **Discount (%)** | Any subscription-specific discounts that you receive on top of the Azure offer. The default setting is 0%.
- **EA subscription** | Specifies that an Enterprise Agreement (EA) subscription is used for cost estimation. Takes into account the discount applicable to the subscription. <br/><br/> Retain the default settings for reserved instances and discount (%) properties.
- **Savings options (Compute)** | The Savings option the assessment must consider.
- **Isolation required** | Select **Yes** if you want your web apps to run in a private and dedicated environment in an Azure datacenter.
+ **Property** | **Details**
+ |
+ **Target location** | The Azure region to which you want to migrate. Azure App Service configuration and cost recommendations are based on the location that you specify.
+ **Environment type** | Type of environment in which it's running.
+ **Offer** | The [Azure offer](https://azure.microsoft.com/support/legal/offer-details/) in which you're enrolled. The assessment estimates the cost for that offer.
+ **Currency** | The billing currency for your account.
+ **Discount (%)** | Any subscription-specific discounts that you receive on top of the Azure offer. The default setting is 0%.
+ **EA subscription** | Specifies that an Enterprise Agreement (EA) subscription is used for cost estimation. Takes into account the discount applicable to the subscription. <br/><br/> Retain the default settings for reserved instances and discount (%) properties.
+ **Savings options (Compute)** | The Savings option the assessment must consider.
+ **Isolation required** | Select **Yes** if you want your web apps to run in a private and dedicated environment in an Azure datacenter.
- In **Savings options (Compute)**, specify the savings option that you want the assessment to consider, helping to optimize your Azure Compute cost. - [Azure reservations](../cost-management-billing/reservations/save-compute-costs-reservations.md) (one year or three year reserved) are a good option for the most consistently running resources.
To run an assessment, follow these steps:
- When you select *None*, the Azure Compute cost is based on the Pay-as-you-go rate or based on actual usage. - You need to select Pay-as-you-go in offer/licensing program to be able to use Reserved Instances or Azure Savings Plan. When you select any savings option other than *None*, the **Discount (%)** setting isn't applicable.
-1. Select **Save** if you made any changes.
-1. In **Create assessment**, select **Next**.
-1. In **Select servers to assess** > **Assessment name**, specify a name for the assessment.
-1. In **Select or create a group**, select **Create New** and specify a group name. You can also use an existing group.
-1. Select the appliance and select the servers that you want to add to the group. Select **Next**.
+ 1. Select **Save** if you made any changes.
+1. On the **Create assessment** page, select **Next: Select servers to assess**.
+1. Under the **Select servers to assess** tab, do the following:
+ - **Assessment name**: Specify a name for the assessment.
+ - **Select or create a group**: Select **Create New** and specify a group name. You can also use an existing group.
+ - **Appliance name**: Select the appliance.
+ ::: zone pivot="asp-net"
+ - **Web app type**: Select **ASP.NET**.
+ ::: zone-end
+ ::: zone pivot="java"
+ - **Web app type**: Select **Java**.
+ ::: zone-end
+ - Select the servers that you want to add to the group from the table.
+ - Select **Next**.
:::image type="content" source="./media/tutorial-assess-webapps/server-selection.png" alt-text="Screenshot of selected servers.":::
-1. In **Review + create assessment**, review the assessment details, and select **Create Assessment** to create the group and run the assessment.
+1. Under **Review + create assessment** tab, review the assessment details, and select **Create assessment** to create the group and run the assessment.
:::image type="content" source="./media/tutorial-assess-webapps/create-app-review.png" alt-text="Screenshot of create assessment."::: 1. After the assessment is created, go to **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**. Refresh the tile data by selecting the **Refresh** option on top of the tile. Wait for the data to refresh.
-1. Select the number next to **Web apps on Azure** in the **Assessment** section.
+1. On the **Servers, databases and web apps** page, under **Assessment tools** > **Assessments**, select the number next to **Web apps on Azure** in the **Assessment** section.
1. Select the assessment name that you wish to view. ## Review an assessment To view an assessment, follow these steps:
-1. In **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, select the number next to the Web apps on Azure assessment.
-2. Select the assessment name, which you wish to view.
+1. On the **Azure Migrate** page, under **Migration goals**, select **Servers, databases and web apps**.
+1. On the **Servers, databases and web apps** page, under **Assessment tools** > **Assessments**, select the number next to the Web apps on Azure assessment.
+1. On the **Assessments** page, select a desired assessment name to view from the list of assessments.
:::image type="content" source="./media/tutorial-assess-webapps/overview.png" alt-text="Screenshot of Overview screen.":::
- The **Overview** screen contains 3 sections: Essentials, Assessed entities, and Migration scenario.
-
- **Essentials**
-
- The **Essentials** section displays the group the assessed entity belongs to, its status, the location, discovery source, and currency in US dollars.
-
- **Assessed entities**
-
- This section displays the number of servers selected for the assessments, number of Azure app services in the selected servers, and the number of distinct Sprint Boot app instances that were assessed.
-
- **Migration scenario**
+ The **Overview** page contains 3 sections:
- This section provides a pictorial representation of the number of apps that are ready, ready with conditions, and not ready. You can see two graphical representations, one for *All Web applications to App Service Code* and the other for *All Web applications to App Service Containers*. In addition, it also lists the number of apps ready to migrate and the estimated cost for the migration for the apps that are ready to migrate.
+ - **Essentials**: The **Essentials** section displays the group the assessed entity belongs to, its status, the location, discovery source, and currency in US dollars.
+ - **Assessed entities**: This section displays the number of servers selected for the assessment, number of Azure app services in the selected servers, and the number of distinct Spring Boot app instances that were assessed.
+ - **Migration scenario**: This section provides a pictorial representation of the number of apps that are ready, ready with conditions, and not ready. You can see two graphical representations, one for *All Web applications to App Service Code* and the other for *All Web applications to App Service Containers*. In addition, it also lists the number of apps ready to migrate and the estimated cost for the migration for the apps that are ready to migrate.
3. Review the assessment summary. You can also edit the assessment properties or recalculate the assessment. ### Review readiness
-Review the Readiness for the web apps by following these steps:
+To review the readiness for the web apps, follow these steps:
-1. In **Assessments**, select the name of the assessment that you want to view.
+1. On **Assessments**, select the name of the assessment that you want to view.
1. Select **View more details** to view more details about each app and instances. Review the Azure App service Code and Azure App service Container readiness column in the table for the assessed web apps:
migrate Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/whats-new.md
ms. Previously updated : 02/26/2024 Last updated : 04/01/2024
[Azure Migrate](migrate-services-overview.md) helps you to discover, assess and migrate on-premises servers, apps, and data to the Microsoft Azure cloud. This article summarizes new releases and features in Azure Migrate.
+## Update (April 2024)
+
+- Public preview: You can now assess your Java (Tomcat) web apps for migration to both Azure App Service and Azure Kubernetes Service (AKS).
+ ## Update (March 2024) - Public preview: Springboot Apps discovery and assessment is now available using Packaged solution to deploy Kubernetes appliance.
mysql Migrate Single Flexible In Place Auto Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/migrate-single-flexible-in-place-auto-migration.md
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]
-**In-place automigration** from Azure Database for MySQL ΓÇô Single Server to Flexible Server is a service-initiated in-place migration during planned maintenance window for Single Server database workloads with **Basic or General Purpose SKU**, data storage used **<= 20 GiB** and **no complex features (CMK, AAD, Read Replica, Private Link) enabled**. The eligible servers are identified by the service and are sent an advance notification detailing steps to review migration details.
+**In-place automigration** from Azure Database for MySQL – Single Server to Flexible Server is a service-initiated in-place migration during a planned maintenance window for Single Server database workloads with **Basic, General Purpose or Memory Optimized SKU**, data storage used **<= 20 GiB** and **no complex features (CMK, AAD, Read Replica, Private Link) enabled**. The eligible servers are identified by the service and are sent an advance notification detailing steps to review migration details.
The in-place migration provides a highly resilient and self-healing offline migration experience during a planned maintenance window, with less than **5 mins** of downtime. It uses backup and restore technology for faster migration time. This migration removes the overhead of manually migrating your server and ensures you can take advantage of the benefits of Flexible Server, including better price & performance, granular control over database configuration, and custom maintenance windows. The key phases of the migration are:
The in-place migration provides a highly resilient and self-healing offline migr
* The **migrated Flexible Server is online** and can now be managed via Azure portal/CLI. Stopped Single Server is deleted 7 days after the migration. > [!NOTE]
-> In-place migration is only for Single Server database workloads with Basic or GP SKU, data storage used < 10 GiB and no complex features (CMK, AAD, Read Replica, Private Link) enabled. All other Single Server workloads are recommended to use user-initiated migration tooling offered by Azure - Azure DMS, Azure MySQL Import to migrate.
+> If your Single Server instance has General Purpose V1 storage, your scheduled instance will undergo an additional restart operation 12 hours prior to the scheduled migration time. This restart operation serves to enable the log_bin server parameter needed to upgrade the instance to General Purpose V2 storage before undergoing the in-place auto-migration.
## Eligibility
-* If you own a Single Server workload with Basic or GP SKU, data storage used <= 20 GiB and no complex features (CMK, AAD, Read Replica, Private Link) enabled, you can now nominate yourself (if not already scheduled by the service) for auto-migration by submitting your server details through this [form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR4lhLelkCklCuumNujnaQ-ZUQzRKSVBBV0VXTFRMSDFKSUtLUDlaNTA5Wi4u).
+* If you own a Single Server workload with Basic, General Purpose or Memory Optimized SKU, data storage used <= 20 GiB and no complex features (CMK, AAD, Read Replica, Private Link) enabled, you can now nominate yourself (if not already scheduled by the service) for auto-migration by submitting your server details through this [form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR4lhLelkCklCuumNujnaQ-ZUQzRKSVBBV0VXTFRMSDFKSUtLUDlaNTA5Wi4u).
## Configure migration alerts and review migration schedule
networking Networking Partners Msp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/networking-partners-msp.md
Use the links in this section for more information about managed cloud networkin
|[Aryaka Networks](https://www.aryaka.com/azure-msp-vwan-managed-service-provider-launch-partner-aryaka/)||[Aryaka Azure Connect](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/aryaka.cloudconnect_azure_19?tab=Overview)|[Aryaka Managed SD-WAN for Azure Networking Virtual](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/aryaka.aryaka_azure_virtual_wan?tab=Overview) | | | |[AXESDN](https://www.axesdn.com/en/azure-msp.html)||[AXESDN Managed Azure ExpressRoute](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/1584591601184.axesdn_managed_azure_expressroute?tab=Overview)|[AXESDN Managed Azure Virtual WAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/1584591601184.axesdn_managed_azure_virtualwan?tab=Overview) | | | |[BT](https://www.globalservices.bt.com/en/solutions/products/cloud-connect-azure)|[Network Transformation Consulting: 1-Hr Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/bt-americas-inc.network-transformation-consulting);[BT Cloud Connect Azure](https://azuremarketplace.microsoft.com/marketplace/consulting-services/btenterprise-hq.bt_caf_vwan_landingzone)|[BT Cloud Connect Azure ExpressRoute](https://azuremarketplace.microsoft.com/marketplace/consulting-services/btenterprise-hq.bt_caf_vwan_landingzone)|[BT Cloud Connect Azure VWAN](https://azuremarketplace.microsoft.com/marketplace/consulting-services/btenterprise-hq.bt_caf_vwan_landingzone)|||
-|[BUI](https://www.bui.co.za/)|[a2zManaged Cloud Management](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bui.a2zmanagement?tab=Overview)||[BUI Managed Azure vWAN using VMware SD-WAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bui.bui_managed_vwan?tab=Overview)||[BUI CyberSoC](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bui.bui_mxdr_saas)|
+|[BUI](https://www.bui.co.za/)|[a2zManaged Cloud Management](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bui.bui_a2zmanaged)||[BUI Managed Azure vWAN using VMware SD-WAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bui.bui_managed_vwan?tab=Overview)||[BUI CyberSoC](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bui.bui_mxdr_saas)|
|[Coevolve](https://www.coevolve.com/services/azure-networking-services/)|||[Managed Azure Virtual WAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/coevolveptylimited1581027739259.coevolve-managed-azure-vwan?tab=Overview);[Managed VMware SD-WAN Virtual Edge](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/coevolveptylimited1581027739259.managed-vmware-sdwan-edge?tab=Overview)||| |[Colt](https://cloud.telekom.de/de/infrastruktur/microsoft-azure/azure-networking)|[Network optimization on Azure: 2-hr Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/colttechnologyservices.azure_networking)||||| |[Deutsche Telekom](https://cloud.telekom.de/de/infrastruktur/microsoft-azure/azure-networking)|[Network connectivity to Azure: 2-Hr assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/telekomdeutschlandgmbh1617272539503.azure_netzwerkoptimierung_2_stunden?search=telekom&page=1); [Cloud Transformation with Azure: 1-Day Workshop](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/telekomdeutschlandgmbh1617272539503.azure_cloudtransformation_1_tag?search=telekom&page=1)|[Managed ExpressRoute](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/telekomdeutschlandgmbh1617272539503.azure_intraselect_cloud_connect_implementation?search=telekom&page=1)|||[Azure Networking and Security: 1-Day Workshop](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/telekomdeutschlandgmbh1617272539503.azure_netzwerke_und_sicherheit_1_tag?search=telekom&page=1); [Intraselect SecureConnect: 1-Week Implementation](https://appsource.microsoft.com/de-de/marketplace/consulting-services/telekomdeutschlandgmbh1617272539503.azure_intraselect_secure_connect_implementation?tab=Overview)|
operator-nexus Howto Kubernetes Cluster Agent Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-kubernetes-cluster-agent-pools.md
In this article, you learn how to work with agent pools in a Nexus Kubernetes cluster. Agent pools serve as groups of nodes with the same configuration and play a key role in managing your applications. Nexus Kubernetes clusters offer two types of agent pools.
- * System agent pools are designed for hosting critical system pods like CoreDNS and metrics-server.
- * User agent pools are designed for hosting your application pods.
-Application pods can be scheduled on system node pools if you wish to only have one pool in your Kubernetes cluster. Nexus Kubernetes cluster must have an initial agent pool that includes at least one system node pool with at least one node.
+* System agent pools are designed for hosting critical system pods like CoreDNS and metrics-server.
+* User agent pools are designed for hosting your application pods.
+
+Application pods can be scheduled on system agent pools if you wish to only have one pool in your Kubernetes cluster. A Nexus Kubernetes cluster must have an initial agent pool that includes at least one system agent pool with at least one node.
## Prerequisites Before proceeding with this how-to guide, it's recommended that you:
- * Refer to the Nexus Kubernetes cluster [QuickStart guide](./quickstarts-kubernetes-cluster-deployment-bicep.md) for a comprehensive overview and steps involved.
- * Ensure that you meet the outlined prerequisites to ensure smooth implementation of the guide.
+* Refer to the Nexus Kubernetes cluster [QuickStart guide](./quickstarts-kubernetes-cluster-deployment-bicep.md) for a comprehensive overview and steps involved.
+* Ensure that you meet the outlined prerequisites to ensure smooth implementation of the guide.
## Limitations
- * You can delete system node pools, provided you have another system node pool to take its place in the Nexus Kubernetes cluster.
- * System pools must contain at least one node.
- * You can't change the VM size of a node pool after you create it.
- * Each Nexus Kubernetes cluster requires at least one system node pool.
- * Don't run application workloads on Kubernetes control plane nodes, as they're designed only for managing the cluster, and doing so can harm its performance and stability.
+
+* You can delete system agent pools, provided you have another system agent pool to take its place in the Nexus Kubernetes cluster.
+* System pools must contain at least one node.
+* You can't change the VM size of an agent pool after you create it.
+* Each Nexus Kubernetes cluster requires at least one system agent pool.
+* Don't run application workloads on Kubernetes control plane nodes, as they're designed only for managing the cluster, and doing so can harm its performance and stability.
## System pool
-For a system node pool, Nexus Kubernetes automatically assigns the label `kubernetes.azure.com/mode: system` to its nodes. This label causes Nexus Kubernetes to prefer scheduling system pods on node pools that contain this label. This label doesn't prevent you from scheduling application pods on system node pools. However, we recommend you isolate critical system pods from your application pods to prevent misconfigured or rogue application pods from accidentally killing system pods.
-You can enforce this behavior by creating a dedicated system node pool. Use the `CriticalAddonsOnly=true:NoSchedule` taint to prevent application pods from being scheduled on system node pools. If you intend to use the system pool for application pods (not dedicated), don't apply any application specific taints to the pool, as applying such taints can lead to cluster creation failures.
+For a system agent pool, Nexus Kubernetes automatically assigns the label `kubernetes.azure.com/mode: system` to its nodes. This label causes Nexus Kubernetes to prefer scheduling system pods on agent pools that contain this label. This label doesn't prevent you from scheduling application pods on system agent pools. However, we recommend you isolate critical system pods from your application pods to prevent misconfigured or rogue application pods from accidentally killing system pods.
+
+You can enforce this behavior by creating a dedicated system agent pool. Use the `CriticalAddonsOnly=true:NoSchedule` taint to prevent application pods from being scheduled on system agent pools. If you intend to use the system pool for application pods (not dedicated), don't apply any application-specific taints to the pool, as applying such taints can lead to cluster creation failures.
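For example, you can check which nodes carry the system mode label and whether the `CriticalAddonsOnly` taint is applied. This is a quick sketch; the node name is illustrative:

```bash
# Show the system/user mode label for every node in the cluster.
kubectl get nodes -L kubernetes.azure.com/mode

# Check whether a specific node carries the CriticalAddonsOnly taint (node name is illustrative).
kubectl describe node cluster-01-agentpool1-md-abcde | grep -i taints
```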
> [!IMPORTANT]
-> If you run a single system node pool for your Nexus Kubernetes cluster in a production environment, we recommend you use at least three nodes for the node pool.
+> If you run a single system agent pool for your Nexus Kubernetes cluster in a production environment, we recommend you use at least three nodes for the agent pool.
## User pool
The user pool, on the other hand, is designed for your applications. This dedica
Choosing how to utilize your system pool and user pool depends largely on your specific requirements and use case. Both dedicated and shared methods offer unique advantages. Dedicated pools can isolate workloads and provide guaranteed resources, while shared pools can optimize resource usage across the cluster.
-Always consider your cluster's resource capacity, the nature of your workloads, and the required level of resiliency when making your decision. By managing and understanding these node pools effectively, you can optimize your Nexus Kubernetes cluster to best fit your operational needs.
+Always consider your cluster's resource capacity, the nature of your workloads, and the required level of resiliency when making your decision. By managing and understanding these agent pools effectively, you can optimize your Nexus Kubernetes cluster to best fit your operational needs.
Refer to the [QuickStart guide](./quickstarts-kubernetes-cluster-deployment-bicep.md#add-an-agent-pool) to add new agent pools and experiment with configurations in your Nexus Kubernetes cluster.
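For example, adding a user agent pool with the Azure CLI might look roughly like the following sketch. The `az networkcloud kubernetescluster agentpool create` parameter names shown here are assumptions based on the `networkcloud` CLI extension, so verify them with `--help` before use:

```azurecli-interactive
# Hypothetical names and values for illustration only; verify parameter names with
# 'az networkcloud kubernetescluster agentpool create --help'.
# Depending on your environment, an --extended-location (custom location) argument may also be required.
az networkcloud kubernetescluster agentpool create \
  --name "myNexusK8sCluster-nodepool-2" \
  --kubernetes-cluster-name "myNexusK8sCluster" \
  --resource-group "myResourceGroup" \
  --mode "User" \
  --count 3 \
  --vm-sku-name "<VM_SKU>"
```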
operator-nexus Howto Kubernetes Cluster Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-kubernetes-cluster-connect.md
Title: Connect to Azure Operator Nexus Kubernetes cluster
-description: Learn how to connect to Azure Operator Nexus Kubernetes cluster for interacting, troubleshooting, and maintenance tasks
+description: Learn how to connect to Azure Operator Nexus Kubernetes cluster for interacting, troubleshooting, and maintenance tasks.
# Connect to Azure Operator Nexus Kubernetes cluster
-This article provides instructions on how to connect to Azure Operator Nexus Kubernetes cluster and its nodes. It includes details on how to connect to the cluster from both Azure and on-premises environments, and how to do so when the ExpressRoute is in both connected and disconnected modes.
-
-In Azure, connected mode and disconnected mode refer to the state of an ExpressRoute circuit. [ExpressRoute](../expressroute/expressroute-introduction.md) is a service provided by Azure that enables organizations to establish a private, high-throughput connection between their on-premises infrastructure and Azure datacenters.
-
-* Connected Mode: In connected mode, the ExpressRoute circuit is fully operational and provides a private connection between your on-premises infrastructure and Azure services. This mode is ideal for scenarios where you need constant connectivity to Azure.
-* Disconnected Mode: In disconnected mode, the ExpressRoute circuit is partially or fully down and is unable to provide connectivity to Azure services. This mode is useful when you want to perform maintenance on the circuit or need to temporarily disconnect from Azure.
-
-> [!IMPORTANT]
-> While the ExpressRoute circuit is in disconnected mode, traffic will not be able to flow between your on-premises environment and Azure. Therefore, it is recommended to only use disconnected mode when necessary, and to monitor the circuit closely to ensure it is brought back to connected mode as soon as possible.
+Throughout the lifecycle of your Azure Operator Nexus Kubernetes cluster, you eventually need to directly access a cluster node. This access could be for maintenance, log collection, or troubleshooting operations. You access a node through authentication, and the available methods vary depending on how you connect. This article discusses two options for securely authenticating against cluster nodes. For security reasons, cluster nodes aren't exposed to the internet. Instead, to connect directly to cluster nodes, you need to use either `kubectl debug` or the host's IP address from a jumpbox.
## Prerequisites * An Azure Operator Nexus Kubernetes cluster deployed in a resource group in your Azure subscription. * SSH private key for the cluster nodes.
-* If you're connecting in disconnected mode, you must have a jumpbox VM deployed in the same virtual network as the cluster nodes.
+* To SSH using the node IP address, you must deploy a jumpbox VM on the same Container Network Interface (CNI) network as the cluster nodes.
-## Connected mode access
+## Access to cluster nodes via Azure Arc for servers
-When operating in connected mode, it's possible to connect to the cluster's kube-api server using the `az connectedk8s proxy` CLI command. Also it's possible to SSH into the worker nodes for troubleshooting or maintenance tasks from Azure using the ExpressRoute circuit.
+The `az ssh arc` command allows users to remotely access a cluster VM that has been connected to Azure Arc. This method is a secure way to SSH into the cluster node directly from the command line, making it a quick and efficient method for remote management.
-### Azure Arc for Kubernetes
+> [!NOTE]
+> Operator Nexus Kubernetes cluster nodes are Arc connected servers by default.
+1. Set the required variables. Replace the placeholders with the actual values relevant to your Azure environment and Nexus Kubernetes cluster.
-### Access to cluster nodes via Azure Arc for Kubernetes
-Once you are connected to a cluster via Arc for Kuberentes, you can connect to individual Kubernetes Node using the `kubectl debug` command to run a privileged container on your node.
+ ```bash
+ RESOURCE_GROUP="myResourceGroup" # Resource group where the Nexus Kubernetes cluster is deployed
+ CLUSTER_NAME="myNexusK8sCluster" # Name of the Nexus Kubernetes cluster
+ SUBSCRIPTION_ID="<Subscription ID>" # Azure subscription ID
+ ADMIN_USERNAME="azureuser" # Username for the cluster administrator (--admin-username parameter value used during cluster creation)
+ SSH_PRIVATE_KEY_FILE="<vm_ssh_id_rsa>" # Path to the SSH private key file
+ MANAGED_RESOURCE_GROUP=$(az networkcloud kubernetescluster show -n $CLUSTER_NAME -g $RESOURCE_GROUP --subscription $SUBSCRIPTION_ID --output tsv --query managedResourceGroupConfiguration.name)
+ ```
-1. List the nodes in your Nexus Kubernetes cluster:
+2. Get the available cluster node names.
- ```console
- $> kubectl get nodes
- NAME STATUS ROLES AGE VERSION
- cluster-01-627e99ee-agentpool1-md-chfwd Ready <none> 125m v1.27.1
- cluster-01-627e99ee-agentpool1-md-kfw4t Ready <none> 125m v1.27.1
- cluster-01-627e99ee-agentpool1-md-z2n8n Ready <none> 124m v1.27.1
- cluster-01-627e99ee-control-plane-5scjz Ready control-plane 129m v1.27.1
+ ```azurecli-interactive
+ az networkcloud kubernetescluster show --name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --subscription $SUBSCRIPTION_ID -o json | jq '.nodes[].name'
```
-2. Start a privileged container on your node and connect to it:
+3. Sample output:
- ```console
- $> kubectl debug node/cluster-01-627e99ee-agentpool1-md-chfwd -it --image=mcr.microsoft.com/cbl-mariner/base/core:2.0
- Creating debugging pod node-debugger-cluster-01-627e99ee-agentpool1-md-chfwd-694gg with container debugger on node cluster-01-627e99ee-agentpool1-md-chfwd.
- If you don't see a command prompt, try pressing enter.
- root [ / ]#
+ ```bash
+ "mynexusk8scluster-0b32128d-agentpool1-md-7h9t4"
+ "mynexusk8scluster-0b32128d-agentpool1-md-c6xbs"
+ "mynexusk8scluster-0b32128d-control-plane-qq5jm"
```
- This privileged container gives access to the node. Execute commands on the baremetal host machine by running `chroot /host` at the command line.
-
-3. When you are done with a debugging pod, enter the `exit` command to end the interactive shell session. After exiting the shell, make sure to delete the pod:
+4. Set the cluster node name to the VM_NAME variable.
```bash
- kubectl delete pod node-debugger-cluster-01-627e99ee-agentpool1-md-chfwd-694gg
+ VM_NAME="mynexusk8scluster-0b32128d-agentpool1-md-7h9t4"
```
-### Azure Arc for servers
+5. Run the following command to SSH into the cluster node.
-The `az ssh arc` command allows users to remotely access a cluster VM that has been connected to Azure Arc. This method is a secure way to SSH into the cluster node directly from the command line, while in connected mode. Once the cluster VM has been registered with Azure Arc, the `az ssh arc` command can be used to manage the machine remotely, making it a quick and efficient method for remote management.
+ ```azurecli-interactive
+ az ssh arc --subscription $SUBSCRIPTION_ID \
+ --resource-group $MANAGED_RESOURCE_GROUP \
+ --name $VM_NAME \
+ --local-user $ADMIN_USERNAME \
+ --private-key-file $SSH_PRIVATE_KEY_FILE
+ ```
-1. Set the required variables.
+## Access nodes using the Kubernetes API
- ```bash
- RESOURCE_GROUP="myResourceGroup"
- CLUSTER_NAME="myNexusK8sCluster"
- SUBSCRIPTION_ID="<Subscription ID>"
- USER_NAME="azureuser"
- SSH_PRIVATE_KEY_FILE="<vm_ssh_id_rsa>"
- MANAGED_RESOURCE_GROUP=$(az networkcloud kubernetescluster show -n $CLUSTER_NAME -g $RESOURCE_GROUP --subscription $SUBSCRIPTION_ID --output tsv --query managedResourceGroupConfiguration.name)
- ```
+This method requires the `kubectl debug` command. It's limited to containers and might miss wider system issues, unlike SSH (using `az ssh arc` or a direct IP address), which offers full node access and control.
-2. Get the available cluster node names.
+### Access to Kubernetes API via Azure Arc for Kubernetes
- ```azurecli
- az networkcloud kubernetescluster show --name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --subscription $SUBSCRIPTION_ID -o json | jq '.nodes[].name'
+
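As a hedged sketch, assuming the Arc-connected cluster resource shares the Nexus Kubernetes cluster's name and resource group (the variables set earlier), you can open a proxy channel to the Kubernetes API and then run `kubectl` through it:

```azurecli-interactive
# Open a proxy channel to the Arc-enabled Kubernetes API; the command runs until interrupted.
az connectedk8s proxy --name $CLUSTER_NAME --resource-group $RESOURCE_GROUP

# In a second shell, kubectl commands now reach the cluster through the proxy.
kubectl get nodes
```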
+### Access to cluster nodes via Azure Arc for Kubernetes
+
+Once you're connected to a cluster via Arc for Kubernetes, you can connect to an individual Kubernetes node by using the `kubectl debug` command to run a privileged container on the node.
+
+1. List the nodes in your Nexus Kubernetes cluster:
+
+ ```console
+ $> kubectl get nodes
+ NAME STATUS ROLES AGE VERSION
+ mynexusk8scluster-0b32128d-agentpool1-md-7h9t4 Ready <none> 125m v1.24.9
+ mynexusk8scluster-0b32128d-agentpool1-md-c6xbs Ready <none> 125m v1.24.9
+ mynexusk8scluster-0b32128d-control-plane-qq5jm Ready <none> 124m v1.24.9
```
-3. Sample output:
+2. Start a privileged container on your node and connect to it:
- ```bash
- "mynexusk8scluster-0b32128d-agentpool1-md-7h9t4"
- "mynexusk8scluster-0b32128d-agentpool1-md-c6xbs"
- "mynexusk8scluster-0b32128d-control-plane-qq5jm"
+ ```console
+ $> kubectl debug node/mynexusk8scluster-0b32128d-agentpool1-md-7h9t4 -it --image=mcr.microsoft.com/cbl-mariner/base/core:2.0
+ Creating debugging pod node-debugger-mynexusk8scluster-0b32128d-agentpool1-md-7h9t4-694gg with container debugger on node mynexusk8scluster-0b32128d-agentpool1-md-7h9t4.
+ If you don't see a command prompt, try pressing enter.
+ root [ / ]#
```
-4. Run the following command to SSH into the cluster node.
+ This privileged container gives access to the node. Execute commands on the cluster node by running `chroot /host` at the command line.
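   For example, once inside the debug pod you might run something like the following. This is an illustrative sketch; which services or logs you inspect depends on what you're troubleshooting:

   ```bash
   # Switch into the node's root filesystem.
   chroot /host

   # Illustrative checks on the node.
   systemctl status kubelet
   journalctl -u kubelet --no-pager | tail -n 50
   ```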
- ```azurecli
- az ssh arc --subscription $SUBSCRIPTION_ID \
- --resource-group $MANAGED_RESOURCE_GROUP \
- --name <VM Name> \
- --local-user $USER_NAME \
- --private-key-file $SSH_PRIVATE_KEY_FILE
+3. When you're done with a debugging pod, enter the `exit` command to end the interactive shell session. After exiting the shell, make sure to delete the pod:
+
+ ```bash
+ kubectl delete pod node-debugger-mynexusk8scluster-0b32128d-agentpool1-md-7h9t4-694gg
```
-### Direct access to cluster nodes
+## Create an interactive shell connection to a node using the IP address
+
+### Connect to the cluster node from Azure jumpbox
-Another option for securely connecting to an Azure Operator Nexus Kubernetes cluster node is to set up a direct access to the cluster's CNI network from Azure. Using this approach, you can SSH into the cluster nodes, also execute kubectl commands against the cluster using the `kubeconfig` file. Reach out to your network administrator to set up this direct connection from Azure to the cluster's CNI network.
+Another option for securely connecting to an Azure Operator Nexus Kubernetes cluster node is to set up direct access to the cluster's CNI network from an Azure jumpbox VM. Using this approach, you can SSH into the cluster nodes and also execute `kubectl` commands against the cluster using the `kubeconfig` file.
-## Disconnected mode access
+Reach out to your network administrator to set up a direct connection from Azure jumpbox VM to the cluster's CNI network.
-When the ExpressRoute is in a disconnected mode, you can't access the cluster's kube-api server using the `az connectedk8s proxy` CLI command. Similarly, the `az ssh` CLI command doesn't work for accessing the worker nodes, which can be crucial for troubleshooting or maintenance tasks.
+### Connect to the cluster node from on-premises jumpbox
-However, you can still ensure a secure and effective connection to your cluster. To do so, establish direct access to the cluster's CNI (Container Network Interface) from within your on-premises infrastructure. This direct access enables you to SSH into the cluster nodes, and lets you execute `kubectl` commands using the `kubeconfig` file.
+Establish direct access to the cluster's CNI (Container Network Interface) from within your on-premises jumpbox. This direct access enables you to SSH into the cluster nodes, and lets you execute `kubectl` commands using the `kubeconfig` file.
Reach out to your network administrator to set up this direct connection to the cluster's CNI network.
Before you can connect to the cluster nodes, you need to find the IP address of
2. Execute the following command to get the IP address of the nodes.
- ```azurecli
+ ```azurecli-interactive
az networkcloud kubernetescluster show --name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --subscription $SUBSCRIPTION_ID -o json | jq '.nodes[] | select(any(.networkAttachments[]; .networkAttachmentName == "defaultcni")) | {name: .name, ipv4Address: (.networkAttachments[] | select(.networkAttachmentName == "defaultcni").ipv4Address)}' ```
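With the `defaultcni` IP address from the output, you can SSH to the node from your jumpbox. A minimal sketch, assuming the administrator username `azureuser` and an illustrative node IP:

```bash
# Illustrative values; replace with the ipv4Address returned by the previous command
# and the SSH private key used when the cluster was created.
NODE_IP="10.4.0.11"
ssh -i <path-to-ssh-private-key> azureuser@"$NODE_IP"
```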
operator-nexus Quickstarts Tenant Workload Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/quickstarts-tenant-workload-prerequisites.md
You need to create various networks based on your workload needs. The following
- Determine the BGP peering info for each network, and whether the networks need to talk to each other. You should group networks that need to talk to each other into the same L3 isolation domain, because each L3 isolation domain can support multiple L3 networks. - The platform provides a proxy to allow your VM to reach other external endpoints. Creating a `cloudservicesnetwork` instance requires the endpoints to be proxied, so gather the list of endpoints. You can modify the list of endpoints after the network creation.
-## Create networks for tenant workloads
-
-The following sections explain the steps to create networks for tenant workloads (VMs and Kubernetes clusters).
-
-### Create isolation domains
-
-Isolation domains enable creation of layer 2 (L2) and layer 3 (L3) connectivity between network functions running on Azure Operator Nexus. This connectivity enables inter-rack and intra-rack communication between the workloads.
-You can create as many L2 and L3 isolation domains as needed.
-
-You should have the following information already:
--- The network fabric resource ID to create isolation domains.-- VLAN and subnet info for each L3 network.-- Which networks need to talk to each other. (Remember to put VLANs and subnets that need to talk to each other into the same L3 isolation domain.)-- BGP peering and network policy information for your L3 isolation domains.-- VLANs for all your L2 networks.-- VLANs for all your trunked networks.-- MTU values for your networks.-
-#### L2 isolation domain
--
-#### L3 isolation domain
+## Create isolation domains
+Isolation domains enable communication between workloads hosted in the same rack (intra-rack communication) or in different racks (inter-rack communication). You can find more details about creating isolation domains [here](./howto-configure-isolation-domain.md).
-### Create networks for tenant workloads
+## Create networks for tenant workloads
The following sections describe how to create these networks: - Layer 2 network - Layer 3 network - Trunked network-- Cloud services network
-#### Create an L2 network
+### Create an L2 network
Create an L2 network, if necessary, for your workloads. You can repeat the instructions for each required L2 network.
-Gather the resource ID of the L2 isolation domain that you [created](#l2-isolation-domain) to configure the VLAN for this network.
+Gather the resource ID of the L2 isolation domain that you created to configure the VLAN for this network.
-### [Azure CLI](#tab/azure-cli)
+#### [Azure CLI](#tab/azure-cli)
```azurecli-interactive az networkcloud l2network create --name "<YourL2NetworkName>" \
Gather the resource ID of the L2 isolation domain that you [created](#l2-isolati
--l2-isolation-domain-id "<YourL2IsolationDomainId>" ```
-### [Azure PowerShell](#tab/azure-powershell)
+#### [Azure PowerShell](#tab/azure-powershell)
```azurepowershell-interactive New-AzNetworkCloudL2Network -Name "<YourL2NetworkName>" `
New-AzNetworkCloudL2Network -Name "<YourL2NetworkName>" `
-#### Create an L3 network
+### Create an L3 network
Create an L3 network, if necessary, for your workloads. Repeat the instructions for each required L3 network. You need: -- The `resourceID` value of the L3 isolation domain that you [created](#l3-isolation-domain) to configure the VLAN for this network.
+- The `resourceID` value of the L3 isolation domain that you created to configure the VLAN for this network.
- The `ipv4-connected-prefix` value, which must match the `i-pv4-connected-prefix` value that's in the L3 isolation domain. - The `ipv6-connected-prefix` value, which must match the `i-pv6-connected-prefix` value that's in the L3 isolation domain. - The `ip-allocation-type` value, which can be `IPv4`, `IPv6`, or `DualStack` (default). - The `vlan` value, which must match what's in the L3 isolation domain.
-### [Azure CLI](#tab/azure-cli)
+#### [Azure CLI](#tab/azure-cli)
```azurecli-interactive az networkcloud l3network create --name "<YourL3NetworkName>" \
You need:
--vlan <YourNetworkVlan> ```
-### [Azure PowerShell](#tab/azure-powershell)
+#### [Azure PowerShell](#tab/azure-powershell)
```azurepowershell-interactive New-AzNetworkCloudL3Network -Name "<YourL3NetworkName>" `
New-AzNetworkCloudL3Network -Name "<YourL3NetworkName>" `
-#### Create a trunked network
+### Create a trunked network
Create a trunked network, if necessary, for your VM. Repeat the instructions for each required trunked network. Gather the `resourceId` values of the L2 and L3 isolation domains that you created earlier to configure the VLANs for this network. You can include as many L2 and L3 isolation domains as needed.
-### [Azure CLI](#tab/azure-cli)
+#### [Azure CLI](#tab/azure-cli)
```azurecli-interactive az networkcloud trunkednetwork create --name "<YourTrunkedNetworkName>" \
Gather the `resourceId` values of the L2 and L3 isolation domains that you creat
"<YourL3IsolationDomainId3>" \ --vlans <YourVlanList> ```
-### [Azure PowerShell](#tab/azure-powershell)
+
+#### [Azure PowerShell](#tab/azure-powershell)
```azurepowershell-interactive New-AzNetworkCloudTrunkedNetwork -Name "<YourTrunkedNetworkName>" `
New-AzNetworkCloudTrunkedNetwork -Name "<YourTrunkedNetworkName>" `
-#### Create a cloud services network
+## Create a cloud services network
To create an Operator Nexus virtual machine (VM) or Operator Nexus Kubernetes cluster, you must have a cloud services network. Without this network, you can't create a VM or cluster.
After setting up the cloud services network, you can use it to create a VM or cl
> [!NOTE] > To ensure that the VNF image can be pulled correctly, ensure the ACR URL is in the egress allow list of the cloud services network that you will use with your Operator Nexus virtual machine.
+>
+> In addition, if your ACR has dedicated data endpoints enabled, you will need to add all the new data endpoints to the egress allow list. To find all the possible endpoints for your ACR, follow the instructions [here](../container-registry/container-registry-dedicated-data-endpoints.md#dedicated-data-endpoints).
-#### Using the proxy to reach outside of the virtual machine
+### Use the proxy to reach outside of the virtual machine
After creating your Operator Nexus VM or Operator Nexus Kubernetes cluster with this cloud services network, you additionally need to set appropriate environment variables within the VM to use the tenant proxy and reach outside of the virtual machine. This tenant proxy is useful if you need to access resources outside of the virtual machine, such as managing packages or installing software.
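A minimal sketch of what those variables might look like inside the VM; the proxy address and port are placeholders, so use the proxy endpoint that applies to your cloud services network:

```bash
# Placeholder proxy address and port; substitute the values for your cloud services network.
export HTTP_PROXY="http://<cloud-services-network-proxy>:<port>"
export HTTPS_PROXY="http://<cloud-services-network-proxy>:<port>"
# Keep local traffic off the proxy.
export NO_PROXY="localhost,127.0.0.1"
```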
payment-hsm Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/overview.md
The Azure Payment HSM solution uses hardware from [Thales](https://cpl.thalesgro
## Azure payment HSM high-level architecture
-After a Payment HSM is provisioned, the HSM device is connected directly to a customer's virtual network, with full remote HSM management capabilities, through Thales payShield Manager and the payShield Trusted Management Device (TMD).
+After a payment HSM is provisioned, the HSM device is connected directly to a customer's virtual network, with full remote HSM management capabilities, through Thales payShield Manager and the payShield Trusted Management Device (TMD).
Two host network interfaces and one management network interface are created at HSM provision. :::image type="content" source="./media/high-level-architecture.png" lightbox="./media/high-level-architecture.png" alt-text="An architecture diagram, showing a provisioned Payment HSM and the network interfaces.":::
+With the Azure Payment HSM provisioning service, customers have native access to two host network interfaces and one management interface on the payment HSM. This screenshot displays the Azure Payment HSM resources within a resource group.
++ ## Why use Azure Payment HSM? Momentum is building as financial institutions move some or all of their payment applications to the cloud, requiring a migration from the legacy on-premises applications and HSMs to a cloud-based infrastructure that isn't generally under their direct control. Often it means a subscription service rather than perpetual ownership of physical equipment and software. Corporate initiatives for efficiency and a scaled-down physical presence are the drivers for this shift. Conversely, with cloud-native organizations, the adoption of cloud-first without any on-premises presence is their fundamental business model. Whatever the reason, end users of a cloud-based payment infrastructure expect reduced IT complexity, streamlined security compliance, and flexibility to scale their solution seamlessly as their business grows.
postgresql Concepts Networking Private https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-networking-private.md
Last updated 01/19/2024
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-This article describes connectivity and networking concepts for Azure Database for PostgreSQL flexible server.
+This article describes connectivity and networking concepts for Azure Database for PostgreSQL flexible server.
When you create an Azure Database for PostgreSQL flexible server instance, you must choose one of the following networking options: **Private access (VNet integration)** or **Public access (allowed IP addresses) and Private Endpoint**. This article describes the **Private access (VNet integration)** networking option.
An Azure virtual network contains a private IP address space that's configured f
Here are some concepts to be familiar with when you're using virtual networks where resources are [integrated into virtual network](../../virtual-network/virtual-network-for-azure-services.md) with Azure Database for PostgreSQL flexible server instances:
-* **Delegated subnet**. A virtual network contains subnets (subnetworks). Subnets enable you to segment your virtual network into smaller address spaces. Azure resources are deployed into specific subnets within a virtual network.
+* **Delegated subnet**. A virtual network contains subnets (subnetworks). Subnets enable you to segment your virtual network into smaller address spaces. Azure resources are deployed into specific subnets within a virtual network.
Your VNET integrated Azure Database for PostgreSQL flexible server instance must be in a subnet that's *delegated*. That is, only Azure Database for PostgreSQL flexible server instances can use that subnet. No other Azure resource types can be in the delegated subnet. You delegate a subnet by assigning its delegation property as `Microsoft.DBforPostgreSQL/flexibleServers`.
- The smallest CIDR range you can specify for the subnet is /28, which provides 16 IP addresses, however the first and last address in any network or subnet can't be assigned to any individual host. Azure reserves five IPs to be utilized internally by Azure networking, which include two IPs that can't be assigned to host, mentioned above. This leaves you 11 available IP addresses for /28 CIDR range, whereas a single Azure Database for PostgreSQL flexible server instance with High Availability features utilizes four addresses.
+ The smallest CIDR range you can specify for the subnet is /28, which provides 16 IP addresses; however, the first and last address in any network or subnet can't be assigned to any individual host. Azure reserves five IPs for internal use by Azure networking, including the two addresses mentioned above that can't be assigned to a host. This leaves you 11 available IP addresses in a /28 CIDR range, whereas a single Azure Database for PostgreSQL flexible server instance with High Availability features uses four addresses.
For Replication and Microsoft Entra connections, please make sure Route Tables don't affect traffic. A common pattern is to route all outbound traffic via an Azure Firewall or a custom on-premises network filtering appliance. If the subnet has a Route Table associated with the rule to route all traffic to a virtual appliance: * Add a rule with Destination Service Tag "AzureActiveDirectory" and next hop "Internet"
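As an illustrative sketch, a subnet delegated to Azure Database for PostgreSQL flexible server as described above could be created like this; the resource names and address prefix are placeholders:

```azurecli-interactive
az network vnet subnet create \
  --resource-group myResourceGroup \
  --vnet-name myVNet \
  --name pg-flex-subnet \
  --address-prefixes 10.0.1.0/28 \
  --delegations Microsoft.DBforPostgreSQL/flexibleServers
```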
Here are some concepts to be familiar with when you're using virtual networks wh
* **Network security group (NSG)**. Security rules in NSGs enable you to filter the type of network traffic that can flow in and out of virtual network subnets and network interfaces. For more information, see the [NSG overview](../../virtual-network/network-security-groups-overview.md). Application security groups (ASGs) make it easy to control Layer-4 security by using NSGs for flat networks. You can quickly:
-
+ - Join virtual machines to an ASG, or remove virtual machines from an ASG.
- - Dynamically apply rules to those virtual machines, or remove rules from those virtual machines.
-
- For more information, see the [ASG overview](../../virtual-network/application-security-groups.md).
-
- At this time, we don't support NSGs where an ASG is part of the rule with Azure Database for PostgreSQL flexible server. We currently advise using [IP-based source or destination filtering](../../virtual-network/network-security-groups-overview.md#security-rules) in an NSG.
+ - Dynamically apply rules to those virtual machines, or remove rules from those virtual machines.
+
+ For more information, see the [ASG overview](../../virtual-network/application-security-groups.md).
+
+ At this time, we don't support NSGs where an ASG is part of the rule with Azure Database for PostgreSQL flexible server. We currently advise using [IP-based source or destination filtering](../../virtual-network/network-security-groups-overview.md#security-rules) in an NSG.
> [!IMPORTANT] > High availability and other Features of Azure Database for PostgreSQL flexible server require ability to send/receive traffic to **destination port 5432** within Azure virtual network subnet where Azure Database for PostgreSQL flexible server is deployed, as well as to **Azure storage** for log archival. If you create **[Network Security Groups (NSG)](../../virtual-network/network-security-groups-overview.md)** to deny traffic flow to or from your Azure Database for PostgreSQL flexible server instance within the subnet where it's deployed, **make sure to allow traffic to destination port 5432** within the subnet, and also to Azure storage by using **[service tag](../../virtual-network/service-tags-overview.md) Azure Storage** as a destination. You can further [filter](../../virtual-network/tutorial-filter-network-traffic.md) this exception rule by adding your Azure region to the label like *us-east.storage*. Also, if you elect to use [Microsoft Entra authentication](concepts-azure-ad-authentication.md) to authenticate logins to your Azure Database for PostgreSQL flexible server instance, allow outbound traffic to Microsoft Entra ID using Microsoft Entra [service tag](../../virtual-network/service-tags-overview.md).
- > When setting up [Read Replicas across Azure regions](./concepts-read-replicas.md), Azure Database for PostgreSQL flexible server requires ability to send/receive traffic to **destination port 5432** for both primary and replica, as well as to **[Azure storage](../../virtual-network/service-tags-overview.md#available-service-tags)** in primary and replica regions from both primary and replica servers.
+ > When setting up [Read Replicas across Azure regions](./concepts-read-replicas.md), Azure Database for PostgreSQL flexible server requires ability to send/receive traffic to **destination port 5432** for both primary and replica, as well as to **[Azure storage](../../virtual-network/service-tags-overview.md#available-service-tags)** in primary and replica regions from both primary and replica servers.
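A hedged sketch of outbound NSG rules reflecting the guidance above; the rule names, priorities, and the storage port are assumptions, so adjust them to your environment:

```azurecli-interactive
# Allow traffic to the PostgreSQL port used within the delegated subnet.
az network nsg rule create --resource-group myResourceGroup --nsg-name myNSG \
  --name Allow-Postgres-5432 --priority 100 --direction Outbound --access Allow \
  --protocol Tcp --destination-port-ranges 5432 --destination-address-prefixes VirtualNetwork

# Allow outbound traffic to Azure Storage for log archival (port 443 assumed for HTTPS;
# the service tag can be region-scoped, for example Storage.EastUS).
az network nsg rule create --resource-group myResourceGroup --nsg-name myNSG \
  --name Allow-Storage --priority 110 --direction Outbound --access Allow \
  --protocol Tcp --destination-port-ranges 443 --destination-address-prefixes Storage
```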
-* **Private DNS zone integration**. Azure private DNS zone integration allows you to resolve the private DNS within the current virtual network or any in-region peered virtual network where the private DNS zone is linked.
+* **Private DNS zone integration**. Azure private DNS zone integration allows you to resolve the private DNS within the current virtual network or any in-region peered virtual network where the private DNS zone is linked.
### Using a private DNS zone
-[Azure Private DNS](../../dns/private-dns-overview.md) provides a reliable and secure DNS service for your virtual network. Azure Private DNS manages and resolves domain names in the virtual network without the need to configure a custom DNS solution.
+[Azure Private DNS](../../dns/private-dns-overview.md) provides a reliable and secure DNS service for your virtual network. Azure Private DNS manages and resolves domain names in the virtual network without the need to configure a custom DNS solution.
-When using private network access with Azure virtual network, providing the private DNS zone information is **mandatory** in order to be able to do DNS resolution. For new Azure Database for PostgreSQL flexible server instance creation using private network access, private DNS zones need to be used while configuring Azure Database for PostgreSQL flexible server instances with private access.
+When using private network access with Azure virtual network, providing the private DNS zone information is **mandatory** in order to be able to do DNS resolution. For new Azure Database for PostgreSQL flexible server instance creation using private network access, private DNS zones need to be used while configuring Azure Database for PostgreSQL flexible server instances with private access.
For new Azure Database for PostgreSQL flexible server instance creation using private network access with API, ARM, or Terraform, create private DNS zones and use them while configuring Azure Database for PostgreSQL flexible server instances with private access. See more information on [REST API specifications for Microsoft Azure](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/postgresql/resource-manager/Microsoft.DBforPostgreSQL/stable/2021-06-01/postgresql.json). If you use the [Azure portal](./how-to-manage-virtual-network-portal.md) or [Azure CLI](./how-to-manage-virtual-network-cli.md) for creating Azure Database for PostgreSQL flexible server instances, you can either provide a private DNS zone name that you had previously created in the same or a different subscription or a default private DNS zone is automatically created in your subscription. If you use an Azure API, an Azure Resource Manager template (ARM template), or Terraform, **create private DNS zones that end with `.postgres.database.azure.com`**. Use those zones while configuring Azure Database for PostgreSQL flexible server instances with private access. For example, use the form `[name1].[name2].postgres.database.azure.com` or `[name].postgres.database.azure.com`. If you choose to use the form `[name].postgres.database.azure.com`, the name **can't** be the name you use for one of your Azure Databases for PostgreSQL flexible server instances or an error message will be shown during provisioning. For more information, see the [private DNS zones overview](../../dns/private-dns-overview.md).
-Using Azure portal, API, CLI or ARM, you can also change private DNS Zone from the one you provided when creating your Azure Database for PostgreSQL flexible server instance to another private DNS zone that exists the same or different subscription.
+Using Azure portal, API, CLI or ARM, you can also change private DNS Zone from the one you provided when creating your Azure Database for PostgreSQL flexible server instance to another private DNS zone that exists the same or different subscription.
> [!IMPORTANT]
- > Ability to change private DNS Zone from the one you provided when creating your Azure Database for PostgreSQL flexible server instance to another private DNS zone is currently disabled for servers with High Availability feature enabled.
+ > Ability to change private DNS Zone from the one you provided when creating your Azure Database for PostgreSQL flexible server instance to another private DNS zone is currently disabled for servers with High Availability feature enabled.
-After you create a private DNS zone in Azure, you need to [link](../../dns/private-dns-virtual-network-links.md) a virtual network to it. Once linked, resources hosted in that virtual network can access the private DNS zone.
+After you create a private DNS zone in Azure, you need to [link](../../dns/private-dns-virtual-network-links.md) a virtual network to it. Once linked, resources hosted in that virtual network can access the private DNS zone.
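For example, a minimal sketch of creating such a zone and linking it to a virtual network; the zone, link, and resource names are placeholders:

```azurecli-interactive
# Create a private DNS zone whose name ends with .postgres.database.azure.com.
az network private-dns zone create \
  --resource-group myResourceGroup \
  --name myzone.postgres.database.azure.com

# Link the zone to the virtual network that hosts (or will host) the flexible server.
az network private-dns link vnet create \
  --resource-group myResourceGroup \
  --zone-name myzone.postgres.database.azure.com \
  --name my-vnet-link \
  --virtual-network myVNet \
  --registration-enabled false
```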
> [!IMPORTANT]
- > We no longer validate virtual network link presence on server creation for Azure Database for PostgreSQL flexible server with private networking. When creating server through the portal we provide customer choice to create link on server creation via checkbox *"Link Private DNS Zone your virtual network"* in the Azure portal.
+ > We no longer validate virtual network link presence on server creation for Azure Database for PostgreSQL flexible server with private networking. When creating a server through the portal, we provide you the choice to create the link during server creation via the checkbox *"Link Private DNS Zone your virtual network"* in the Azure portal.
[DNS private zones are resilient](../../dns/private-dns-overview.md) to regional outages because zone data is globally available. Resource records in a private zone are automatically replicated across regions. Azure Private DNS is an availability zone foundational, zone-redundant service. For more information, see [Azure services with availability zone support](../../reliability/availability-zones-service-support.md#azure-services-with-availability-zone-support). ### Integration with a custom DNS server
-If you're using a custom DNS server, you must use a DNS forwarder to resolve the FQDN of Azure Database for PostgreSQL flexible server. The forwarder IP address should be [168.63.129.16](../../virtual-network/what-is-ip-address-168-63-129-16.md).
+If you're using a custom DNS server, you must use a DNS forwarder to resolve the FQDN of Azure Database for PostgreSQL flexible server. The forwarder IP address should be [168.63.129.16](../../virtual-network/what-is-ip-address-168-63-129-16.md).
The custom DNS server should be inside the virtual network or reachable via the virtual network's DNS server setting. To learn more, see [Name resolution that uses your own DNS server](../../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server).
The custom DNS server should be inside the virtual network or reachable via the
Private DNS zone settings and virtual network peering are independent of each other. If you want to connect to the Azure Database for PostgreSQL flexible server instance from a client that's provisioned in another virtual network from the same region or a different region, you have to **link** the private DNS zone with the virtual network. For more information, see [Link the virtual network](../../dns/private-dns-getstarted-portal.md#link-the-virtual-network). > [!NOTE]
-> Only private DNS zone names that end with **'postgres.database.azure.com'** can be linked. Your DNS zone name cannot be the same as your Azure Database for PostgreSQL flexible server instance(s) otherwise name resolution will fail.
+> Only private DNS zone names that end with **'postgres.database.azure.com'** can be linked. Your DNS zone name cannot be the same as your Azure Database for PostgreSQL flexible server instance(s) otherwise name resolution will fail.
To map a server name to the DNS record, you can run the *nslookup* command in [Azure Cloud Shell](../../cloud-shell/overview.md) using Azure PowerShell or Bash, substituting the name of your server for the <server_name> parameter in the example below:
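A minimal example; when run from a machine that resolves through the linked private DNS zone, the answer returns the server's private IP:

```bash
# Replace <server_name> with the name of your flexible server instance.
nslookup <server_name>.postgres.database.azure.com
```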
There are three main patterns for connecting spoke virtual networks to each othe
Use [Azure Virtual Network Manager (AVNM)](../../virtual-network-manager/overview.md) to create new (and onboard existing) hub and spoke virtual network topologies for the central management of connectivity and security controls.
-### Communication with privately networked clients in different regions
+### Communication with privately networked clients in different regions
Customers frequently need to connect to clients in different Azure regions. More specifically, this question typically boils down to how to connect two VNETs (one containing Azure Database for PostgreSQL - Flexible Server and the other an application client) that are in different regions. There are multiple ways to achieve such connectivity, some of which are:
-* **[Global VNET peering](../../virtual-network/virtual-network-peering-overview.md)**. Most common methodology, as it's the easiest way to connect networks in different regions together. Global VNET peering creates a connection over the Azure backbone directly between the two peered VNETs. This provides best network throughput and lowest latencies for connectivity using this method. When VNETs are peered, Azure will also handle the routing automatically for you, these VNETs can communicate with all resources in the peered VNET, established on a VPN gateway.
-* **[VNET-to-VNET connection](../../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md)**. A VNET-to-VNET connection is essentially a VPN between the two different Azure locations. The VNET-to-VNET connection is established on a VPN gateway. This means your traffic incurs two additional traffic hops as compared to global VNET peering. There's also additional latency and lower bandwidth as compared to that method.
-* **[Communication via network appliance in Hub and Spoke architecture](#using-hub-and-spoke-private-networking-design)**.
-Instead of connecting spoke virtual networks directly to each other, you can use network appliances to forward traffic between spokes. Network appliances provide more network services like deep packet inspection and traffic segmentation or monitoring, but they can introduce latency and performance bottlenecks if they're not properly sized.
+* **[Global VNET peering](../../virtual-network/virtual-network-peering-overview.md)**. The most common methodology, as it's the easiest way to connect networks in different regions together. Global VNET peering creates a connection over the Azure backbone directly between the two peered VNETs. This provides the best network throughput and lowest latencies for this connectivity method. When VNETs are peered, Azure also handles the routing automatically for you, and these VNETs can communicate with all resources in the peered VNET.
+* **[VNET-to-VNET connection](../../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md)**. A VNET-to-VNET connection is essentially a VPN between the two different Azure locations. The VNET-to-VNET connection is established on a VPN gateway. This means your traffic incurs two additional traffic hops as compared to global VNET peering. There's also additional latency and lower bandwidth as compared to that method.
+* **[Communication via network appliance in Hub and Spoke architecture](#using-hub-and-spoke-private-networking-design)**.
+Instead of connecting spoke virtual networks directly to each other, you can use network appliances to forward traffic between spokes. Network appliances provide more network services like deep packet inspection and traffic segmentation or monitoring, but they can introduce latency and performance bottlenecks if they're not properly sized.
### Replication across Azure regions and virtual networks with private networking
Here are some limitations for working with virtual networks created via VNET int
* After an Azure Database for PostgreSQL flexible server instance is deployed to a virtual network and subnet, you can't move it to another virtual network or subnet. You can't move the virtual network into another resource group or subscription. * Subnet size (address spaces) can't be increased after resources exist in the subnet.
-* VNET injected resources can't interact with Private Link by default. If you with to use **[Private Link](../../private-link/private-link-overview.md) for private networking see [Azure Database for PostgreSQL flexible server networking with Private Link - Preview](./concepts-networking-private-link.md)**
+* VNET injected resources can't interact with Private Link by default. If you want to use **[Private Link](../../private-link/private-link-overview.md) for private networking, see [Azure Database for PostgreSQL flexible server networking with Private Link - Preview](./concepts-networking-private-link.md)**
> [!IMPORTANT]
-> Azure Resource Manager supports the ability to **lock** resources, as a security control. Resource locks are applied to the resource, and are effective across all users and roles. There are two types of resource lock: **CanNotDelete** and **ReadOnly**. These lock types can be applied either to a Private DNS zone, or to an individual record set. **Applying a lock of either type against Private DNS Zone or individual record set may interfere with the ability of Azure Database for PostgreSQL flexible server to update DNS records** and cause issues during important operations on DNS, such as High Availability failover from primary to secondary. For these reasons, please make sure you are **not** utilizing DNS private zone or record locks when utilizing High Availability features with Azure Database for PostgreSQL flexible server.
+> Azure Resource Manager supports the ability to **lock** resources, as a security control. Resource locks are applied to the resource, and are effective across all users and roles. There are two types of resource lock: **CanNotDelete** and **ReadOnly**. These lock types can be applied either to a Private DNS zone, or to an individual record set. **Applying a lock of either type against Private DNS Zone or individual record set may interfere with the ability of Azure Database for PostgreSQL flexible server to update DNS records** and cause issues during important operations on DNS, such as High Availability failover from primary to secondary. For these reasons, please make sure you are **not** utilizing DNS private zone or record locks when utilizing High Availability features with Azure Database for PostgreSQL flexible server.
## Host name
-Regardless of the networking option that you choose, we recommend that you always use an **FQDN** as host name when connecting to your Azure Database for PostgreSQL flexible server instance. The server's IP address isn't guaranteed to remain static. Using the FQDN helps you avoid making changes to your connection string.
+Regardless of the networking option that you choose, we recommend that you always use an **FQDN** as host name when connecting to your Azure Database for PostgreSQL flexible server instance. The server's IP address isn't guaranteed to remain static. Using the FQDN helps you avoid making changes to your connection string.
An example that uses an FQDN as a host name is `hostname = servername.postgres.database.azure.com`. Where possible, avoid using `hostname = 10.0.0.4` (a private address) or `hostname = 40.2.45.67` (a public address).
postgresql Concepts Networking Ssl Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-networking-ssl-tls.md
All incoming connections that use earlier versions of the TLS protocol, such as
[Certificate authentication](https://www.postgresql.org/docs/current/auth-cert.html) is performed using **SSL client certificates** for authentication. In this scenario, PostgreSQL server compares the CN (common name) attribute of the client certificate presented, against the requested database user. **Azure Database for PostgreSQL flexible server doesn't support SSL certificate based authentication at this time.**
+> [!NOTE]
+> Azure Database for PostgreSQL - Flexible server doesn't support [custom SSL\TLS certificates](https://www.postgresql.org/docs/current/ssl-tcp.html#SSL-CERTIFICATE-CREATION) at this time.
+ To determine your current TLS\SSL connection status, you can load the [sslinfo extension](concepts-extensions.md) and then call the `ssl_is_used()` function to determine if SSL is being used. The function returns t if the connection is using SSL, otherwise it returns f. You can also collect all the information about your Azure Database for PostgreSQL flexible server instance's SSL usage by process, client, and application by using the following query: ```sql
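-- A minimal sketch of such a query, joining pg_stat_ssl with pg_stat_activity
-- to show SSL usage per backend process, client, and application.
SELECT datname AS database_name,
       usename AS user_name,
       ssl,
       client_addr,
       application_name,
       backend_type
FROM pg_stat_ssl
JOIN pg_stat_activity ON pg_stat_ssl.pid = pg_stat_activity.pid
ORDER BY ssl;
```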
postgresql Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-security.md
Multiple layers of security are available to help protect the data on your Azure
Azure Database for PostgreSQL - Flexible Server encrypts data in two ways: -- **Data in transit**: Azure Database for PostgreSQL - Flexible Server encrypts in-transit data with Secure Sockets Layer and Transport Layer Security (SSL/TLS). Encryption is enforced by default. See this [guide](how-to-connect-tls-ssl.md) for more details. For better security, you might choose to enable [SCRAM authentication in Azure Database for PostgreSQL - Flexible Server](how-to-connect-scram.md).
+- **Data in transit**: Azure Database for PostgreSQL - Flexible Server encrypts in-transit data with Secure Sockets Layer and Transport Layer Security (SSL/TLS). Encryption is enforced by default. For more detailed information on connection security with SSL\TLS see this [documentation](../flexible-server/concepts-networking-ssl-tls.md). For better security, you might choose to enable [SCRAM authentication in Azure Database for PostgreSQL - Flexible Server](how-to-connect-scram.md).
- Although it's not recommended, if needed, you have an option to disable TLS\SSL for connections to Azure Database for PostgreSQL - Flexible Server by updating the `require_secure_transport` server parameter to OFF. You can also set TLS version by setting `ssl_max_protocol_version` server parameters.
+ Although it's strongly discouraged, if needed due to legacy client incompatibility, you have the option to disable TLS\SSL for connections to Azure Database for PostgreSQL - Flexible Server by updating the `require_secure_transport` server parameter to OFF. You can also set the TLS version by setting the `ssl_max_protocol_version` server parameter.
- **Data at rest**: For storage encryption, Azure Database for PostgreSQL - Flexible Server uses the FIPS 140-2 validated cryptographic module. Data is encrypted on disk, including backups and the temporary files created while queries are running. The service uses the AES 256-bit cipher included in Azure storage encryption, and the keys are system managed. This is similar to other at-rest encryption technologies, like transparent data encryption in SQL Server or Oracle databases. Storage encryption is always on and can't be disabled.
postgresql Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes.md
Title: Release notes
description: Release notes for Azure Database for PostgreSQL - Flexible Server. -
- - references_regions
- - build-2023
- - ignite-2023
Previously updated : 4/1/2024 Last updated : 4/4/2024 # Release notes - Azure Database for PostgreSQL - Flexible Server
This page provides latest news and updates regarding feature additions, engine v
* Public preview of [real-time language translations](generative-ai-azure-cognitive.md#language-translation) with azure_ai extension on Azure Database for PostgreSQL flexible server. * Public preview of [real-time machine learning predictions](generative-ai-azure-machine-learning.md) with azure_ai extension on Azure Database for PostgreSQL flexible server. * General availability of version 0.6.0 of [vector](how-to-use-pgvector.md) extension on Azure Database for PostgreSQL flexible server.
+* General availability of [Migration service](../../postgresql/migrate/migration-service/concepts-migration-service-postgresql.md) in Azure Database for PostgreSQL flexible server.
## Release: February 2024
-* Support for [minor versions](./concepts-supported-versions.md) 16.1, 15.5, 14.10, 13.13, 12.17, 11.22 <sup>$</sup>
+* Support for new [minor versions](./concepts-supported-versions.md) 16.1, 15.5, 14.10, 13.13, 12.17, 11.22 <sup>$</sup>
* General availability of [Major Version Upgrade logs](./concepts-major-version-upgrade.md#major-version-upgrade-logs) * General availability of [private endpoints](concepts-networking-private-link.md).
This page provides latest news and updates regarding feature additions, engine v
* Public preview of [long-term backup retention](concepts-backup-restore.md). ## Release: October 2023
-* Support for [minor versions](./concepts-supported-versions.md) 15.4, 14.9, 13.12, 12.16, 11.21 <sup>$</sup>
+* Support for new [minor versions](./concepts-supported-versions.md) 15.4, 14.9, 13.12, 12.16, 11.21 <sup>$</sup>
* General availability of [Grafana Monitoring Dashboard](https://grafana.com/grafana/dashboards/19556-azure-azure-postgresql-flexible-server-monitoring/) for Azure Database for PostgreSQL flexible server. ## Release: September 2023
search Search Api Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-api-preview.md
- ignite-2023 Previously updated : 02/22/2024 Last updated : 04/04/2024 # Preview features in Azure AI Search
Preview features are removed from this list if they're retired or transition to
|Feature&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Category | Description | Availability | |||-||
-| [**Integrated vectorization**](vector-search-integrated-vectorization.md) | Index, skillset, queries | Skills-driven data chunking and vectorization during indexing, and text-to-vector conversion during query execution. | [Create or Update Index (preview)](/rest/api/searchservice/indexes/create-or-update?view=rest-searchservice-2023-10-01-preview&preserve-view=true) for `vectorizer`, [Create or Update Skillset (preview)](/rest/api/searchservice/skillsets/create-or-update?view=rest-searchservice-2023-10-01-preview&preserve-view=true) for AzureOpenAIEmbedding skill and the data chunking properties of the Text Split skill, and [Search POST (preview)](/rest/api/searchservice/documents/search-post?view=rest-searchservice-2023-10-01-preview&preserve-view=true) for `vectorQueries`, 2023-10-01-Preview or later. |
+| [**Vector quantization**](vector-search-how-to-configure-compression-storage.md#option-3-configure-scalar-quantization) | Index | Compress vector index size in memory and on disk using built-in scalar quantization. | [Create or Update Index (preview)](/rest/api/searchservice/indexes/create-or-update?view=rest-searchservice-2024-03-01-preview&preserve-view=true) to add a `compressions` section to a vector profile. |
+| [**Narrow data types**](vector-search-how-to-configure-compression-storage.md#option-1-assign-narrow-data-types-to-vector-fields) | Index | Assign a smaller data type on vector fields, assuming incoming data is of that data type. | [Create or Update Index (preview)](/rest/api/searchservice/indexes/create-or-update?view=rest-searchservice-2024-03-01-preview&preserve-view=true) to specify a vector field definition. |
+| [**stored property**](vector-search-how-to-configure-compression-storage.md#option-2-set-the-stored-property-to-remove-retrievable-storage) | Index | Boolean that reduces storage of vector indexes by *not* storing retrievable vectors. | [Create or Update Index (preview)](/rest/api/searchservice/indexes/create-or-update?view=rest-searchservice-2024-03-01-preview&preserve-view=true) to set `stored` on a vector field. |
+| [**Vectorizers**](vector-search-integrated-vectorization.md) | Queries | Text-to-vector conversion during query execution. | [Create or Update Index (preview)](/rest/api/searchservice/indexes/create-or-update?view=rest-searchservice-2023-10-01-preview&preserve-view=true) to define a `vectorizer`. [Search POST (preview)](/rest/api/searchservice/documents/search-post?view=rest-searchservice-2023-10-01-preview&preserve-view=true) for `vectorQueries`, 2023-10-01-Preview or later. |
+| [**Integrated vectorization**](vector-search-integrated-vectorization.md) | Index, skillset | Skills-driven data chunking and embedding during indexing. | [Create or Update Skillset (preview)](/rest/api/searchservice/skillsets/create-or-update?view=rest-searchservice-2023-10-01-preview&preserve-view=true) for AzureOpenAIEmbedding skill and the data chunking properties of the Text Split skill. |
| [**Import and vectorize data**](search-get-started-portal-import-vectors.md) | Azure portal | A wizard that creates a full indexing pipeline that includes data chunking and vectorization. The wizard creates all of the objects and configuration settings. | Available on all search services, in all regions. | | [**AzureOpenAIEmbedding skill**](cognitive-search-skill-azure-openai-embedding.md) | AI enrichment (skills) | A new skill type that calls Azure OpenAI embedding model to generate embeddings during queries and indexing. | [Create or Update Skillset (preview)](/rest/api/searchservice/preview-api/create-or-update-skillset), 2023-10-01-Preview or later. Also available in the portal through the [Import and vectorize data wizard](search-get-started-portal-import-vectors.md). | | [**Text Split skill**](cognitive-search-skill-textsplit.md) | AI enrichment (skills) | Text Split has two new chunking-related properties in preview: `maximumPagesToTake`, `pageOverlapLength`. | [Create or Update Skillset (preview)](/rest/api/searchservice/preview-api/create-or-update-skillset), 2023-10-01-Preview or later. Also available in the portal through the [Import and vectorize data wizard](search-get-started-portal-import-vectors.md). |
search Search Capacity Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-capacity-planning.md
- ignite-2023 Previously updated : 01/10/2024 Last updated : 04/03/2024 # Estimate and manage capacity of a search service
With a rough estimate in hand, you might double that amount to budget for two in
Dedicated resources can accommodate larger sampling and processing times for more realistic estimates of index quantity, size, and query volumes during development. Some customers jump right in with a billable tier and then re-evaluate as the development project matures.
-1. [Review service limits at each tier](./search-limits-quotas-capacity.md#index-limits) to determine whether lower tiers can support the number of indexes you need. Across the Basic, S1, and S2 tiers, index limits are 15, 50, and 200, respectively. The Storage Optimized tier has a limit of 10 indexes because it's designed to support a low number of very large indexes.
+1. [Review service limits at each tier](./search-limits-quotas-capacity.md#service-limits) to determine whether lower tiers can support the number of indexes you need. Across the Basic, S1, and S2 tiers, index limits are 15, 50, and 200, respectively. The Storage Optimized tier has a limit of 10 indexes because it's designed to support a low number of very large indexes.
1. [Create a service at a billable tier](search-create-service-portal.md):
If your search service appears to be stalled in a provisioning state, check for
## Partition and replica combinations
-A Basic service can have exactly one partition and up to three replicas, for a maximum limit of three SUs. The only adjustable resource is replicas. You need a minimum of two replicas for high availability on queries.
+On search services created before April 3, 2024: Basic can have exactly one partition and up to three replicas, for a maximum limit of three SUs. The only adjustable resource is replicas.
+
+On search services created after April 3, 2024 in [supported regions](search-limits-quotas-capacity.md#supported-regions-with-higher-storage-limits): Basic can have up to three partitions and three replicas. The maximum SU limit is nine to support a full complement of partitions and replicas.
+
+For search services on any billable tier, regardless of creation date, you need a minimum of two replicas for high availability on queries.
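+
+If you prefer to script capacity changes, replica and partition counts can also be set through the Management REST API. The request below is a hedged sketch rather than a copy-paste sample: the subscription, resource group, service name, and token are placeholders, and you should confirm the current `api-version` for the Microsoft.Search management plane before relying on it. In this example, a newer Basic service is scaled to three replicas and three partitions, which consumes nine search units (replicas multiplied by partitions).
+
+```http
+PATCH https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.Search/searchServices/{service-name}?api-version=2023-11-01
+  Content-Type: application/json
+  Authorization: Bearer {bearer-token}
+
+{
+  "properties": {
+    "replicaCount": 3,
+    "partitionCount": 3
+  }
+}
+```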
All Standard and Storage Optimized search services can assume the following combinations of replicas and partitions, subject to the 36-SU limit allowed for these tiers.
search Search Create Service Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-create-service-portal.md
- ignite-2023
+ - references_regions
Previously updated : 03/05/2024 Last updated : 04/03/2024 # Create an Azure AI Search service in the portal
-[**Azure AI Search**](search-what-is-azure-search.md) adds vector and full text search as an information retrieval solution for the enterprise, and for traditional and generative AI scenarios.
+[**Azure AI Search**](search-what-is-azure-search.md) is a vector and full text information retrieval solution for the enterprise, and for traditional and generative AI scenarios.
If you have an Azure subscription, including a [trial subscription](https://azure.microsoft.com/pricing/free-trial/?WT.mc_id=A261C142F), you can create a search service for free. Free services have limitations, but you can complete all of the quickstarts and most tutorials, except for those featuring semantic ranking (it requires a billable service).
The following service properties are fixed for the lifetime of the service. Cons
+ Service name becomes part of the URL endpoint ([review tips for helpful service names](#name-the-service)). + [Tier](search-sku-tier.md) (Free, Basic, Standard, and so forth) determines the underlying physical hardware and billing. Some features are tier-constrained.
-+ [Service region](#choose-a-region) can determine the availability of certain scenarios. If you need high availability or [AI enrichment](cognitive-search-concept-intro.md), create the resource in a region that provides the feature.
++ [Service region](#choose-a-region) can determine the availability of certain scenarios and of higher storage limits. If you need availability zones, [AI enrichment](cognitive-search-concept-intro.md), or more storage, create the resource in a region that provides the feature.

## Subscribe (free or paid)
Service name requirements:
Azure AI Search is available in most regions, as listed in the [**Products available by region**](https://azure.microsoft.com/global-infrastructure/services/?products=search) page.
+We strongly recommend the following regions because they provide [more storage per partition](search-limits-quotas-capacity.md#service-limits), three to seven times more depending on the tier, at the same billing rate. Extra capacity applies to search services created after April 3, 2024:
+
+| Country | Regions providing extra capacity per partition |
+|||
+| **United States** | East US, East US 2, Central US, North Central US, South Central US, West US, West US 2, West US 3, West Central US |
+| **United Kingdom** | UK South, UK West |
+| **United Arab Emirates** | UAE North |
+| **Switzerland** | Switzerland West |
+| **Sweden** | Sweden Central |
+| **Poland** | Poland Central |
+| **Norway** | Norway East |
+| **Korea** | Korea Central, Korea South |
+| **Japan** | Japan East, Japan West |
+| **Italy** | Italy North |
+| **India** | Central India, Jio India West |
+| **France** | France Central |
+| **Europe** | North Europe |
+| **Canada** | Canada Central, Canada East |
+| **Brazil** | Brazil South |
+| **Asia Pacific** | East Asia, Southeast Asia |
+| **Australia** | Australia East, Australia Southeast |
+ If you use multiple Azure services, putting all of them in the same region minimizes or voids bandwidth charges. There are no charges for data exchanges among same-region services.
-Two notable exceptions might lead to provisioning Azure services in separate regions:
+Two notable exceptions might warrant provisioning Azure services in separate regions:
-+ [Outbound connections from Azure AI Search to Azure Storage](search-indexer-securing-resources.md). You might want Azure Storage in a different region if you're enabling a firewall.
++ [Outbound connections from Azure AI Search to Azure Storage](search-indexer-securing-resources.md). You might want search and storage in different regions if you're enabling a firewall.

+ Business continuity and disaster recovery (BCDR) requirements dictate creating multiple search services in [regional pairs](../availability-zones/cross-region-replication-azure.md#azure-paired-regions). For example, if you're operating in North America, you might choose East US and West US, or North Central US and South Central US, for each search service.
Basic and Standard are the most common choices for production workloads, but man
:::image type="content" source="media/search-create-service-portal/select-pricing-tier.png" lightbox="media/search-create-service-portal/select-pricing-tier.png" alt-text="Screenshot of Select a pricing tier page." border="true":::
+Search services created after April 3, 2024 have larger partitions and higher vector quotas.
+
Remember, a pricing tier can't be changed once the service is created. If you need a higher or lower tier, you should re-create the service.

## Create your service
An endpoint and key aren't needed for portal-based tasks. The portal is already
## Scale your service
-After a search service is provisioned, you can [scale it to meet your needs](search-limits-quotas-capacity.md). If you chose the Standard tier, you can scale the service in two dimensions: replicas and partitions. For the Basic tier, you can only add replicas. For the free service, scale isn't available.
+After a search service is provisioned, you can [scale it to meet your needs](search-limits-quotas-capacity.md). On a billable tier, you can scale the service in two dimensions: replicas and partitions. For the free service, scale up isn't available and replica and partition configuration isn't offered.
***Partitions*** allow your service to store and search through more documents.
search Search Features List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-features-list.md
- ignite-2023 Previously updated : 12/12/2023 Last updated : 04/04/2024 # Features of Azure AI Search
There's feature parity in all Azure public, private, and sovereign clouds, but s
| Vector filters | [Apply filters before or after query execution](vector-search-filters.md) for greater precision during information retrieval. | | Hybrid information retrieval | Search for concepts and keywords in a single [hybrid query request](hybrid-search-how-to-query.md). </p>[**Hybrid search**](hybrid-search-overview.md) consolidates vector and text search, with optional semantic ranking and relevance tuning for best results.| | Integrated data chunking and vectorization (preview) | Native data chunking through [Text Split skill](cognitive-search-skill-textsplit.md) and native vectorization through [vectorizers](vector-search-how-to-configure-vectorizer.md) and the [AzureOpenAIEmbeddingModel skill](cognitive-search-skill-azure-openai-embedding.md). </p>[**Integrated vectorization** (preview)](vector-search-integrated-vectorization.md) provides an end-to-end indexing pipeline from source files to queries.|
+| Integrated vector compression and quantization | Use [built-in scalar quantization](vector-search-how-to-configure-compression-storage.md) to reduce vector index size in memory and on disk. You can also forego storage of vectors you don't need, or assign narrow data types to vector fields for reduced storage requirements. |
| **Import and vectorize data** (preview)| A [new wizard](search-get-started-portal-import-vectors.md) in the Azure portal that creates a full indexing pipeline that includes data chunking and vectorization. The wizard creates all of the objects and configuration settings. |

## AI enrichment and knowledge mining
search Search Get Started Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-rest.md
Creating and loading the index are separate steps. In Azure AI Search, the index
The URI is extended to include the `docs` collections and `index` operation.

-- Paste in the following example to upload JSON documents to the search index. Then select **Send request**.
+1. Paste in the following example to upload JSON documents to the search index.
```http ### Upload documents
The URI is extended to include the `docs` collections and `index` operation.
} ```
-In a few seconds, you should see an HTTP 201 response in the adjacent pane. If you get a 207, at least one document failed to upload. If you get a 404, you have a syntax error in either the header or body of the request. Verify that you changed the endpoint to include `/docs/index`.
+1. Select **Send request**. In a few seconds, you should see an HTTP 201 response in the adjacent pane.
+
+ If you get a 207, at least one document failed to upload. If you get a 404, you have a syntax error in either the header or body of the request. Verify that you changed the endpoint to include `/docs/index`.
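+
+For reference, the response body from the upload reports a per-document status. The values below are illustrative only; your keys and status codes depend on the documents in your batch.
+
+```json
+{
+  "value": [
+    { "key": "1", "status": true, "errorMessage": null, "statusCode": 201 },
+    { "key": "2", "status": true, "errorMessage": null, "statusCode": 201 }
+  ]
+}
+```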
## Run queries
search Search How To Create Search Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-how-to-create-search-index.md
To minimize churn in the design process, the following table describes which ele
| [Analyzer](search-analyzers.md) | You can add and modify custom analyzers in the index. Regarding analyzer assignments on string fields, you can only modify `searchAnalyzer`. All other assignments and modifications require a rebuild. | | [Scoring profiles](index-add-scoring-profiles.md) | Yes | | [Suggesters](index-add-suggesters.md) | No |
-| [cross-origin remote scripting (CORS)](#corsoptions) | Yes |
+| [cross-origin resource sharing (CORS)](#corsoptions) | Yes |
| [Encryption](search-security-manage-encryption-keys.md) | Yes | ## Next steps
search Search Howto Large Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-large-index.md
- ignite-2023 Previously updated : 01/17/2024 Last updated : 04/01/2024 # Index large data sets in Azure AI Search
This article assumes familiarity with the [two basic approaches for importing da
This article complements [Tips for better performance](search-performance-tips.md), which offers best practices on index and query design. A well-designed index that includes only the fields and attributes you need is an important prerequisite for large-scale indexing.
+We recommend using a newer search service, created after April 3, 2024, for [higher storage per partition](search-limits-quotas-capacity.md#service-limits).
+
> [!NOTE]
> The strategies described in this article assume a single large data source. If your solution requires indexing from multiple data sources, see [Index multiple data sources in Azure AI Search](/samples/azure-samples/azure-search-dotnet-scale/multiple-data-sources/) for a recommended approach.
search Search Limits Quotas Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-limits-quotas-capacity.md
Previously updated : 02/21/2024 Last updated : 04/03/2024 - references_regions - ignite-2023
# Service limits in Azure AI Search
-Maximum limits on storage, workloads, and quantities of indexes and other objects depend on whether you [provision Azure AI Search](search-create-service-portal.md) at **Free**, **Basic**, **Standard**, or **Storage Optimized** pricing tiers.
+Maximum limits on storage, workloads, and quantities of indexes and other objects depend on whether you [create Azure AI Search](search-create-service-portal.md) at **Free**, **Basic**, **Standard**, or **Storage Optimized** pricing tiers.
+ **Free** is a multitenant shared service that comes with your Azure subscription. + **Basic** provides dedicated computing resources for production workloads at a smaller scale, but shares some networking infrastructure with other tenants.
-+ **Standard** runs on dedicated machines with more storage and processing capacity at every level. Standard comes in four levels: S1, S2, S3, and S3 HD. S3 High Density (S3 HD) is engineered for [multi-tenancy](search-modeling-multitenant-saas-applications.md) and large quantities of small indexes (three thousand indexes per service). S3 HD doesn't provide the [indexer feature](search-indexer-overview.md) and data ingestion must use APIs that push data from source to index.
++ **Standard** runs on dedicated machines with more storage and processing capacity at every level. Standard comes in four levels: S1, S2, S3, and S3 HD. S3 High Density (S3 HD) is engineered for [multi-tenancy](search-modeling-multitenant-saas-applications.md) and large quantities of small indexes (3,000 indexes per service). S3 HD doesn't provide the [indexer feature](search-indexer-overview.md) and data ingestion must use APIs that push data from source to index. + **Storage Optimized** runs on dedicated machines with more total storage, storage bandwidth, and memory than **Standard**. This tier targets large, slow-changing indexes. Storage Optimized comes in two levels: L1 and L2.
Maximum limits on storage, workloads, and quantities of indexes and other object
## Index limits

| Resource | Free | Basic&nbsp;<sup>1</sup> | S1 | S2 | S3 | S3&nbsp;HD | L1 | L2 |
-| -- | - | - | | | | | | |
+|-||--|-|-|-||-|-|
| Maximum indexes |3 |5 or 15 |50 |200 |200 |1000 per partition or 3000 per service |10 |10 |
| Maximum simple fields per index&nbsp;<sup>2</sup> |1000 |100 |1000 |1000 |1000 |1000 |1000 |1000 |
| Maximum dimensions per vector field | 3072 |3072 |3072 |3072 |3072 |3072 |3072 |3072 |
Maximum limits on storage, workloads, and quantities of indexes and other object
| Maximum [suggesters](/rest/api/searchservice/suggesters) per index |1 |1 |1 |1 |1 |1 |1 |1 |
| Maximum [scoring profiles](/rest/api/searchservice/add-scoring-profiles-to-a-search-index) per index |100 |100 |100 |100 |100 |100 |100 |100 |
| Maximum functions per profile |8 |8 |8 |8 |8 |8 |8 |8 |
+| Maximum index size&nbsp;<sup>4</sup> | N/A | N/A | N/A | 1.92 TB | 2.4 TB | 100 GB| N/A | N/A |
-<sup>1</sup> Basic services created before December 2017 have lower limits (5 instead of 15) on indexes. Basic tier is the only SKU with a lower limit of 100 fields per index.
+<sup>1</sup> Basic services created before December 2017 have lower limits (5 instead of 15) on indexes. Basic tier is the only tier with a lower limit of 100 fields per index.
<sup>2</sup> The upper limit on fields includes both first-level fields and nested subfields in a complex collection. For example, if an index contains 15 fields and has two complex collections with five subfields each, the field count of your index is 25. Indexes with a very large fields collection can be slow. [Limit fields and attributes](search-what-is-an-index.md#physical-structure-and-size) to just those you need, and run indexing and query test to ensure performance is acceptable.
-<sup>3</sup> An upper limit exists for elements because having a large number of them significantly increases the storage required for your index. An element of a complex collection is defined as a member of that collection. For example, assume a [Hotel document with a Rooms complex collection](search-howto-complex-data-types.md#indexing-complex-types), each room in the Rooms collection is considered an element. During indexing, the indexing engine can safely process a maximum of 3000 elements across the document as a whole. [This limit](search-api-migration.md#upgrade-to-2019-05-06) was introduced in `api-version=2019-05-06` and applies to complex collections only, and not to string collections or to complex fields.
+<sup>3</sup> An upper limit exists for elements because having a large number of them significantly increases the storage required for your index. An element of a complex collection is defined as a member of that collection. For example, assume a [Hotel document with a Rooms complex collection](search-howto-complex-data-types.md#indexing-complex-types), each room in the Rooms collection is considered an element. During indexing, the indexing engine can safely process a maximum of 3,000 elements across the document as a whole. [This limit](search-api-migration.md#upgrade-to-2019-05-06) was introduced in `api-version=2019-05-06` and applies to complex collections only, and not to string collections or to complex fields.
+
+<sup>4</sup> On most tiers, maximum index size is all available storage on your search service. For S2, S3, and S3 HD, the maximum size of any index is the number provided in the table. Applies to search services created after April 3, 2024.
You might find some variation in maximum limits if your service happens to be provisioned on a more powerful cluster. The limits here represent the common denominator. Indexes built to the above specifications are portable across equivalent service tiers in any region.
When estimating document size, remember to consider only those fields that can b
## Vector index size limits
-When you index documents with vector fields, Azure AI Search constructs internal vector indexes using the algorithm parameters you provide. The size of these vector indexes is restricted by the memory reserved for vector search for your service's tier (or SKU).
+When you index documents with vector fields, Azure AI Search constructs internal vector indexes using the algorithm parameters you provide. The size of these vector indexes is restricted by the memory reserved for vector search for your service's tier (or `SKU`).
The service enforces a vector index size quota **for every partition** in your search service. Each extra partition increases the available vector index size quota. This quota is a hard limit to ensure your service remains healthy, which means that further indexing attempts once the limit is exceeded results in failure. You can resume indexing once you free up available quota by either deleting some vector documents or by scaling up in partitions.
-The table describes the vector index size quota per partition across the service tiers (or SKU). For context, it includes:
+The table describes the vector index size quota per partition across the service tiers. For context, it includes:
+ [Partition storage limits](#service-limits) for each tier, repeated here for context.
+ Amount of each partition (in GB) available for vector indexes (created when you add vector fields to an index).
+ Approximate number of embeddings (floating point values) per partition.
-Use the [Get Service Statistics API (GET /servicestats)](/rest/api/searchservice/get-service-statistics) to retrieve your vector index size quota. See our [documentation on vector index size](vector-search-index-size.md) for more details.
+Use the [GET Service Statistics](/rest/api/searchservice/get-service-statistics) to retrieve your vector index size quota or review the **Indexes** page or **Usage** tab in the Azure portal.
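+
+For example, the following request returns service-level counters, including vector index usage against quota. The response shown here is a trimmed, illustrative sketch; the exact counter names and values come from your own service, so verify them against the actual response.
+
+```http
+GET https://[service-name].search.windows.net/servicestats?api-version=2023-11-01
+  api-key: [admin key]
+```
+
+```json
+{
+  "counters": {
+    "documentCount": { "usage": 5021, "quota": null },
+    "storageSize": { "usage": 4201230, "quota": 16106127360 },
+    "vectorIndexSize": { "usage": 1342176, "quota": 1073741824 }
+  }
+}
+```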
+
+Vector limits vary by service creation date and tier. To check the age of your search service and learn more about vector indexes, see [Vector index size and staying under limits](vector-search-index-size.md).
+
+### Vector limits on services created after April 3, 2024 in supported regions
-### Services created before July 1, 2023
+The highest vector limits are available on search services created after April 3, 2024 in a [supported region](#supported-regions-with-higher-storage-limits).
| Tier | Storage quota (GB) | Vector quota per partition (GB) | Approx. floats per partition (assuming 15% overhead) |
-| -- | | | - |
-| Basic | 2 | 0.5 | 115 million |
-| S1 | 25 | 1 | 235 million |
-| S2 | 100 | 6 | 1,400 million |
-| S3 | 200 | 12 | 2,800 million |
-| L1 | 1,000 | 12 | 2,800 million |
-| L2 | 2,000 | 36 | 8,400 million |
+|--|--|--||
+| Basic | 15 | 5 | 1,100 million |
+| S1 | 160 | 35 | 8,200 million |
+| S2 | 350 | 100 | 23,500 million |
+| S3 | 700 | 200 | 47,000 million |
+| L1 | 1,000 | 12 | 2,800 million |
+| L2 | 2,000 | 36 | 8,400 million |
-### Services created after July 1, 2023 in supported regions
+Notice that L1 and L2 limits are unchanged in the April 3 rollout.
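+
+As a rough check on the **Approx. floats per partition** column, each embedding value is a 4-byte `Float32`. For the Basic tier with a 5-GB vector quota, the arithmetic is approximately:
+
+$$ \frac{5 \times 1024^3 \text{ bytes}}{4 \text{ bytes per float}} \times (1 - 0.15) \approx 1.1 \text{ billion floats} $$
+
+This is an approximation only; the published figures are rounded and assume the 15% overhead noted in the column heading.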
-Azure AI Search is rolling out increased vector index size limits worldwide for **new search services**, but the team is building out infrastructure capacity in certain regions. Unfortunately, existing services can't be migrated to the new limits.
+### Vector limits on services created between July 1, 2023 and April 3, 2024
-The following regions **do not** support increased limits:
+The following limits apply to services created between July 1, 2023 and April 3, 2024, except in the following regions, which retain the original limits from before July 1, 2023:

+ Germany West Central
+ West India
+ Qatar Central
+All other regions have these limits:
+ | Tier | Storage quota (GB) | Vector quota per partition (GB) | Approx. floats per partition (assuming 15% overhead) |
-| -- | | | - |
+|--|--|--||
| Basic | 2 | 1 | 235 million |
| S1 | 25 | 3 | 700 million |
| S2 | 100 | 12 | 2,800 million |
The following regions **do not** support increased limits:
| L1 | 1,000 | 12 | 2,800 million |
| L2 | 2,000 | 36 | 8,400 million |
+### Vector limits on services created before July 1, 2023
+
+| Tier | Storage quota (GB) | Vector quota per partition (GB) | Approx. floats per partition (assuming 15% overhead) |
+|--|--|--||
+| Basic | 2 | 0.5 | 115 million |
+| S1 | 25 | 1 | 235 million |
+| S2 | 100 | 6 | 1,400 million |
+| S3 | 200 | 12 | 2,800 million |
+| L1 | 1,000 | 12 | 2,800 million |
+| L2 | 2,000 | 36 | 8,400 million |
+
## Indexer limits

Maximum running times exist to provide balance and stability to the service as a whole, but larger data sets might need more indexing time than the maximum allows. If an indexing job can't complete within the maximum time allowed, try running it on a schedule. The scheduler keeps track of indexing status. If a scheduled indexing job is interrupted for any reason, the indexer can pick up where it last left off at the next scheduled run.

| Resource | Free&nbsp;<sup>1</sup> | Basic&nbsp;<sup>2</sup>| S1 | S2 | S3 | S3&nbsp;HD&nbsp;<sup>3</sup>|L1 |L2 |
-| -- | -- | -- | | | | | | |
+|-||--|-|-|-||-|-|
| Maximum indexers |3 |5 or 15|50 |200 |200 |N/A |10 |10 |
| Maximum datasources |3 |5 or 15 |50 |200 |200 |N/A |10 |10 |
| Maximum skillsets <sup>4</sup> |3 |5 or 15 |50 |200 |200 |N/A |10 |10 |
Maximum running times exist to provide balance and stability to the service as a
Indexers can access other Azure resources [over private endpoints](search-indexer-howto-access-private.md) managed via the [shared private link resource API](/rest/api/searchmanagement/shared-private-link-resources). This section describes the limits associated with this capability.
-| Resource | Free | Basic | S1 | S2 | S3 | S3 HD | L1 | L2
-| | | | | | | | | |
+| Resource | Free | Basic | S1 | S2 | S3 | S3 HD | L1 | L2 |
+|-||-|-|-|-|-|-|-|
| Private endpoint indexer support | No | Yes | Yes | Yes | Yes | No | Yes | Yes |
| Private endpoint support for indexers with a skillset<sup>1</sup> | No | No | No | Yes | Yes | No | Yes | Yes |
| Maximum private endpoints | N/A | 10 or 30 | 100 | 400 | 400 | N/A | 20 | 20 |
Indexers can access other Azure resources [over private endpoints](search-indexe
Maximum number of synonym maps varies by tier. Each rule can have up to 20 expansions, where an expansion is an equivalent term. For example, given "cat", association with "kitty", "feline", and "felis" (the genus for cats) would count as 3 expansions.

| Resource | Free | Basic | S1 | S2 | S3 | S3-HD |L1 | L2 |
-| -- | --| |-|-|-|-||-|
+|-||-|-|-|-|-|-|-|
| Maximum synonym maps |3 |3|5 |10 |20 |20 | 10 | 10 |
| Maximum number of rules per map |5000 |20000|20000 |20000 |20000 |20000 | 20000 | 20000 |
Maximum number of synonym maps varies by tier. Each rule can have up to 20 expan
Maximum number of [index aliases](search-how-to-alias.md) varies by tier. In all tiers, the maximum number of aliases is double the maximum number of indexes allowed.

| Resource | Free | Basic | S1 | S2 | S3 | S3-HD |L1 | L2 |
-| -- | --| |-|-|-|-||-|
+|-||-|-|-|-|-|-|-|
| Maximum aliases |6 |10 or 30 |100 |400 |400 |2000 per partition or 6000 per service |20 |20 |

## Data limits (AI enrichment)
Static rate request limits for operations related to a service:
## API request limits

* Maximum of 16 MB per request <sup>1</sup>
-* Maximum 8 KB URL length
-* Maximum 1000 documents per batch of index uploads, merges, or deletes
+* Maximum 8-KB URL length
+* Maximum 1,000 documents per batch of index uploads, merges, or deletes
* Maximum 32 fields in $orderby clause
* Maximum 100,000 characters in a search clause
* The maximum number of clauses in `search` (expressions separated by AND or OR) is 1024
* Maximum search term size is 32,766 bytes (32 KB minus 2 bytes) of UTF-8 encoded text
-* Maximum search term size is 1000 characters for [prefix search](query-simple-syntax.md#prefix-queries) and [regex search](query-lucene-syntax.md#bkmk_regex)
+* Maximum search term size is 1,000 characters for [prefix search](query-simple-syntax.md#prefix-queries) and [regex search](query-lucene-syntax.md#bkmk_regex)
* [Wildcard search](query-lucene-syntax.md#bkmk_wildcard) and [Regular expression search](query-lucene-syntax.md#bkmk_regex) are limited to a maximum of 1000 states when processed by [Lucene](https://lucene.apache.org/core/7_0_1/core/org/apache/lucene/util/automaton/RegExp.html).

<sup>1</sup> In Azure AI Search, the body of a request is subject to an upper limit of 16 MB, imposing a practical limit on the contents of individual fields or collections that aren't otherwise constrained by theoretical limits (see [Supported data types](/rest/api/searchservice/supported-data-types) for more information about field composition and restrictions).
Limits on query size and composition exist because unbounded queries can destabi
## API response limits
-* Maximum 1000 documents returned per page of search results
+* Maximum 1,000 documents returned per page of search results
* Maximum 100 suggestions returned per Suggest API request

## API key limits
search Search Performance Tips https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-performance-tips.md
- ignite-2023 Previously updated : 02/15/2024 Last updated : 04/03/2024 # Tips for better performance in Azure AI Search
A service is overburdened when queries take too long or when the service starts
The tier of your search service and the number of replicas/partitions also have a large impact on performance. Each progressively higher tier provides faster CPUs and more memory, both of which have a positive impact on performance.
+### Tip: Create a new high capacity search service
+
+Basic and Standard services created [in supported regions](search-limits-quotas-capacity.md#supported-regions-with-higher-storage-limits) after April 3, 2024 have more storage per partition than older services. Before upgrading to a higher tier and a higher billable rate, revisit the [tier service limits](search-limits-quotas-capacity.md#service-limits) to see if the same tier on a newer service gives you the necessary storage.
+
### Tip: Upgrade to a Standard S2 tier

The Standard S1 search tier is often where customers start. A common pattern for S1 services is that indexes grow over time, which requires more partitions. More partitions lead to slower response times, so more replicas are added to handle the query load. As you can imagine, the cost of running an S1 service has now progressed to levels beyond the initial configuration.
search Search Sku Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-sku-manage-costs.md
- ignite-2023 Previously updated : 01/11/2024 Last updated : 04/01/2024 # Plan and manage costs of an Azure AI Search service
Cost management is built into the Azure infrastructure. Review [Billing and cost
Follow these guidelines to minimize costs of an Azure AI Search solution.
-1. If possible, create all resources in the same region, or in as few regions as possible, to minimize or eliminate bandwidth charges.
+1. If possible, create a search service [in a region that has more storage per partition](search-limits-quotas-capacity.md#supported-regions-with-higher-storage-limits). If you're using multiple Azure resources in your solution, create them in the same region, or in as few regions as possible, to minimize or eliminate bandwidth charges.
1. [Scale up](search-capacity-planning.md) for resource-intensive operations like indexing, and then readjust downwards for regular query workloads. If there are predictable patterns to your workloads, you might be able to synchronize scale up to coincide with the expected volume (you would need to write code to automate this).
search Search Sku Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-sku-tier.md
Previously updated : 11/21/2023 Last updated : 04/04/2024 - ignite-2023
In a few instances, the tier you choose determines the availability of [premium
Pricing - or the estimated monthly cost of running the service - is shown in the portal's **Select Pricing Tier** page. You should check [service pricing](https://azure.microsoft.com/pricing/details/search/) to learn about estimated costs.

> [!NOTE]
-> Looking for information about "Azure SKUs"? Start with [Azure pricing](https://azure.microsoft.com/pricing/) and then scroll down for links to per-service pricing pages.
+> Search services created after April 3, 2024 have larger partitions and higher vector quotas at almost every tier. For more information, see [service limits](search-limits-quotas-capacity.md#after-april-3-2024).
## Tier descriptions
Tiers include **Free**, **Basic**, **Standard**, and **Storage Optimized**. Stan
The most commonly used billable tiers include the following:
-+ **Basic** has just one partition but with the ability to meet SLA with its support for three replicas.
++ **Basic** has the ability to meet SLA with its support for three replicas.

+ **Standard (S1, S2, S3)** is the default. It gives you more flexibility in scaling for workloads. You can scale both partitions and replicas. With dedicated resources under your control, you can deploy larger projects, optimize performance, and increase capacity.
Tiers determine the maximum storage of the service itself, as well as the maxim
## Partition size and speed
-Tier pricing includes details about per-partition storage that ranges from 2 GB for Basic, up to 2 TB for Storage Optimized (L2) tiers. Other hardware characteristics, such as speed of operations, latency, and transfer rates, aren't published, but tiers that are designed for specific solution architectures are built on hardware that has the features to support those scenarios. For more information about partitions, see [Estimate and manage capacity](search-capacity-planning.md) and [Reliability in Azure AI Search](search-reliability.md).
+Tier pricing includes details about per-partition storage that ranges from 15 GB for Basic, up to 2 TB for Storage Optimized (L2) tiers. Other hardware characteristics, such as speed of operations, latency, and transfer rates, aren't published, but tiers that are designed for specific solution architectures are built on hardware that has the features to support those scenarios. For more information about partitions, see [Estimate and manage capacity](search-capacity-planning.md) and [Reliability in Azure AI Search](search-reliability.md).
## Billing rates
-Tiers have different billing rates, with higher rates for tiers that run on more expensive hardware or provide more expensive features. The per-tier billing rate can be found in the [Azure pricing pages](https://azure.microsoft.com/pricing/details/search/) for Azure AI Search.
+Tiers have different billing rates, with higher rates for tiers that run on more expensive hardware or provide more expensive features. The tier billing rate can be found in the [Azure pricing pages](https://azure.microsoft.com/pricing/details/search/) for Azure AI Search.
Once you create a service, the billing rate becomes both a *fixed cost* of running the service around the clock, and an *incremental cost* if you choose to add more capacity.
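+
+As a rough sketch of the arithmetic (with placeholder quantities, not published prices), the recurring charge scales with search units:
+
+$$ \text{monthly cost} \approx \text{hourly unit price} \times \underbrace{\text{replicas} \times \text{partitions}}_{\text{search units}} \times \text{hours in the month} $$
+
+For example, doubling the replica count on an otherwise unchanged service roughly doubles the recurring charge.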
search Vector Search How To Configure Compression Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-how-to-configure-compression-storage.md
+
+ Title: Reduce vector size
+
+description: Configure vector compression options and vector storage using narrow data types, built-in scalar quantization, and storage options.
+++++ Last updated : 04/03/2024++
+# Configure vector quantization and reduced storage for smaller vectors in Azure AI Search
+
+> [!IMPORTANT]
+> These features are in public preview under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). The [2024-03-01-Preview REST API](/rest/api/searchservice/operation-groups?view=rest-searchservice-2024-03-01-preview&preserve-view=true) provides the new data types, vector compression properties, and the `stored` property.
+
+This article describes vector quantization and other techniques for compressing vector indexes in Azure AI Search.
+
+## Evaluate the options
+
+As a first step, review your options for reducing the amount of storage used by vector fields. These options aren't mutually exclusive so you can use multiple options together.
+
+We recommend scalar quantization because it's the most effective option for most scenarios. Narrow data types (other than `Float16`) require extra effort to produce, and `stored` reduces disk storage, which isn't as costly as memory.
+
+| Approach | Why use this option |
+|-||
+| Assign smaller primitive data types to vector fields | Narrow data types, such as `Float16`, `Int16`, and `Int8`, consume less space in memory and on disk. This option is viable if your embedding model outputs vectors in a narrow data format. Or, if you have custom quantization logic that outputs small data. A more common use case is recasting the native `Float32` embeddings produced by most models to `Float16`. |
+| Eliminate optional storage of retrievable vectors | Vectors returned in a query response are stored separately from vectors used during query execution. If you don't need to return vectors, you can turn off retrievable storage, reducing overall per-field storage by up to 50 percent. |
+| Add scalar quantization | Use built-in scalar quantization to compress native `Float32` embeddings to `Int8`. This option reduces storage in memory and on disk with no degradation of query performance. Smaller data types like `Int8` produce vector indexes that are less content-rich than those with `Float32` embeddings. To offset information loss, built-in compression includes options for post-query processing using uncompressed embeddings and oversampling to return more relevant results. Reranking and oversampling are specific features of built-in scalar quantization of `Float32` or `Float16` fields and can't be used on embeddings that undergo custom quantization. |
+
+All of these options are defined on an empty index. To implement any of them, use the Azure portal, [2024-03-01-preview REST APIs](/rest/api/searchservice/indexes/create-or-update?view=rest-searchservice-2024-03-01-preview&preserve-view=true), or a beta Azure SDK package.
+
+After the index is defined, you can load and index documents as a separate step.
+
+## Option 1: Assign narrow data types to vector fields
+
+Vector fields store vector embeddings, which are represented as an array of numbers. When you specify a field type, you specify the underlying primitive data type used to hold each number within these arrays. The data type affects how much space each number takes up.
+
+Using preview APIs, you can assign narrow primitive data types to reduce the storage requirements of vector fields.
+
+1. Review the [data types for vector fields](/rest/api/searchservice/supported-data-types#edm-data-types-for-vector-fields):
+
+ + `Collection(Edm.Single)` 32-bit floating point (default)
+ + `Collection(Edm.Half)` 16-bit floating point
+ + `Collection(Edm.Int16)` 16-bit signed integer
+ + `Collection(Edm.SByte)` 8-bit signed integer
+
+ > [!NOTE]
+ > Binary data types aren't currently supported.
+
+1. Choose a data type that's valid for your embedding model's output, or for vectors that undergo custom quantization.
+
+ Most embedding models output 32-bit floating point numbers, but if you apply custom quantization, your output might be `Int16` or `Int8`. You can now define vector fields that accept the smaller format.
+
+ Text embedding models have a native output format of `Float32`, which maps to `Collection(Edm.Single)` in Azure AI Search. You can't map that output to `Int8` because casting from `float` to `int` is prohibited. However, you can cast from `Float32` to `Float16` (or `Collection(Edm.Half)`), and this is an easy way to use narrow data types without extra work.
+
+ The following table provides links to several embedding models that use the narrow data types.
+
+ | Embedding model | Native output | Valid types in Azure AI Search |
+ |||--|
+ | [text-embedding-ada-002](/azure/ai-services/openai/concepts/models#embeddings) | `Float32` | `Collection(Edm.Single)` or `Collection(Edm.Half)` |
+ | [text-embedding-3-small](/azure/ai-services/openai/concepts/models#embeddings) | `Float32` | `Collection(Edm.Single)` or `Collection(Edm.Half)` |
+ | [text-embedding-3-large](/azure/ai-services/openai/concepts/models#embeddings) | `Float32` | `Collection(Edm.Single)` or `Collection(Edm.Half)` |
+ | [Cohere V3 embedding models with int8 embedding_type](https://docs.cohere.com/reference/embed) | `Int8` | `Collection(Edm.SByte)` |
+
+1. Make sure you understand the tradeoffs of a narrow data type. `Collection(Edm.Half)` carries less information, which results in lower resolution. If your data is homogenous or dense, losing extra detail or nuance could lead to unacceptable results at query time because there's less detail available to distinguish nearby vectors.
+
+1. [Define and build the index](vector-search-how-to-create-index.md). You can use the Azure portal, [2024-03-01-preview](/rest/api/searchservice/indexes/create-or-update?view=rest-searchservice-2024-03-01-preview&preserve-view=true), or a beta Azure SDK package for this step.
+
+1. Check the results. Assuming the vector field is marked as retrievable, use [Search explorer](search-explorer.md) or [REST API](/rest/api/searchservice/documents/search-post?view=rest-searchservice-2024-03-01-preview&preserve-view=true) to verify the field content matches the data type. Be sure to use the correct `2024-03-01-preview` API version for the query, otherwise the new properties aren't shown.
+<!--
+ Evidence of choosing the wrong data type, for example choosing `int8` for a `float32` embedding, is a field that's indexed as an array of zeros. If you encounter this problem, start over. -->
+
+ To check vector index size, use the Azure portal or the [2024-03-01-preview REST API](/rest/api/searchservice/indexes/get-statistics?view=rest-searchservice-2024-03-01-preview&preserve-view=true).
+
+> [!NOTE]
+> The field's data type is used to create the physical data structure. If you want to change a data type later, either drop and rebuild the index, or create a second field with the new definition.
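+
+For illustration, a minimal vector field definition that stores 16-bit floating point embeddings might look like the following fragment. The field and profile names are placeholders; only the `type` value differs from a default `Collection(Edm.Single)` definition.
+
+```json
+{
+  "name": "DescriptionVector",
+  "type": "Collection(Edm.Half)",
+  "searchable": true,
+  "retrievable": true,
+  "dimensions": 1536,
+  "vectorSearchProfile": "my-vector-profile"
+}
+```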
+
+## Option 2: Set the `stored` property to remove retrievable storage
+
+The `stored` property is a new boolean on a vector field definition that determines whether storage is allocated for retrievable vector field content. If you don't need vector content in a query response, you can save up to 50 percent storage per field by setting `stored` to false.
+
+Because vectors aren't human readable, they're typically omitted in a query response that's rendered on a search page. However, if you're using vectors in downstream processing, such as passing query results to a model or process that consumes vector content, you should keep `stored` set to true and choose a different technique for minimizing vector size.
+
+The following example shows the fields collection of a search index. Set `stored` to false to permanently remove retrievable storage for the vector field.
+
+ ```http
+ PUT https://[service-name].search.windows.net/indexes/[index-name]?api-version=2024-03-01-preview
+   Content-Type: application/json
+   api-key: [admin key]
+
+ {
+   "name": "myindex",
+   "fields": [
+     {
+       "name": "myvector",
+       "type": "Collection(Edm.Single)",
+       "retrievable": false,
+       "stored": false,
+       "dimensions": 1536,
+       "vectorSearchProfile": "vectorProfile"
+     }
+   ]
+ }
+ ```
+
+**Key points**:
+
++ Applies to [vector fields](/rest/api/searchservice/supported-data-types#edm-data-types-for-vector-fields) only.
+
++ Affects storage on disk, not memory, and it has no effect on queries. Query execution uses a separate vector index that's unaffected by the `stored` property.
+
++ The `stored` property is set during index creation on vector fields and is irreversible. If you want retrievable content later, you must drop and rebuild the index, or create and load a new field that has the new attribution.
+
++ Defaults are `stored` set to true and `retrievable` set to false. In a default configuration, a retrievable copy is stored, but it's not automatically returned in results. When `stored` is true, you can toggle `retrievable` between true and false at any time without having to rebuild an index. When `stored` is false, `retrievable` must be false and can't be changed.
+
+## Option 3: Configure scalar quantization
+
+Built-in scalar quantization is recommended because it reduces memory and disk storage requirements, and it adds reranking and oversampling to offset the effects of a smaller index. Built-in scalar quantization can be applied to vector fields containing `Float32` or `Float16` data.
+
+To use built-in vector compression:
+
++ Add `vectorSearch.compressions` to a search index. The compression algorithm supported in this preview is *scalar quantization*.
++ Set optional properties to mitigate the effects of lossy indexing. Both `rerankWithOriginalVectors` and `defaultOversampling` provide optimizations during query execution.
++ Add `vectorSearch.profiles.compression` to a new vector profile.
++ Assign the new vector profile to a new vector field.
+
+### Add compression settings and set optional properties
+
+In an index definition created using [2024-03-01-preview REST API](/rest/api/searchservice/indexes/create-or-update?view=rest-searchservice-2024-03-01-preview&preserve-view=true), add a `compressions` section. Use the following JSON as a template.
+
+```json
+"compressions": [
+
+  {
+    "name": "my-scalar-quantization",
+    "kind": "scalarQuantization",
+    "rerankWithOriginalVectors": true,  (optional)
+    "defaultOversampling": 10.0,  (optional)
+    "scalarQuantizationParameters": {  (optional)
+      "quantizedDataType": "int8"  (optional)
+    }
+  }
+]
+```
+
+**Key points**:
+
++ `kind` must be set to `scalarQuantization`. This is the only quantization method supported at this time.
+
++ `rerankWithOriginalVectors` uses the original, uncompressed vectors to recalculate similarity and rerank the top results returned by the initial search query. The uncompressed vectors exist in the search index even if `stored` is false. This property is optional. Default is true.
+
++ `defaultOversampling` considers a broader set of potential results to offset the reduction in information from quantization. The formula for potential results consists of the `k` in the query, with an oversampling multiplier. For example, if the query specifies a `k` of 5, and oversampling is 20, then the query effectively requests 100 documents for use in reranking, using the original uncompressed vector for that purpose. Only the top `k` reranked results are returned. This property is optional. Default is 4.
+
++ `quantizedDataType` must be set to `int8`. This is the only primitive data type supported at this time. This property is optional. Default is `int8`.
+
+### Add a compression setting to a vector profile
+
+Scalar quantization is specified as a property in a *new* vector profile. Creation of a new vector profile is necessary for building compressed indexes in memory.
+
+Within the profile, you must use the Hierarchical Navigable Small Worlds (HNSW) algorithm. Built-in quantization isn't supported with exhaustive KNN.
+
+1. Create a new vector profile and add a compression property.
+
+ ```json
+ "profiles": [
+ {
+ "name": "my-vector-profile",
+ "compression": "my-scalar-quantization",
+ "algorithm": "my-hnsw-vector-config-1",
+ "vectorizer": null
+ }
+ ]
+ ```
+
+1. Assign a vector profile to a *new* vector field. Scalar quantization reduces content to `Int8`, so make sure your content is either `Float32` or `Float16`.
+
+ In Azure AI Search, the Entity Data Model (EDM) equivalents of `Float32` and `Float16` types are `Collection(Edm.Single)` and `Collection(Edm.Half)`, respectively.
+
+ ```json
+ {
+ "name": "DescriptionVector",
+ "type": "Collection(Edm.Single)",
+ "searchable": true,
+ "retrievable": true,
+ "dimensions": 1536,
+ "vectorSearchProfile": "my-vector-profile"
+ }
+ ```
+
+1. [Load the index](search-what-is-data-import.md) using indexers for pull model indexing, or APIs for push model indexing.
+
+### How scalar quantization works in Azure AI Search
+
+Scalar quantization reduces the resolution of each number within each vector embedding. Instead of describing each number as a 32-bit floating point number, it uses an 8-bit integer. It identifies a range of numbers (typically the 99th percentile minimum and maximum) and divides that range into a finite number of levels, or bins, assigning each bin an identifier. In 8-bit scalar quantization, there are 2^8, or 256, possible bins.
+
+Each component of the vector is mapped to the closest representative value within this set of quantization levels in a process akin to rounding a real number to the nearest integer. In the quantized 8-bit vector, the identifier number stands in place of the original value. After quantization, each vector is represented by an array of identifiers for the bins to which its components belong. These quantized vectors require far fewer bits to store compared to the original vector, thus reducing storage requirements and memory footprint.
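+
+As an illustration of the mapping (not necessarily the service's exact implementation), a value $x$ in the observed range $[x_{\min}, x_{\max}]$ is assigned to one of the 256 bins roughly as follows:
+
+$$ q = \operatorname{round}\left( \frac{x - x_{\min}}{x_{\max} - x_{\min}} \times 255 \right) $$
+
+The 8-bit identifier $q$ is stored in place of $x$. Dequantization recovers only the bin's representative value, which is why reranking with the original, uncompressed vectors can improve result quality.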
+
+## Example index with vectorCompression, data types, and stored property
+
+Here's a JSON example of a search index that specifies `vectorCompression` on a `Float32` field, a `Float16` data type on a second vector field, and a `stored` property set to false. It's a composite of the vector compression and storage features in this preview.
+
+```json
+### Create a new index
+POST {{baseUrl}}/indexes?api-version=2024-03-01-preview HTTP/1.1
+ Content-Type: application/json
+ api-key: {{apiKey}}
+
+{
+ "name": "hotels-vector-quickstart",
+ "fields": [
+ {
+ "name": "HotelId",
+ "type": "Edm.String",
+ "searchable": false,
+ "filterable": true,
+ "retrievable": true,
+ "sortable": false,
+ "facetable": false,
+ "key": true
+ },
+ {
+ "name": "HotelName",
+ "type": "Edm.String",
+ "searchable": true,
+ "filterable": false,
+ "retrievable": true,
+ "sortable": true,
+ "facetable": false
+ },
+ {
+ "name": "HotelNameVector",
+ "type": "Collection(Edm.Half)",
+ "searchable": true,
+ "retrievable": false,
+ "dimensions": 1536,
+ "stored": false,
+ "vectorSearchProfile": "my-vector-profile-no-compression"
+ },
+ {
+ "name": "Description",
+ "type": "Edm.String",
+ "searchable": true,
+ "filterable": false,
+ "retrievable": false,
+ "sortable": false,
+ "facetable": false,
+ "stored": false
+ },
+ {
+ "name": "DescriptionVector",
+ "type": "Collection(Edm.Single)",
+ "searchable": true,
+ "retrievable": true,
+ "dimensions": 1536,
+ "vectorSearchProfile": "my-vector-profile-with-compression"
+ },
+ {
+ "name": "Category",
+ "type": "Edm.String",
+ "searchable": true,
+ "filterable": true,
+ "retrievable": true,
+ "sortable": true,
+ "facetable": true
+ },
+ {
+ "name": "Tags",
+ "type": "Collection(Edm.String)",
+ "searchable": true,
+ "filterable": true,
+ "retrievable": true,
+ "sortable": false,
+ "facetable": true
+ },
+ {
+ "name": "Address",
+ "type": "Edm.ComplexType",
+ "fields": [
+ {
+ "name": "City", "type": "Edm.String",
+ "searchable": true, "filterable": true, "retrievable": true, "sortable": true, "facetable": true
+ },
+ {
+ "name": "StateProvince", "type": "Edm.String",
+ "searchable": true, "filterable": true, "retrievable": true, "sortable": true, "facetable": true
+ }
+ ]
+ },
+ {
+ "name": "Location",
+ "type": "Edm.GeographyPoint",
+ "searchable": false,
+ "filterable": true,
+ "retrievable": true,
+ "sortable": true,
+ "facetable": false
+ }
+ ],
+ "vectorSearch": {
+    "compressions": [
+      {
+        "name": "my-scalar-quantization",
+        "kind": "scalarQuantization",
+        "rerankWithOriginalVectors": true,
+        "defaultOversampling": 10.0,
+        "scalarQuantizationParameters": {
+          "quantizedDataType": "int8"
+        }
+      }
+    ],
+ "algorithms": [
+ {
+ "name": "my-hnsw-vector-config-1",
+ "kind": "hnsw",
+ "hnswParameters":
+ {
+ "m": 4,
+ "efConstruction": 400,
+ "efSearch": 500,
+ "metric": "cosine"
+ }
+ },
+ {
+ "name": "my-hnsw-vector-config-2",
+ "kind": "hnsw",
+ "hnswParameters":
+ {
+ "m": 4,
+ "metric": "euclidean"
+ }
+ },
+ {
+ "name": "my-eknn-vector-config",
+ "kind": "exhaustiveKnn",
+ "exhaustiveKnnParameters":
+ {
+ "metric": "cosine"
+ }
+ }
+ ],
+ "profiles": [
+ {
+ "name": "my-vector-profile-with-compression",
+ "compression": "my-scalar-quantization",
+ "algorithm": "my-hnsw-vector-config-1",
+ "vectorizer": null
+ },
+ {
+ "name": "my-vector-profile-no-compression",
+ "compression": null,
+ "algorithm": "my-eknn-vector-config",
+ "vectorizer": null
+ }
+ ]
+ },
+ "semantic": {
+ "configurations": [
+ {
+ "name": "my-semantic-config",
+ "prioritizedFields": {
+ "titleField": {
+ "fieldName": "HotelName"
+ },
+ "prioritizedContentFields": [
+ { "fieldName": "Description" }
+ ],
+ "prioritizedKeywordsFields": [
+ { "fieldName": "Tags" }
+ ]
+ }
+ }
+ ]
+ }
+}
+```
+
+## Query a quantized vector field using oversampling
+
+The query syntax in this example applies to vector fields using built-in scalar quantization. By default, vector fields that use scalar quantization also use `rerankWithOriginalVectors` and `defaultOversampling` to mitigate the effects of a smaller vector index. Those settings are [specified in the search index](#add-compression-settings-and-set-optional-properties).
+
+On the query, you can override the oversampling default value. For example, if `defaultOversampling` is 10.0, you can change it to something else in the query request.
+
+You can set the oversampling parameter even if the index doesn't explicitly have a `rerankWithOriginalVectors` or `defaultOversampling` definition. Providing `oversampling` at query time overrides the index settings for that query and executes the query with an effective `rerankWithOriginalVectors` as true.
+
+```http
+POST https://[service-name].search.windows.net/indexes/[index-name]/docs/search?api-version=2024-03-01-Preview  
+  Content-Type: application/json  
+  api-key: [admin key]  
+
+ {   
+ "vectorQueries": [
+ {   
+     "kind": "vector",   
+     "vector": [8, 2, 3, 4, 3, 5, 2, 1],   
+     "fields": "myvector",
+     "oversampling": 12.0,
+     "k": 5  
+ }
+ ]   
+ }
+```
+
+**Key points**:
+
++ Applies to vector fields that undergo vector compression, per the vector profile assignment.
+
++ Overrides the `defaultOversampling` value or introduces oversampling at query time, even if the index's compression configuration didn't specify oversampling or reranking options.
+
+## See also
+
++ [Get started with REST](search-get-started-rest.md)
++ [Supported data types](/rest/api/searchservice/supported-data-types)
++ [Search REST APIs](/rest/api/searchservice/)
search Vector Search Index Size https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-index-size.md
- ignite-2023 Previously updated : 02/14/2024 Last updated : 04/03/2024
-# Vector index size limits
+# Vector index size and staying under limits
-When you index documents with vector fields, Azure AI Search constructs internal vector indexes using the algorithm parameters that you specified for the field. Because Azure AI Search imposes limits on vector index size, it's important that you know how to retrieve metrics about the vector index size, and how to estimate the vector index size requirements for your use case.
+For each vector field, Azure AI Search constructs an internal vector index using the algorithm parameters specified on the field. Because Azure AI Search imposes quotas on vector index size, you should know how to estimate and monitor vector size to ensure you stay under the limits.
-## Key points about vector size limits
+> [!NOTE]
+> A note about terminology. Internally, the physical data structures of a search index include raw content (used for retrieval patterns requiring non-tokenized content), inverted indexes (used for searchable text fields), and vector indexes (used for searchable vector fields). This article explains the limits for the physical vector indexes that back each of your vector fields.
-The size of vector indexes is measured in bytes. The size constraints are based on memory reserved for vector search, but also have implications for storage at the service level. Size constraints vary by service tier (or SKU).
+> [!TIP]
+> [Vector quantization and storage configuration](vector-search-how-to-configure-compression-storage.md) is now in preview. You can use narrow data types, apply scalar quantization, and eliminate some storage requirements if you don't need the data.
-The service enforces a vector index size quota **based on the number of partitions** in your search service, where the quota per partition varies by tier and also by service creation date (see [Vector index size](search-limits-quotas-capacity.md#vector-index-size-limits) in service limits).
+## Key points about quota and vector index size
-Each extra partition that you add to your service increases the available vector index size quota. This quota is a hard limit to ensure your service remains healthy. It also means that if vector size exceeds this limit, any further indexing requests result in failure. You can resume indexing once you free up available quota by either deleting some vector documents or by scaling up in partitions.
++ Vector index size is measured in bytes.
-The following table shows vector quotas by partition, and by service if all partitions are in use. This table is for newer search services created *after July 1, 2023*. For more information, including limits for older search services and also limits on the approximate number of embeddings per partition, see [Search service limits](search-limits-quotas-capacity.md).
++ There's no quota at the search index level. Instead, vector quotas are enforced service-wide at the partition level. Quota varies by service tier (or `SKU`) and the service creation date, with newer services having much higher quotas per partition.
-| Tier | Partitions | Storage (GB) | Vector quota per partition (GB) | Vector quota per service (GB) |
-| -- | - | --|-- | -- |
-| Basic | 1 | 2 | 1 | 1 |
-| S1 | 12 | 25 | 3 | 36 |
-| S2 | 12 | 100 | 12 | 144 |
-| S3 | 12 | 200 | 36 | 432 |
-| L1 | 12 | 1,000 | 12 | 144 |
-| L2 | 12 | 2,000 | 36 | 432 |
+ + [Vector quota for services created after April 3, 2024](search-limits-quotas-capacity.md#vector-limits-on-services-created-after-april-3-2024-in-supported-regions)
+ + [Vector quota for services created between July 1, 2023 and April 3, 2024](search-limits-quotas-capacity.md#vector-limits-on-services-created-between-july-1-2023-and-april-3-2024)
+ + [Vector quota for services created before July 1, 2023](search-limits-quotas-capacity.md#vector-limits-on-services-created-before-july-1-2023)
-**Key points**:
++ Vector quotas are primarily designed around memory constraints. All searchable vector indexes must be loaded into memory. At the same time, there must also be sufficient memory for other runtime operations. Vector quotas exist to ensure that the overall system remains stable and balanced for all workloads.
-+ Storage quota is the physical storage available to the search service for all search data. Basic has one partition sized at 2 GB that must accommodate all of the data on the service. S1 can have up to 12 partitions, sized at 25 GB each, for a maximum limit of 300 GB for all search data.
++ Vector quotas are expressed in terms of physical storage, and physical storage is contingent upon partition size and quantity. Each tier offers increasingly powerful and larger partitions. Higher tiers and more partitions give you more vector quota to work with. In [service limits](search-limits-quotas-capacity.md#service-limits), maximum vector quotas are based on the maximum amount of physical space that all vector indexes can consume collectively, assuming all partitions are in use for that service.
-+ Vector quotas for are the vector indexes created for each vector field, and they're enforced at the partition level. On Basic, the sum total of all vector fields can't be more than 1 GB because Basic only has one partition. On S1, which can have up to 12 partitions, the quota for vector data is 3 GB if you allocate just one partition, or up to 36 GB if you allocate all 12 partitions. For more information about partitions and replicas, see [Estimate and manage capacity](search-capacity-planning.md).
+ For example, on new services in a supported region, the sum total of all vector indexes on a Basic search service can't be more than 15 GB because Basic can have up to three partitions (5-GB quota per partition). On S1, which can have up to 12 partitions, the quota for vector data is 35 GB per partition, or up to 420 GB if you allocate all 12 partitions.
-## How to determine service creation date
+## How to check partition size and quantity
-Services created after July 1, 2023 offer at least twice as much vector storage as older ones at the same tier.
+If you aren't sure what your search service limits are, here are two ways to get that information:
-1. In Azure portal, open the resource group.
++ In the Azure portal, in the search service **Overview** page, both the **Properties** tab and **Usage** tab show partition size and storage, and also vector quota and vector index size.
-1. On the left nav pane, under **Settings**, select **Deployments**.
++ In the Azure portal, in the **Scale** page, you can review the number and size of partitions.
+
+## How to check service creation date
+
+Newer services created after April 3, 2024 offer five to ten times more vector storage than older ones at the same tier billing rate. If your service is older, consider creating a new service and migrating your content.
+
+1. In Azure portal, open the resource group that contains your search service.
+
+1. On the leftmost pane, under **Settings**, select **Deployments**.
1. Locate your search service deployment. If there are many deployments, use the filter to look for "search".
Services created after July 1, 2023 offer at least twice as much vector storage
1. Now that you know the age of your search service, review the vector quota limits based on service creation:
- + [Before July 1, 2023](search-limits-quotas-capacity.md#services-created-before-july-1-2023)
- + [After July 1, 2023](search-limits-quotas-capacity.md#services-created-after-july-1-2023-in-supported-regions)
+ + [After April 3, 2024](search-limits-quotas-capacity.md#vector-limits-on-services-created-after-april-3-2024-in-supported-regions)
+ + [Between July 1, 2023 and April 3, 2024](search-limits-quotas-capacity.md#vector-limits-on-services-created-between-july-1-2023-and-april-3-2024)
+ + [Before July 1, 2023](search-limits-quotas-capacity.md#vector-limits-on-services-created-before-july-1-2023)
## How to get vector index size
A request for vector metrics is a data plane operation. You can use the Azure po
Usage information can be found on the **Overview** page's **Usage** tab. Portal pages refresh every few minutes so if you recently updated an index, wait a bit before checking results.
-The following screenshot is for a newer Standard 1 (S1) tier, configured for one partition and one replica. Vector index quota, measured in megabytes, refers to the internal vector indexes created for each vector field. Overall, indexes consume almost 460 megabytes of available storage, but the vector index component takes up just 93 megabytes of the 460 used on this search service.
+The following screenshot is for a Standard 1 (S1) tier, configured for one partition and one replica. Vector index quota, measured in megabytes, refers to the internal vector indexes created for each vector field. Overall, indexes consume almost 460 megabytes of available storage, but the vector index component takes up just 93 megabytes of the 460 used on this search service.
:::image type="content" source="media/vector-search-index-size/portal-vector-index-size.png" lightbox="media/vector-search-index-size/portal-vector-index-size.png" alt-text="Screenshot of the Overview page's usage tab showing vector index consumption against quota.":::
There are three major components that affect the size of your internal vector in
### Raw size of the data
-Each vector is an array of single-precision floating-point numbers, in a field of type `Collection(Edm.Single)`. Currently, only single-precision floats are supported.
+Each vector is usually an array of single-precision floating-point numbers, in a field of type `Collection(Edm.Single)`.
Vector data structures require storage, represented in the following calculation as the "raw size" of your data. Use this _raw size_ to estimate the vector index size requirements of your vector fields.
The storage size of one vector is determined by its dimensionality. Multiply the
`raw size = (number of documents) * (dimensions of vector field) * (size of data type)`
-For `Edm.Single`, the size of the data type is 4 bytes.
+| EDM data type | Size of the data type |
+||--|
+| `Collection(Edm.Single)` | 4 bytes |
+| `Collection(Edm.Half)` | 2 bytes |
+| `Collection(Edm.Int16)`| 2 bytes |
+| `Collection(Edm.SByte)`| 1 byte |
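+
+As a hypothetical illustration (the numbers are assumptions, not measurements), an index with 1,000,000 documents and a 1,536-dimension vector field of type `Collection(Edm.Single)` has a raw size of:
+
+`1,000,000 * 1,536 * 4 bytes = 6,144,000,000 bytes, or roughly 6.1 GB`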
### Memory overhead from the selected algorithm
Every approximate nearest neighbor (ANN) algorithm generates extra data structur
The memory overhead is lower for higher dimensions because the raw size of the vectors increases, while the extra data structures remain a fixed size since they store information on the connectivity within the graph. Consequently, the contribution of the extra data structures constitutes a smaller portion of the overall size.
-The memory overhead is higher for larger values of the HNSW parameter `m`, which determines the number of bi-directional links created for every new vector during index construction. This is because `m` contributes approximately 8 to 10 bytes per document multiplied by `m`.
+The memory overhead is higher for larger values of the HNSW parameter `m`, which determines the number of bi-directional links created for every new vector during index construction. This is because `m` contributes approximately 8 bytes to 10 bytes per document multiplied by `m`.
The following table summarizes the overhead percentages observed in internal tests:
These results demonstrate the relationship between dimensions, HNSW parameter `m
When a document with a vector field is either deleted or updated (updates are internally represented as a delete and insert operation), the underlying document is marked as deleted and skipped during subsequent queries. As new documents are indexed and the internal vector index grows, the system cleans up these deleted documents and reclaims the resources. This means you'll likely observe a lag between deleting documents and the underlying resources being freed.
-We refer to this as the "deleted documents ratio". Since the deleted documents ratio depends on the indexing characteristics of your service, there's no universal heuristic to estimate this parameter, and there's no API or script that returns the ratio in effect for your service. We observe that half of our customers have a deleted documents ratio less than 10%. If you tend to perform high-frequency deletions or updates, then you might observe a higher deleted documents ratio.
+We refer to this as the *deleted documents ratio*. Since the deleted documents ratio depends on the indexing characteristics of your service, there's no universal heuristic to estimate this parameter, and there's no API or script that returns the ratio in effect for your service. We observe that half of our customers have a deleted documents ratio less than 10%. If you tend to perform high-frequency deletions or updates, then you might observe a higher deleted documents ratio.
This is another factor impacting the size of your vector index. Unfortunately, we don't have a mechanism to surface your current deleted documents ratio.

## Estimating the total size for your data in memory
-To estimate the total size of your vector index, use the following calculation:
+Taking the previously described factors into account, to estimate the total size of your vector index, use the following calculation:
**`(raw_size) * (1 + algorithm_overhead (in percent)) * (1 + deleted_docs_ratio (in percent))`**
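
Continuing the hypothetical numbers above (illustrative assumptions only), a raw vector size of about 6.1 GB with a 10% algorithm overhead and a 10% deleted documents ratio estimates to:

`6.1 GB * 1.10 * 1.10 ≈ 7.4 GB`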
search Vector Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-store.md
The following screenshot shows an S1 service configured with one partition and o
Vector index limits and estimations are covered in [another article](vector-search-index-size.md), but two points to emphasize up front is that maximum storage varies by service tier, and also by when the search service was created. Newer same-tier services have significantly more capacity for vector indexes. For these reasons, take the following actions:
-+ [Check the deployment date of your search service](vector-search-index-size.md#how-to-determine-service-creation-date). If it was created before July 1, 2023, consider creating a new search service for greater capacity.
++ [Check the deployment date of your search service](vector-search-index-size.md#how-to-check-service-creation-date). If it was created before April 3, 2024, consider creating a new search service for greater capacity.
-+ [Choose a scalable tier](search-sku-tier.md) if you anticipate fluctuations in vector storage requirements. The Basic tier is fixed at one partition. Consider Standard 1 (S1) and above for more flexibility and faster performance.
++ [Choose a scalable tier](search-sku-tier.md) if you anticipate fluctuations in vector storage requirements. The Basic tier is fixed at one partition on older search services. Consider Standard 1 (S1) and above for more flexibility and faster performance, or create a new search service that uses higher limits and more partitions at every billable tier.

## Basic operations and interaction
search Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/whats-new.md
Title: What's new in Azure AI Search
-description: Announcements of new and enhanced features, including a service rename of Azure Search to Azure AI Search.
+description: Announcements of new and enhanced features, including a service rename of Azure Cognitive Search to Azure AI Search.
Previously updated : 02/21/2024 Last updated : 04/03/2024 - references_regions - ignite-2023
**Azure Cognitive Search is now Azure AI Search**. Learn about the latest updates to Azure AI Search functionality, docs, and samples.
+## April 2024
+
+| Item&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Type | Description |
+|--||--|
+| [**Storage expansion on Basic and Standard tiers**](search-limits-quotas-capacity.md#service-limits) | Feature | Basic now supports up to three partitions and three replicas. Basic and Standard (S1, S2, S3) tiers have significantly more storage per partition, at the same per-partition billing rate. Extra capacity is subject to [regional availability](search-limits-quotas-capacity.md#supported-regions-with-higher-storage-limits) and applies to new search services created after April 3, 2024. Currently, there's no in-place upgrade, so please create a new search service to get the extra storage. |
+| [**Increased quota for vectors**](search-limits-quotas-capacity.md#vector-limits-on-services-created-after-april-3-2024-in-supported-regions) | Feature | Vector quotas are also higher on new services created after April 3, 2024 in selected regions. |
+| [**Built-in vector quantization, narrow vector data types, and a new `stored` property (preview)**](vector-search-how-to-configure-compression-storage.md) | Feature | This preview adds support for larger vector workloads at a lower cost through three enhancements. First, *scalar quantization* reduces vector index size in memory and on disk. Second, [narrow data types](/rest/api/searchservice/supported-data-types) can be assigned to vector fields that can use them. Third, we added more flexible vector field storage options.|
+| [**2024-03-01-preview Search REST API**](/rest/api/searchservice/search-service-api-versions#2024-03-01-preview) | API | New preview version of the Search REST APIs for the new data types, vector compression properties, and storage options. |
+| [**2024-03-01-preview Management REST API**](/rest/api/searchmanagement/operation-groups?view=rest-searchmanagement-2024-03-01-preview&preserve-view=true) | API | New preview version of the Management REST APIs for control plane operations. |
+
## February 2024

| Item&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Type | Description |
|--||--|
-| **New dimension limits** | Feature | For vector fields, maximum dimension limits are now `3072`, up from `2048`. Next-generation embedding models support more dimensions. Limits have been increased accordingly. |
+| **New dimension limits** | Feature | For vector fields, maximum dimension limits are now `3072`, up from `2048`. |
## November 2023
| Item&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Type | Description |
|--||--|
| [**"Chat with your data" solution accelerator**](https://github.com/Azure-Samples/chat-with-your-data-solution-accelerator) | Sample | End-to-end RAG pattern that uses Azure AI Search as a retriever. It provides indexing, data chunking, orchestration and chat based on Azure OpenAI GPT. |
-| [**Exhaustive K-Nearest Neighbors (KNN)**](vector-search-overview.md#eknn) | Feature | Exhaustive K-Nearest Neighbors (KNN) is a new scoring algorithm for similarity search in vector space. It performs an exhaustive search for the nearest neighbors, useful for situations where high recall is more important than query performance. Available in the 2023-10-01-Preview REST API only. |
+| [**Exhaustive K-Nearest Neighbors (KNN)**](vector-search-overview.md#eknn) | Feature | Exhaustive K-Nearest Neighbor (KNN) is a new scoring algorithm for similarity search in vector space. It performs an exhaustive search for the nearest neighbors, useful for situations where high recall is more important than query performance. Available in the 2023-10-01-Preview REST API only. |
| [**Prefilters in vector search**](vector-search-how-to-query.md) | Feature | Evaluates filter criteria before query execution, reducing the amount of content that needs to be searched. Available in the 2023-10-01-Preview REST API only, through a new `vectorFilterMode` property on the query that can be set to `preFilter` (default) or `postFilter`, depending on your requirements. |
| [**2023-10-01-Preview Search REST API**](/rest/api/searchservice/search-service-api-versions#2023-10-01-Preview) | API | New preview version of the Search REST APIs that changes the definition for [vector fields](vector-search-how-to-create-index.md) and [vector queries](vector-search-how-to-query.md). This API version introduces breaking changes from **2023-07-01-Preview**, otherwise it's inclusive of all previous preview features. We recommend [creating new indexes](vector-search-how-to-create-index.md) for **2023-10-01-Preview**. You might encounter an HTTP 400 on some features on a migrated index, even if you migrated correctly.|
| Item&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Type | Description |
|--||--|
| [**Azure RBAC (role-based access control)**](search-security-rbac.md) | Feature | Announcing general availability. |
-| [**2022-09-01 Management REST API**](/rest/api/searchmanagement) | API | New stable version of the Management REST APIs, with support for configuring search to use Azure RBAC. The **Az.Search** module of Azure PowerShell and **Az search** module of the Azure CLI are updated to support search service authentication options. You can also use the [**Terraform provider**](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/search_service) to configure authentication options (see this [Terraform quickstart](search-get-started-terraform.md) for details). |
+| [**2022-09-01 Management REST API**](/rest/api/searchmanagement) | API | New stable version of the Management REST APIs, with support for configuring search to use Azure roles. The **Az.Search** module of Azure PowerShell and **Az search** module of the Azure CLI are updated to support search service authentication options. You can also use the [**Terraform provider**](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/search_service) to configure authentication options (see this [Terraform quickstart](search-get-started-terraform.md) for details). |
## April 2023
sentinel Automate Incident Handling With Automation Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automate-incident-handling-with-automation-rules.md
The following table shows the different possible scenarios that will cause an au
| Trigger type | Events that cause the rule to run |
| | |
-| **When incident is created** | <li>A new incident is created by an analytics rule.<li>An incident is ingested from Microsoft Defender XDR.<li>A new incident is created manually. |
+| **When incident is created** | **Unified security operations platform in Microsoft Defender:**<li>A new incident is created in the Microsoft Defender portal.<br><br>**Microsoft Sentinel not onboarded to unified platform:**<li>A new incident is created by an analytics rule.<li>An incident is ingested from Microsoft Defender XDR.<li>A new incident is created manually. |
| **When incident is updated** | <li>An incident's status is changed (closed/reopened/triaged).<li>An incident's owner is assigned or changed.<li>An incident's severity is raised or lowered.<li>Alerts are added to an incident.<li>Comments, tags, or tactics are added to an incident. |
-| **When alert is created** | <li>An alert is created by an analytics rule. |
+| **When alert is created** | <li>An alert is created by a Microsoft Sentinel **Scheduled** or **NRT** analytics rule. |
#### Incident-based or alert-based automation?
For most use cases, **incident-triggered automation** is the preferable approach
For these reasons, it makes more sense to build your automation around incidents. So the most appropriate way to create playbooks is to base them on the Microsoft Sentinel incident trigger in Azure Logic Apps.
-The main reason to use **alert-triggered automation** is for responding to alerts generated by analytics rules that *do not create incidents* (that is, where incident creation has been *disabled* in the **Incident settings** tab of the [analytics rule wizard](detect-threats-custom.md#configure-the-incident-creation-settings)). A SOC might decide to do this if it wants to use its own logic to determine if and how incidents are created from alerts, as well as if and how alerts are grouped into incidents. For example:
+The main reason to use **alert-triggered automation** is for responding to alerts generated by analytics rules that *do not create incidents* (that is, where incident creation has been *disabled* in the **Incident settings** tab of the [analytics rule wizard](detect-threats-custom.md#configure-the-incident-creation-settings)).
+
+This reason is especially relevant when your Microsoft Sentinel workspace is onboarded to the unified security operations platform, as all incident creation happens in Microsoft Defender XDR, and therefore the incident creation rules in Microsoft Sentinel *must be disabled*.
+
+Even without being onboarded to the unified portal, you might anyway decide to use alert-triggered automation if you want to use other external logic to determine if and how incidents are created from alerts, as well as if and how alerts are grouped into incidents. For example:
- A playbook can be triggered by an alert that doesnΓÇÖt have an associated incident, enrich the alert with information from other sources, and based on some external logic decide whether to create an incident or not.
The main reason to use **alert-triggered automation** is for responding to alert
- A playbook can be triggered by an alert and send the alert to an external ticketing system for incident creation and management, creating a new ticket for each alert.

> [!NOTE]
-> - Alert-triggered automation is available only for [alerts](detect-threats-built-in.md) created by **Scheduled** analytics rules. Alerts created by **Microsoft Security** analytics rules are not supported.
+> - Alert-triggered automation is available only for alerts created by [**Scheduled** and **NRT** analytics rules](detect-threats-built-in.md). Alerts created by **Microsoft Security** analytics rules are not supported.
+>
+> - Similarly, alert-triggered automation for alerts created by Microsoft Defender XDR is not available in the unified security operations platform in the Microsoft Defender portal.
>
-> - Alert-triggered automation is not currently available in the unified security operations platform in the Microsoft Defender portal.
+> - For more information, see [Automation with the unified security operations platform](automation.md#automation-with-the-unified-security-operations-platform).
### Conditions
You can [create and manage automation rules](create-manage-use-automation-rules.
In the **Automation** page, you see all the rules that are defined on the workspace, along with their status (Enabled/Disabled) and which analytics rules they are applied to.
- When you need an automation rule that will apply to many analytics rules, create it directly in the **Automation** page.
+ When you need an automation rule that will apply to incidents from Microsoft Defender XDR, or from many analytics rules in Microsoft Sentinel, create it directly in the **Automation** page.
- **Analytics rule wizard**
- In the **Automated response** tab of the analytics rule wizard, under **Automation rules**, you can view, edit, and create automation rules that apply to the particular analytics rule being created or edited in the wizard.
+ In the **Automated response** tab of the Microsoft Sentinel analytics rule wizard, under **Automation rules**, you can view, edit, and create automation rules that apply to the particular analytics rule being created or edited in the wizard.
You'll notice that when you create an automation rule from here, the **Create new automation rule** panel shows the **analytics rule** condition as unavailable, because this rule is already set to apply only to the analytics rule you're editing in the wizard. All the other configuration options are still available to you.
sentinel Create Manage Use Automation Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/create-manage-use-automation-rules.md
The following table shows the different possible scenarios that will cause an au
| Trigger type | Events that cause the rule to run | | | |
-| **When incident is created** | <li>A new incident is created by an analytics rule.<li>An incident is ingested from Microsoft Defender XDR.<li>A new incident is created manually. |
-| **When incident is updated**<br> | <li>An incident's status is changed (closed/reopened/triaged).<li>An incident's owner is assigned or changed.<li>An incident's severity is raised or lowered.<li>Alerts are added to an incident.<li>Comments, tags, or tactics are added to an incident. |
-| **When alert is created**<br> | <li>An alert is created by an analytics rule. |
+| **When incident is created** | **Unified security operations platform in Microsoft Defender:**<li>A new incident is created in the Microsoft Defender portal.<br><br>**Microsoft Sentinel not onboarded to unified platform:**<li>A new incident is created by an analytics rule.<li>An incident is ingested from Microsoft Defender XDR.<li>A new incident is created manually. |
+| **When incident is updated** | <li>An incident's status is changed (closed/reopened/triaged).<li>An incident's owner is assigned or changed.<li>An incident's severity is raised or lowered.<li>Alerts are added to an incident.<li>Comments, tags, or tactics are added to an incident. |
+| **When alert is created** | <li>An alert is created by a Microsoft Sentinel **Scheduled** or **NRT** analytics rule. |
## Create your automation rule
Use the options in the **Conditions** area to define conditions for your automat
| - **Tactics** | - Contains/Does not contain<br>- Added |
| - **Alert product names**<br>- **Custom details value**<br>- **Analytic rule name** | - Contains/Does not contain |
+ #### Conditions available with the alert trigger
+
+ The only condition that can be evaluated by rules based on the alert creation trigger is which Microsoft Sentinel analytics rule created the alert.
+
+ Automation rules based on the alert trigger will therefore only run on alerts created by Microsoft Sentinel.
+ 1. Enter a value in the field on the right. Depending on the property you chose, this might be either a text box or a drop-down in which you select from a closed list of values. You might also be able to add several values by selecting the dice icon to the right of the text box.

 :::image type="content" source="media/create-manage-use-automation-rules/add-values-to-condition.png" alt-text="Screenshot of adding values to your condition in automation rules.":::
sentinel Citrix Security Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/citrix-security-analytics.md
CitrixAnalytics_userProfile_CL
| count
```

## Prerequisites

To integrate with CITRIX SECURITY ANALYTICS make sure you have:

- **Licensing**: Entitlements to Citrix Security Analytics in Citrix Cloud. Please review the [Citrix Tool License Agreement](https://aka.ms/sentinel-citrixanalyticslicense-readme).

## Vendor installation instructions

To get access to this capability and the configuration steps on Citrix Analytics, please visit [Connect Citrix to Microsoft Sentinel](https://aka.ms/Sentinel-Citrix-Connector).

## Next steps
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/citrix.citrix_analytics_for_security_mss?tab=Overview) in the Azure Marketplace.
+For more information, go to the [related solution](https://docs.citrix.com/en-us/security-analytics/siem-integration/sentinel-workbook).
sentinel Nxlog Aix Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/nxlog-aix-audit.md
The [NXLog AIX Audit](https://docs.nxlog.co/refman/current/im/aixaudit.html) dat
| | |
| **Log Analytics table(s)** | AIX_Audit_CL<br/> |
| **Data collection rules support** | Not currently supported |
-| **Supported by** | [NXLog](https://nxlog.co/support-tickets/add/support-ticket) |
+| **Supported by** | [NXLog](https://nxlog.co/getting-started-with-nxlog-support-service-desk) |
## Query samples
sentinel Nxlog Bsm Macos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/nxlog-bsm-macos.md
The [NXLog BSM](https://docs.nxlog.co/refman/current/im/bsm.html) macOS data con
## Connector attributes
-| Connector attribute | Description |
+| Connector attribute | Description |
| | |
| **Log Analytics table(s)** | BSMmacOS_CL<br/> |
| **Data collection rules support** | Not currently supported |
-| **Supported by** | [NXLog](https://nxlog.co/support-tickets/add/support-ticket) |
-
+| **Supported by** | [NXLog](https://nxlog.co/getting-started-with-nxlog-support-service-desk) |
## Query samples
sentinel Nxlog Dns Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/nxlog-dns-logs.md
The NXLog DNS Logs data connector uses Event Tracing for Windows ([ETW](/windows
| | |
| **Log Analytics table(s)** | NXLog_DNS_Server_CL<br/> |
| **Data collection rules support** | Not currently supported |
-| **Supported by** | [NXLog](https://nxlog.co/support-tickets/add/support-ticket) |
+| **Supported by** | [NXLog](https://nxlog.co/getting-started-with-nxlog-support-service-desk) |
## Query samples
sentinel Nxlog Fim https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/nxlog-fim.md
The [NXLog FIM](https://docs.nxlog.co/refman/current/im/fim.html) module allows
| | |
| **Log Analytics table(s)** | NXLogFIM_CL<br/> |
| **Data collection rules support** | Not currently supported |
-| **Supported by** | [NXLog](https://nxlog.co/support-tickets/add/support-ticket) |
+| **Supported by** | [NXLog](https://nxlog.co/getting-started-with-nxlog-support-service-desk) |
## Query samples
sentinel Nxlog Linuxaudit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/nxlog-linuxaudit.md
The [NXLog LinuxAudit](https://docs.nxlog.co/refman/current/im/linuxaudit.html)
| | |
| **Log Analytics table(s)** | LinuxAudit_CL<br/> |
| **Data collection rules support** | Not currently supported |
-| **Supported by** | [NXLog](https://nxlog.co/support-tickets/add/support-ticket) |
+| **Supported by** | [NXLog](https://nxlog.co/getting-started-with-nxlog-support-service-desk) |
## Query samples
sentinel Microsoft Sentinel Defender Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/microsoft-sentinel-defender-portal.md
The following capabilities are only available in the Azure portal.
|||
|Tasks | [Use tasks to manage incidents in Microsoft Sentinel](incident-tasks.md) |
|Add entities to threat intelligence from incidents | [Add entity to threat indicators](add-entity-to-threat-intelligence.md) |
-| Automation | Some automation procedures are available only in the Azure portal. <br><br>Other automation procedures are the same in the Defender and Azure portals, but differ in the Azure portal between workspaces that are onboarded to the unified security operations platform and workspaces that aren't. <br><br>For more information, see [Security Orchestration, Automation, and Response (SOAR) in Microsoft Sentinel](https://aka.ms/unified-soc-automation-lims). |
+| Automation | Some automation procedures are available only in the Azure portal. <br><br>Other automation procedures are the same in the Defender and Azure portals, but differ in the Azure portal between workspaces that are onboarded to the unified security operations platform and workspaces that aren't. <br><br>For more information, see [Automation with the unified security operations platform](automation.md#automation-with-the-unified-security-operations-platform). |
## Quick reference
sentinel Deployment Attack Disrupt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deployment-attack-disrupt.md
Title: Automatic attack disruption for SAP | Microsoft Sentinel
description: Learn about deploying automatic attack disruption for SAP with the unified security operations platform. -+ Last updated 04/01/2024
-appliesto: Microsoft Sentinel in the Azure portal and the Microsoft Defender portal
+appliesto:
+ - Microsoft Sentinel in the Azure portal and the Microsoft Defender portal
-#customerIntent: As a security engineer, I want to deploy automatic attack disruption for SAP in the Microsoft Defender portal.
+#customerIntent: As a security engineer, I want to use automatic attack disruption for SAP in the Microsoft Defender portal.
# Automatic attack disruption for SAP (Preview)
Attack disruption for SAP is configured by updating your data connector agent ve
To use attack disruption for SAP, make sure that you configured the integration between Microsoft Sentinel and Microsoft Defender XDR. For more information, see [Connect Microsoft Sentinel to Microsoft Defender XDR](/microsoft-365/security/defender/microsoft-sentinel-onboard) and [Microsoft Sentinel in the Microsoft Defender portal (preview)](../microsoft-sentinel-defender-portal.md).
-## Required SAP data connector agent version and role
+## Required SAP data connector agent version and role assignments
Attack disruption for SAP requires that you have:

-- A Microsoft Sentinel SAP data connector agent, version 88020708 or higher.
+- A Microsoft Sentinel SAP data connector agent, version 90847355 or higher.
- The identity of your data connector agent VM must be assigned to the **Microsoft Sentinel Business Applications Agent Operator** Azure role.
+- The **/MSFTSEN/SENTINEL_RESPONDER** SAP role, applied to your SAP system and assigned to the SAP user account used by Microsoft Sentinel's SAP data connector agent.
**To use attack disruption for SAP**, deploy a new agent, or update your current agent to the latest version. For more information, see:
SAP_HeartBeat_CL
If the identity of your data connector agent VM isn't yet assigned to the **Microsoft Sentinel Business Applications Agent Operator** role as part of the deployment process, assign the role manually. For more information, see [Deploy and configure the container hosting the SAP data connector agent](deploy-data-connector-agent-container.md#role).
+## Apply and assign the /MSFTSEN/SENTINEL_RESPONDER SAP role to your SAP system
+
+Attack disruption is supported by the new **/MSFTSEN/SENTINEL_RESPONDER** SAP role, which you must apply to your SAP system and assign to the SAP user account used by Microsoft Sentinel's SAP data connector agent.
+
+1. Upload role definitions from the [/MSFTSEN/SENTINEL_RESPONDER](https://aka.ms/SAP_Sentinel_Responder_Role) file in GitHub.
+
+1. Assign the **/MSFTSEN/SENTINEL_RESPONDER** role to the SAP user account used by Microsoft Sentinel's SAP data connector agent. For more information, see [Deploy SAP Change Requests and configure authorization](preparing-sap.md).
+
+Alternatively, manually assign the following authorizations to the current role already assigned to the SAP user account used by Microsoft Sentinel's SAP data connector. These authorizations are included in the **/MSFTSEN/SENTINEL_RESPONDER** SAP role specifically for attack disruption response actions.
+
+| Authorization object | Field | Value |
+| -- | -- | -- |
+|S_RFC |RFC_TYPE |Function Module |
+|S_RFC |RFC_NAME |BAPI_USER_LOCK |
+|S_RFC |RFC_NAME |BAPI_USER_UNLOCK |
+|S_RFC |RFC_NAME |TH_DELETE_USER <br>In contrast to its name, this function doesn't delete users, but ends the active user session. |
+|S_USER_GRP |CLASS |* <br>We recommend replacing S_USER_GRP CLASS with the relevant classes in your organization that represent dialog users. |
+|S_USER_GRP |ACTVT |03 |
+|S_USER_GRP |ACTVT |05 |
+
+For more information, see [Required ABAP authorizations](preparing-sap.md#required-abap-authorizations).
+
## Related content

- [Automatic attack disruption in Microsoft Defender XDR](/microsoft-365/security/defender/automatic-attack-disruption)
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md
# Support matrix for Azure VM disaster recovery between Azure regions

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article summarizes support and prerequisites for disaster recovery of Azure VMs from one Azure region to another, using the [Azure Site Recovery](site-recovery-overview.md) service.
site-recovery Azure To Azure Troubleshoot Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-troubleshoot-errors.md
# Troubleshoot Azure-to-Azure VM replication errors

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article describes how to troubleshoot common errors in Azure Site Recovery during replication and recovery of [Azure virtual machines](azure-to-azure-tutorial-enable-replication.md) (VM) from one region to another. For more information about supported configurations, see the [support matrix for replicating Azure VMs](azure-to-azure-support-matrix.md).
site-recovery Azure Vm Disaster Recovery With Accelerated Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-vm-disaster-recovery-with-accelerated-networking.md
# Accelerated Networking with Azure virtual machine disaster recovery

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
Accelerated Networking enables single root I/O virtualization (SR-IOV) to a VM, greatly improving its networking performance. This high-performance path bypasses the host from the datapath, reducing latency, jitter, and CPU utilization, for use with the most demanding network workloads on supported VM types. The following picture shows communication between two VMs with and without accelerated networking:
site-recovery How To Enable Replication Proximity Placement Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/how-to-enable-replication-proximity-placement-groups.md
Last updated 08/01/2023
# Replicate virtual machines running in a proximity placement group to another region

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article describes how to replicate, fail over, and fail back Azure virtual machines (VMs) running in a proximity placement group to a secondary region.
site-recovery Site Recovery Failover To Azure Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-failover-to-azure-troubleshoot.md
# Troubleshoot errors when failing over VMware VM or physical machine to Azure

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
You may receive one of the following errors while doing failover of a virtual machine to Azure. To troubleshoot, use the described steps for each error condition.
site-recovery Site Recovery Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-whats-new-archive.md
Last updated 12/27/2023
# Archive for What's new in Site Recovery

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article contains information on older features and updates in the Azure Site Recovery service. The primary [What's new in Azure Site Recovery](./site-recovery-whats-new.md) article contains the latest updates.
site-recovery Site Recovery Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-whats-new.md
# What's new in Site Recovery

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
The [Azure Site Recovery](site-recovery-overview.md) service is updated and improved on an ongoing basis. To help you stay up-to-date, this article provides you with information about the latest releases, new features, and new content. This page is updated regularly.
site-recovery Vmware Azure Disaster Recovery Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-disaster-recovery-powershell.md
# Set up disaster recovery of VMware VMs to Azure with PowerShell

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
In this article, you see how to replicate and fail over VMware virtual machines to Azure using Azure PowerShell.
site-recovery Vmware Azure Install Linux Master Target https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-install-linux-master-target.md
Last updated 03/07/2024
# Install a Linux master target server for failback

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
After you fail over your virtual machines to Azure, you can fail back the virtual machines to the on-premises site. To fail back, you need to reprotect the virtual machine from Azure to the on-premises site. For this process, you need an on-premises master target server to receive the traffic.
site-recovery Vmware Azure Install Mobility Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-install-mobility-service.md
# Prepare source machine for push installation of mobility agent

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
When you set up disaster recovery for VMware VMs and physical servers using [Azure Site Recovery](site-recovery-overview.md), you install the [Site Recovery Mobility service](vmware-physical-mobility-service-overview.md) on each on-premises VMware VM and physical server. The Mobility service captures data writes on the machine, and forwards them to the Site Recovery process server.
site-recovery Vmware Azure Mobility Install Configuration Mgr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-mobility-install-configuration-mgr.md
Last updated 05/02/2022
# Automate Mobility Service installation

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article describes how to automate installation and updates for the Mobility Service agent in [Azure Site Recovery](site-recovery-overview.md).
site-recovery Vmware Physical Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-azure-support-matrix.md
# Support matrix for disaster recovery of VMware VMs and physical servers to Azure

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article summarizes supported components and settings for disaster recovery of VMware VMs and physical servers to Azure using [Azure Site Recovery](site-recovery-overview.md).
site-recovery Vmware Physical Manage Mobility Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-manage-mobility-service.md
Last updated 03/07/2024
# Manage the Mobility agent

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
You set up mobility agent on your server when you use Azure Site Recovery for disaster recovery of VMware VMs and physical servers to Azure. Mobility agent coordinates communications between your protected machine, configuration server/scale-out process server and manages data replication. This article summarizes common tasks for managing mobility agent after it's deployed.
site-recovery Vmware Physical Mobility Service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-mobility-service-overview.md
# About the Mobility service for VMware VMs and physical servers

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
When you set up disaster recovery for VMware virtual machines (VM) and physical servers using [Azure Site Recovery](site-recovery-overview.md), you install the Site Recovery Mobility service on each on-premises VMware VM and physical server. The Mobility service captures data writes on the machine and forwards them to the Site Recovery process server. The Mobility service is installed by the Mobility service agent software that you can deploy using the following methods:
spring-apps How To Deploy With Custom Container Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-deploy-with-custom-container-image.md
Last updated 4/28/2022
# Deploy an application with a custom container image

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
> [!NOTE]
> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
storage Blobfuse2 How To Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-how-to-deploy.md
# How to mount an Azure Blob Storage container on Linux with BlobFuse2 > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article shows you how to install and configure BlobFuse2, mount an Azure blob container, and access data in the container. The basic steps are:
storage Network File System Protocol Support How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/network-file-system-protocol-support-how-to.md
# Mount Blob Storage by using the Network File System (NFS) 3.0 protocol > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article provides guidance on how to mount a container in Azure Blob Storage from a Linux-based Azure virtual machine (VM) or a Linux system that runs on-premises by using the Network File System (NFS) 3.0 protocol. To learn more about NFS 3.0 protocol support in Blob Storage, see [Network File System (NFS) 3.0 protocol support for Azure Blob Storage](network-file-system-protocol-support.md).
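A minimal sketch of what that mount typically looks like, driven from Python via `subprocess`; the storage account, container, mount point, and option string here are illustrative assumptions rather than values taken from the article:

```python
import subprocess

# Illustrative values (assumptions) - substitute your own storage account,
# container, and local mount point.
account = "mystorageaccount"
container = "mycontainer"
mount_point = "/mnt/nfsdata"

# Mount the blob container over NFS 3.0. The option string follows the pattern
# commonly used for Azure Blob NFS mounts; verify it against the linked article.
subprocess.run(
    [
        "sudo", "mount",
        "-o", "sec=sys,vers=3,nolock,proto=tcp",
        f"{account}.blob.core.windows.net:/{account}/{container}",
        mount_point,
    ],
    check=True,  # raise CalledProcessError if the mount fails
)
```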
storage Storage How To Mount Container Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-how-to-mount-container-linux.md
# How to mount Azure Blob Storage as a file system with BlobFuse v1 > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
> [!IMPORTANT] > [BlobFuse2](blobfuse2-what-is.md) is the latest version of BlobFuse and has many significant improvements over the version discussed in this article, BlobFuse v1. To learn about the improvements made in BlobFuse2, see [the list of BlobFuse2 enhancements](blobfuse2-what-is.md#blobfuse2-enhancements-from-blobfuse-v1).
storage File Sync Choose Cloud Tiering Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-choose-cloud-tiering-policies.md
Azure File Sync is supported on NTFS volumes with Windows Server 2012 R2 and new
|16 TiB – 32 TiB | 8 KiB | |32 TiB – 64 TiB | 16 KiB |
-It's possible that upon creation of the volume, you manually formatted the volume with a different cluster size. If your volume stems from an older version of Windows, default cluster sizes might also be different. [This article provides more details on default cluster sizes.](https://support.microsoft.com/help/140365/default-cluster-size-for-ntfs-fat-and-exfat) Even if you choose a cluster size smaller than 4 KiB, an 8 KiB limit as the smallest file size that can be tiered still applies. (Even if technically 2x cluster size would equate to less than 8 KiB.)
+It's possible that upon creation of the volume, you manually formatted the volume with a different cluster size. If your volume stems from an older version of Windows, default cluster sizes might also be different. [This article provides more details on default cluster sizes.](https://www.disktuna.com/default-cluster-sizes-for-fat-exfat-and-ntfs/) Even if you choose a cluster size smaller than 4 KiB, an 8 KiB limit as the smallest file size that can be tiered still applies. (Even if technically 2x cluster size would equate to less than 8 KiB.)
The absolute minimum exists because of the way NTFS stores extremely small files (1 KiB to 4 KiB in size). Depending on other volume parameters, such small files might not be stored in a cluster on disk at all; it can be more efficient to store them directly in the volume's Master File Table, or "MFT record". The cloud tiering reparse point is always stored on disk and takes up exactly one cluster, so tiering such small files could yield no space savings, and in extreme cases could even use more space with cloud tiering enabled. To safeguard against that, the smallest file that cloud tiering will tier is 8 KiB on a 4 KiB or smaller cluster size.
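To make that rule concrete, here is a small sketch (plain Python, not taken from the article) that applies the policy as described: the smallest tierable file is twice the cluster size, but never less than 8 KiB.

```python
KIB = 1024

def minimum_tierable_size(cluster_size_bytes: int) -> int:
    """Smallest file size that cloud tiering will tier: 2x cluster size, floor 8 KiB."""
    return max(2 * cluster_size_bytes, 8 * KIB)

def can_tier(file_size_bytes: int, cluster_size_bytes: int = 4 * KIB) -> bool:
    return file_size_bytes >= minimum_tierable_size(cluster_size_bytes)

# On a 4 KiB (or smaller) cluster size, files under 8 KiB are never tiered.
print(can_tier(6 * KIB))                 # False
print(can_tier(12 * KIB))                # True
print(minimum_tierable_size(16 * KIB))   # 32768 bytes (32 KiB) on a 16 KiB cluster
```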
storage Files Remove Smb1 Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-remove-smb1-linux.md
# Remove SMB 1 on Linux > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
Many organizations and internet service providers (ISPs) block the port that SMB uses to communicate, port 445. This practice originates from security guidance about legacy and deprecated versions of the SMB protocol. Although SMB 3.x is an internet-safe protocol, older versions of SMB, especially SMB 1, aren't. SMB 1, also known as CIFS (Common Internet File System), is included with many Linux distributions. SMB 1 is an outdated, inefficient, and insecure protocol. The good news is that Azure Files doesn't support SMB 1. Also, starting with Linux kernel version 4.18, Linux makes it possible to disable SMB 1. We always [strongly recommend](https://aka.ms/stopusingsmb1) disabling SMB 1 on your Linux clients before using SMB file shares in production.
storage Storage How To Use Files Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-linux.md
# Mount SMB Azure file share on Linux > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
[Azure Files](storage-files-introduction.md) is Microsoft's easy-to-use cloud file system. Azure file shares can be mounted in Linux distributions using the [SMB kernel client](https://wiki.samba.org/index.php/LinuxCIFS). The recommended way to mount an Azure file share on Linux is to use SMB 3.1.1. By default, Azure Files requires encryption in transit, which is supported by SMB 3.0+. Azure Files also supports SMB 2.1, which doesn't support encryption in transit, but for security reasons you can't mount Azure file shares with SMB 2.1 from another Azure region or on-premises. Unless your application specifically requires SMB 2.1, use SMB 3.1.1.
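For readers who want to see the recommended SMB 3.1.1 mount in practice, here is a minimal sketch driven from Python; the share name, key handling, and the exact option string are assumptions for illustration and should be checked against the full article:

```python
import subprocess

# Illustrative values (assumptions) - use your own storage account, share, and key.
account = "mystorageaccount"
share = "myshare"
mount_point = "/mnt/azurefiles"
storage_key = "<storage-account-key>"

# Mount over SMB 3.1.1 so encryption in transit is used, as recommended above.
options = (
    f"vers=3.1.1,username={account},password={storage_key},"
    "serverino,nosharesock,actimeo=30"
)
subprocess.run(
    [
        "sudo", "mount", "-t", "cifs",
        f"//{account}.file.core.windows.net/{share}",
        mount_point,
        "-o", options,
    ],
    check=True,
)
```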
stream-analytics Stream Analytics Define Inputs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-define-inputs.md
If an Azure Stream Analytics job is started using *Now* at 13:00, and a blob is
To process the data as a stream using a timestamp in the event payload, you must use the [TIMESTAMP BY](/stream-analytics-query/stream-analytics-query-language-reference) keyword. A Stream Analytics job pulls data from Azure Blob storage or Azure Data Lake Storage Gen2 input every second if the blob file is available. If the blob file is unavailable, there's an exponential backoff with a maximum time delay of 90 seconds.
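The polling behavior described above (check every second, then back off exponentially up to a 90-second ceiling when the blob isn't available) can be pictured with a short sketch; the doubling factor and starting delay are assumptions, since the article only states the one-second poll and the 90-second maximum.

```python
import time

def poll_with_backoff(blob_available, base_delay=1.0, max_delay=90.0):
    """Poll until blob_available() returns True, doubling the wait up to max_delay."""
    delay = base_delay
    while not blob_available():
        time.sleep(delay)
        delay = min(delay * 2, max_delay)  # exponential backoff, capped at 90 s

# Toy usage: pretend the blob shows up on the fourth check.
checks = iter([False, False, False, True])
poll_with_backoff(lambda: next(checks), base_delay=0.01)
```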
-CSV-formatted inputs require a header row to define fields for the data set, and all header row fields must be unique.
- > [!NOTE] > Stream Analytics does not support adding content to an existing blob file. Stream Analytics will view each file only once, and any changes that occur in the file after the job has read the data are not processed. Best practice is to upload all the data for a blob file at once and then add additional newer events to a different, new blob file.
For more information, see [Stream data from Kafka into Azure Stream Analytics (P
[stream.analytics.introduction]: stream-analytics-introduction.md [stream.analytics.get.started]: stream-analytics-real-time-fraud-detection.md [stream.analytics.query.language.reference]: /stream-analytics-query/stream-analytics-query-language-reference
-[stream.analytics.rest.api.reference]: /rest/api/streamanalytics/
+[stream.analytics.rest.api.reference]: /rest/api/streamanalytics/
time-series-insights Time Series Insights Manage Resources Using Azure Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-manage-resources-using-azure-resource-manager-template.md
The following procedure describes how to use PowerShell to deploy an Azure Resou
1. Deploy the quickstart template through the Azure portal - The quickstart template's home page on GitHub also includes a **Deploy to Azure** button. Selecting it opens a Custom Deployment page in the Azure portal. From this page, you can enter or select values for each of the parameters from the [required parameters](#required-parameters) or [optional parameters](#optional-parameters) tables. After you fill out the settings, select **Purchase** to start the template deployment. The button is a link to the portal's custom-deployment page with the template URI encoded into it, as shown in the sketch after the button.
- </br>
- </br>
- <a href="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.timeseriesinsights%2Ftimeseriesinsights-environment-with-eventhub%2Fazuredeploy.json" target="_blank">
- <img src="https://azuredeploy.net/deploybutton.png" alt="The Deploy to Azure button."/>
- </a>
+
+[![Deploy to Azure Button](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.timeseriesinsights%2Ftimeseriesinsights-environment-with-eventhub%2Fazuredeploy.json)
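The **Deploy to Azure** link above is simply the portal's custom-deployment page with the template URI percent-encoded into it. A minimal sketch of constructing such a link (the template URI is the one from the button; the rest is standard library code):

```python
from urllib.parse import quote

template_uri = (
    "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/"
    "quickstarts/microsoft.timeseriesinsights/"
    "timeseriesinsights-environment-with-eventhub/azuredeploy.json"
)

# Percent-encode the whole URI (including ':' and '/') and append it to the
# portal's custom template deployment endpoint.
deploy_link = (
    "https://portal.azure.com/#create/Microsoft.Template/uri/"
    + quote(template_uri, safe="")
)
print(deploy_link)
```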
## Next steps -- For information on programmatically managing Azure Time Series Insights resources using REST APIs, read [Azure Time Series Insights Management](/rest/api/time-series-insights-management/).
+- For information on programmatically managing Azure Time Series Insights resources using REST APIs, read [Azure Time Series Insights Management](/rest/api/time-series-insights-management/).
update-manager Guidance Patching Sql Server Azure Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/guidance-patching-sql-server-azure-vm.md
Title: Guidance on patching for SQL Server on Azure VMs (preview) using Azure Update Manager.
-description: An overview on patching guidance for SQL Server on Azure VMs (preview) using Azure Update Manager
+ Title: Guidance on patching for SQL Server on Azure VMs using Azure Update Manager.
+description: An overview on patching guidance for SQL Server on Azure VMs using Azure Update Manager
Previously updated : 09/27/2023 Last updated : 04/03/2024
-# Guidance on patching for SQL Server on Azure VMs (preview) using Azure Update Manager
+# Guidance on patching for SQL Server on Azure VMs using Azure Update Manager
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers.
update-manager Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/support-matrix.md
# Support matrix for Azure Update Manager > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly.
This article details the Windows and Linux operating systems supported and system requirements for machines or servers managed by Azure Update Manager. The article includes the supported regions and specific versions of the Windows Server and Linux operating systems running on Azure virtual machines (VMs) or machines managed by Azure Arc-enabled servers.
update-manager Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/whats-new.md
Previously updated : 02/06/2024 Last updated : 04/03/2024 # What's new in Azure Update Manager
Update management center is now rebranded as Azure Update Manager.
Azure Update Manager is now available in Canada East and Sweden Central regions for Arc-enabled servers. [Learn more](support-matrix.md#supported-regions).
-### SQL Server patching (preview)
+### SQL Server patching
-SQL Server patching (preview) allows you to patch SQL Servers. You can now manage and govern updates for all your SQL Servers using the patching capabilities provided by Azure Update Manager. [Learn more](guidance-patching-sql-server-azure-vm.md).
+SQL Server patching allows you to patch SQL Servers. You can now manage and govern updates for all your SQL Servers using the patching capabilities provided by Azure Update Manager. [Learn more](guidance-patching-sql-server-azure-vm.md).
## July 2023
virtual-machine-scale-sets Flexible Virtual Machine Scale Sets Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/flexible-virtual-machine-scale-sets-portal.md
# Create virtual machines in a scale set using Azure portal > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article steps through using Azure portal to create a Virtual Machine Scale Set. ## Log in to Azure
virtual-machine-scale-sets Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/quick-create-portal.md
# Quickstart: Create a Virtual Machine Scale Set in the Azure portal > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Uniform scale sets
virtual-machine-scale-sets Spot Priority Mix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/spot-priority-mix.md
# Spot Priority Mix for high availability and cost savings > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Flexible scale sets
virtual-machine-scale-sets Tutorial Autoscale Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-autoscale-cli.md
# Tutorial: Automatically scale a Virtual Machine Scale Set with the Azure CLI > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
When you create a scale set, you define the number of VM instances that you wish to run. As your application demand changes, you can automatically increase or decrease the number of VM instances. The ability to autoscale lets you keep up with customer demand or respond to application performance changes throughout the lifecycle of your app. In this tutorial you learn how to:
virtual-machine-scale-sets Virtual Machine Scale Sets Use Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-use-availability-zones.md
You can modify a scale to expand the set of zones over which to spread VM instan
> Updating Virtual Machine Scale Sets to add availability zones is currently in preview. Previews are made available to you on the condition that you agree to the [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Some aspects of this feature may change prior to general availability (GA). > [!IMPORTANT]
-> This preview is intended for stateless workloads on Virtual Machine Scale Sets. Scale sets with stateful workloads or used with **Service Fabric or Azure Kubernetes Services are not supported for zonal expansion**.
+> This feature is intended for stateless workloads on Virtual Machine Scale Sets. Scale sets with stateful workloads or used with **Service Fabric or Azure Kubernetes Services are not supported for zonal expansion**.
This feature can be used with API version 2023-03-01 or greater.
Expanding to a zonal scale set is done in 3 steps:
#### Prepare for zonal expansion > [!WARNING]
-> This preview allows you to add zones to the scale set. You can't go back to a regional scale set or remove zones once they have been added.
+> This feature allows you to add zones to the scale set. You can't go back to a regional scale set or remove zones once they have been added.
In order to prepare for zonal expansion: * [Check that you have enough quota](../virtual-machines/quotas.md) for the VM size in the selected region to handle more instances.
With [Rolling upgrades + MaxSurge](virtual-machine-scale-sets-upgrade-policy.md)
> [!IMPORTANT] > Rolling upgrades with MaxSurge is currently under Public Preview. It is only available for VMSS Uniform Orchestration Mode.
-### Preview known issues and limitations
+### Known issues and limitations
-* The preview is targeted to stateless workloads on Virtual Machine Scale Sets.
+* The feature is targeted to stateless workloads on Virtual Machine Scale Sets.
* Scale sets running Service Fabric or Azure Kubernetes Service are not supported.
virtual-machines Automatic Vm Guest Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/automatic-vm-guest-patching.md
# Automatic VM guest patching for Azure VMs > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets
virtual-machines Compiling Scaling Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/compiling-scaling-applications.md
# Scaling HPC applications > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
virtual-machines Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/configure.md
# Configure and optimize VMs > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
virtual-machines Custom Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/custom-data.md
# Custom data and cloud-init on Azure Virtual Machines > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets
virtual-machines Disks Shared https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-shared.md
Azure shared disks are supported on:
- [Ubuntu 18.04 and above](https://discourse.ubuntu.com/t/ubuntu-high-availability-corosync-pacemaker-shared-disk-environments/14874) - Red Hat Enterprise Linux (RHEL) ([support policy](https://access.redhat.com/articles/3444601)) - [RHEL 7.9](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/deploying_red_hat_enterprise_linux_7_on_public_cloud_platforms/configuring-rhel-high-availability-on-azure_cloud-content)
- - [RHEL 8.3 and above](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/deploying_red_hat_enterprise_linux_8_on_public_cloud_platforms/configuring-rhel-high-availability-on-azure_cloud-content)
+ - [RHEL 8.3 and above](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/deploying_rhel_8_on_microsoft_azure/configuring-rhel-high-availability-on-azure_cloud-content-azure)
- [Oracle Enterprise Linux](https://docs.oracle.com/en/operating-systems/oracle-linux/8/availability/) Linux clusters can use cluster managers such as [Pacemaker](https://wiki.clusterlabs.org/wiki/Pacemaker). Pacemaker builds on [Corosync](http://corosync.github.io/corosync/), enabling cluster communications for applications deployed in highly available environments. Some common clustered filesystems include [ocfs2](https://oss.oracle.com/projects/ocfs2/) and [gfs2](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/global_file_system_2/ch-overview-gfs2). You can use SCSI Persistent Reservation (SCSI PR) and/or STONITH Block Device (SBD) based clustering models for arbitrating access to the disk. When using SCSI PR, you can manipulate reservations and registrations using utilities such as [fence_scsi](https://manpages.ubuntu.com/manpages/kinetic/man8/fence_scsi.8.html) and [sg_persist](https://linux.die.net/man/8/sg_persist).
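As a small illustration of the SCSI PR tooling mentioned above, the sketch below shells out to `sg_persist` to list the reservation keys registered on a shared disk; the device path is a placeholder, and the flags should be confirmed against the `sg_persist` man page for your distribution.

```python
import subprocess

device = "/dev/sdc"  # placeholder: the shared data disk as seen by this node

# Read the reservation keys currently registered on the shared disk (SCSI PR IN).
result = subprocess.run(
    ["sudo", "sg_persist", "--in", "--read-keys", f"--device={device}"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)
```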
virtual-machines Enable Nvme Interface https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/enable-nvme-interface.md
# Enabling NVMe and SCSI Interface on Virtual Machine > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
NVMe stands for nonvolatile memory express, which is a communication protocol that facilitates faster and more efficient data transfer between servers and storage systems. With NVMe, data can be transferred at the highest throughput and with the fastest response time. Azure now supports the NVMe interface on the Ebsv5 and Ebdsv5 family, offering the highest IOPS and throughput performance for remote disk storage among all the GP v5 VM series.
virtual-machines Agent Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/agent-linux.md
Last updated 03/28/2023
# Azure Linux VM Agent overview > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
The Microsoft Azure Linux VM Agent (waagent) manages Linux and FreeBSD provisioning, along with virtual machine (VM) interaction with the Azure fabric controller. In addition to the provisioning functionality that the Linux agent provides, Azure offers the option of using cloud-init for some Linux operating systems.
virtual-machines Custom Script Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/custom-script-linux.md
Last updated 03/31/2023
# Use the Azure Custom Script Extension Version 2 with Linux virtual machines > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
The Custom Script Extension Version 2 downloads and runs scripts on Azure virtual machines (VMs). Use this extension for post-deployment configuration, software installation, or any other configuration or management task. You can download scripts from Azure Storage or another accessible internet location, or you can provide them to the extension runtime.
virtual-machines Diagnostics Linux V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/diagnostics-linux-v3.md
ms.devlang: azurecli
# Use Linux diagnostic extension 3.0 to monitor metrics and logs > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This document describes version 3.0 and newer of the Linux diagnostic extension (LAD).
virtual-machines Diagnostics Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/diagnostics-linux.md
ms.devlang: azurecli
# Use the Linux diagnostic extension 4.0 to monitor metrics and logs > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article describes the latest versions of the Linux diagnostic extension (LAD).
virtual-machines Enable Infiniband https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/enable-infiniband.md
# Enable InfiniBand > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
virtual-machines Hpc Compute Infiniband Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/hpc-compute-infiniband-linux.md
# InfiniBand Driver Extension for Linux > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This extension installs InfiniBand OFED drivers on InfiniBand and SR-IOV-enabled ('r' sizes) [HB-series](../sizes-hpc.md) and [N-series](../sizes-gpu.md) VMs running Linux. Depending on the VM family, the extension installs the appropriate drivers for the Connect-X NIC. It does not install the InfiniBand ND drivers on the non-SR-IOV enabled [HB-series](../sizes-hpc.md) and [N-series](../sizes-gpu.md) VMs.
virtual-machines Hpccompute Gpu Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/hpccompute-gpu-linux.md
# NVIDIA GPU Driver Extension for Linux > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This extension installs NVIDIA GPU drivers on Linux N-series virtual machines (VMs). Depending on the VM family, the extension installs CUDA or GRID drivers. When you install NVIDIA drivers by using this extension, you're accepting and agreeing to the terms of the [NVIDIA End-User License Agreement](https://www.nvidia.com/en-us/data-center/products/nvidia-ai-enterprise/eula/). During the installation process, the VM might reboot to complete the driver setup.
virtual-machines Network Watcher Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/network-watcher-linux.md
# Manage Network Watcher Agent virtual machine extension for Linux > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](../workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](../workloads/centos/centos-end-of-life.md).
The Network Watcher Agent virtual machine extension is a requirement for some of Azure Network Watcher features that capture network traffic to diagnose and monitor Azure virtual machines (VMs). For more information, see [What is Azure Network Watcher?](../../network-watcher/network-watcher-overview.md)
virtual-machines Stackify Retrace Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/stackify-retrace-linux.md
ms.devlang: azurecli
# Stackify Retrace Linux Agent Extension > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
## Overview
virtual-machines Tenable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/tenable.md
Last updated 07/18/2023
# Tenable One-Click Nessus Agent > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
Tenable now supports a One-Click deployment of Nessus Agents via Microsoft's Azure portal. This solution provides an easy way to install the latest version of Nessus Agent on Azure virtual machines (VM) (whether Linux or Windows), either by clicking an icon within the Azure portal or by writing a few lines of PowerShell script.
virtual-machines Update Linux Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/update-linux-agent.md
Last updated 02/03/2023
# How to update the Azure Linux Agent on a VM > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
To update your [Azure Linux Agent](https://github.com/Azure/WALinuxAgent) on a Linux VM in Azure, you must already have:
virtual-machines Vmaccess Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/vmaccess-linux.md
# VMAccess Extension for Linux > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
The VMAccess Extension is used to manage administrative users, configure SSH, and check or repair disks on Azure Linux virtual machines. The extension integrates with Azure Resource Manager templates. It can also be invoked using Azure CLI, Azure PowerShell, the Azure portal, and the Azure Virtual Machines REST API.
virtual-machines Fsv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/fsv2-series.md
# Fsv2-series > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
virtual-machines Generalize https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/generalize.md
# Remove machine specific information by deprovisioning or generalizing a VM before creating an image > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
Generalizing or deprovisioning a VM is not necessary for creating an image in an [Azure Compute Gallery](shared-image-galleries.md#generalized-and-specialized-images) unless you specifically want to create an image that has no machine-specific information, like user accounts. Generalizing is still required when creating a managed image outside of a gallery.
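When a generalized Linux image is what you need, the deprovisioning step is typically run inside the VM with the Azure Linux Agent just before capture. A minimal sketch, assuming `waagent` is installed and that you are about to capture this VM (the command is destructive, so don't run it on a machine you want to keep using):

```python
import subprocess

# Remove machine-specific data (SSH host keys, DHCP leases, cached data) and the
# most recently provisioned user account so the VM can be captured as a
# generalized image. The -force flag skips the interactive confirmation prompt.
subprocess.run(["sudo", "waagent", "-deprovision+user", "-force"], check=True)
```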
virtual-machines Hb Hc Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hb-hc-known-issues.md
# Known issues with HB-series and N-series VMs > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
virtual-machines Hb Series Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hb-series-overview.md
# HB-series virtual machines overview > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
virtual-machines Hbv2 Series Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hbv2-series-overview.md
# HBv2 series virtual machine overview > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
virtual-machines Hbv3 Series Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hbv3-series-overview.md
# HBv3-series virtual machine overview > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
virtual-machines Hbv4 Series Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hbv4-series-overview.md
# HBv4-series virtual machine overview > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
virtual-machines Hc Series Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hc-series-overview.md
# HC-series virtual machine overview > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
virtual-machines Hx Series Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hx-series-overview.md
# HX-series virtual machine overview > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
virtual-machines Image Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/image-version.md
# Create an image definition and an image version > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
An [Azure Compute Gallery](shared-image-galleries.md) (formerly known as Shared Image Gallery) simplifies custom image sharing across your organization. Custom images are like marketplace images, but you create them yourself. Images can be created from a VM, VHD, snapshot, managed image, or another image version.
virtual-machines Azure Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/azure-dns.md
# DNS Name Resolution options for Linux virtual machines in Azure > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
virtual-machines Cli Ps Findimage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/cli-ps-findimage.md
# Find Azure Marketplace image information using the Azure CLI > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets
virtual-machines Cloud Init Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/cloud-init-troubleshooting.md
# Troubleshooting VM provisioning with cloud-init > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
virtual-machines Cloudinit Configure Swapfile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/cloudinit-configure-swapfile.md
# Use cloud-init to configure a swap partition on a Linux VM > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
virtual-machines Cloudinit Update Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/cloudinit-update-vm.md
# Use cloud-init to update and install packages in a Linux VM in Azure > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
virtual-machines Create Upload Centos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/create-upload-centos.md
# Prepare a CentOS-based virtual machine for Azure > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
virtual-machines Create Upload Generic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/create-upload-generic.md
# Prepare Linux for imaging in Azure > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
virtual-machines Disk Encryption Isolated Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-isolated-network.md
Last updated 02/20/2024
# Azure Disk Encryption on an isolated network > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets.
virtual-machines Disk Encryption Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-overview.md
# Azure Disk Encryption for Linux VMs > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
virtual-machines Disk Encryption Sample Scripts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-sample-scripts.md
# Azure Disk Encryption sample scripts for Linux VMs > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
virtual-machines Endorsed Distros https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/endorsed-distros.md
# Endorsed Linux distributions on Azure > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
virtual-machines Expand Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/expand-disks.md
# Expand virtual hard disks on a Linux VM > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
virtual-machines Imaging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/imaging.md
# Bringing and creating Linux images in Azure > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
virtual-machines N Series Driver Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/n-series-driver-setup.md
# Install NVIDIA GPU drivers on N-series VMs running Linux > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs
virtual-machines Run Command Managed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/run-command-managed.md
# Run scripts in your Linux VM by using managed Run Commands > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
virtual-machines Run Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/run-command.md
ms.devlang: azurecli
# Run scripts in your Linux VM by using action Run Commands > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
virtual-machines Storage Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/storage-performance.md
# Optimize performance on Lsv3, Lasv3, and Lsv2-series Linux VMs > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Uniform scale sets
virtual-machines Time Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/time-sync.md
# Time sync for Linux VMs in Azure > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
$ sudo systemctl disable systemd-timesyncd
````
In most cases, systemd-timesyncd will try to start during boot, but once chrony starts up it will take over and become the default time sync source.
-For more information about Ubuntu and NTP, see [Time Synchronization](https://ubuntu.com/server/docs/network-ntp).
+For more information about Ubuntu and NTP, see [Time Synchronization](https://ubuntu.com/server/docs/about-time-synchronisation).
For more information about Red Hat and NTP, see [Configure NTP](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/system_administrators_guide/ch-configuring_ntp_using_ntpd#s1-Configure_NTP).
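As a quick companion to the time sync guidance above, here's a minimal sketch (assuming chrony and systemd are present on the VM) of how you might confirm which daemon is actively serving time; note the chrony service is named `chrony` on Ubuntu and `chronyd` on Red Hat-based distributions:

```bash
# Sketch only: confirm which time sync daemon is active on the VM.
systemctl is-active chronyd systemd-timesyncd   # use "chrony" instead of "chronyd" on Ubuntu

# Stop and disable systemd-timesyncd so it doesn't compete with chrony after boot.
sudo systemctl disable --now systemd-timesyncd

# Verify chrony is synchronized against a time source.
chronyc tracking
```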
virtual-machines Tutorial Manage Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/tutorial-manage-vm.md
# Tutorial: Create and Manage Linux VMs with the Azure CLI > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
virtual-machines Using Cloud Init https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/using-cloud-init.md
# cloud-init support for virtual machines in Azure > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
virtual-machines M Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/m-series.md
# M-series > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
virtual-machines Maintenance And Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/maintenance-and-updates.md
Title: Maintenance and updates
description: Overview of maintenance and updates for virtual machines running in Azure. Previously updated : 04/13/2023 Last updated : 04/01/2024 #pmcontact:shants # Maintenance for virtual machines in Azure
For greater control on all maintenance activities including zero-impact and rebo
### Live migration
-Live migration is an operation that doesn't require a reboot and that preserves memory for the VM. It causes a pause or freeze, typically lasting no more than 5 seconds. Except for G, L, M, N, and H series, all infrastructure as a service (IaaS) VMs, are eligible for live migration. Eligible VMs represent more than 90 percent of the IaaS VMs that are deployed to the Azure fleet.
+Live migration is an operation that doesn't require a reboot and that preserves memory for the VM. It causes a pause or freeze, typically lasting no more than 5 seconds. Except for the G, L, N, and H series, all infrastructure as a service (IaaS) VMs are eligible for live migration. Live migration is available on the majority of M-series SKUs. Eligible VMs represent more than 90 percent of the IaaS VMs that are deployed to the Azure fleet.
> [!NOTE] > You won't receive a notification in the Azure portal for live migration operations that don't require a reboot. To see a list of live migrations that don't require a reboot, [query for scheduled events](./windows/scheduled-events.md#query-for-events).
virtual-machines Nc A100 V4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nc-a100-v4-series.md
Last updated 09/19/2023
# NC A100 v4-series > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
virtual-machines Ndm A100 V4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ndm-a100-v4-series.md
Last updated 03/13/2023
# NDm A100 v4-series > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
virtual-machines Np Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/np-series.md
# NP-series > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
virtual-machines Setup Mpi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/setup-mpi.md
# Set up Message Passing Interface for HPC > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
virtual-machines Sizes Hpc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes-hpc.md
# High performance computing VM sizes > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
virtual-machines Trusted Launch Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/trusted-launch-faq.md
# Trusted Launch FAQ > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
Frequently asked questions about trusted launch. Feature use cases, support for other Azure features, and fixes for common errors.
virtual-machines Cli Ps Findimage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/cli-ps-findimage.md
# Find and use Azure Marketplace VM images with Azure PowerShell > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
virtual-machines Ubuntu Pro In Place Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/canonical/ubuntu-pro-in-place-upgrade.md
Execute these commands inside the VM:
```bash
sudo apt install ubuntu-advantage-tools
sudo pro auto-attach
```
+> [!IMPORTANT]
+> The change of the "licenseType" property may take some time to propagate through the system. If the auto-attach process fails, wait a few minutes and try again. If the auto-attach process continues to fail, open a support ticket with Microsoft.
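As a hedged illustration of the retry guidance in the note above (assuming the `ubuntu-advantage-tools` package installed earlier), you might check the attach state from inside the VM before retrying:

```bash
# Sketch only: check whether the VM is already attached to Ubuntu Pro.
sudo pro status

# If it's still unattached, wait for the licenseType change to propagate, then retry.
sleep 300
sudo pro auto-attach
```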
If the `pro --version` is lower than 28, execute this command: ```bash
virtual-machines Install Openframe Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/mainframe-rehosting/tmaxsoft/install-openframe-azure.md
# Install TmaxSoft OpenFrame on Azure > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
Learn how to set up an OpenFrame environment on Azure suitable for development, demos, testing, or production workloads. This tutorial walks you through each step.
virtual-machines Oracle Database Backup Strategies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-database-backup-strategies.md
# Backup strategies for Oracle Database on an Azure Linux VM > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
**Applies to:** :heavy_check_mark: Linux VMs
virtual-network Accelerated Networking Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/accelerated-networking-overview.md
# Accelerated Networking overview > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article describes the benefits, constraints, and supported configurations of Accelerated Networking. Accelerated Networking enables [single root I/O virtualization (SR-IOV)](/windows-hardware/drivers/network/overview-of-single-root-i-o-virtualization--sr-iov-) on supported virtual machine (VM) types, greatly improving networking performance. This high-performance data path bypasses the host, which reduces latency, jitter, and CPU utilization for the most demanding network workloads.
virtual-network Create Vm Accelerated Networking Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/create-vm-accelerated-networking-cli.md
# Use Azure CLI to create a Windows or Linux VM with Accelerated Networking > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article describes how to create a Linux or Windows virtual machine (VM) with Accelerated Networking (AccelNet) enabled by using the Azure CLI command-line interface. The article also discusses how to enable and manage Accelerated Networking on existing VMs.
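For orientation, a minimal sketch of the CLI flow that the article walks through; the resource names, image alias, and VM size below are placeholder assumptions, and only sizes that support Accelerated Networking will work:

```bash
# Sketch only: create a NIC with Accelerated Networking enabled, then attach it to a new VM.
az network nic create \
  --resource-group myResourceGroup \
  --name myAccelNic \
  --vnet-name myVnet \
  --subnet mySubnet \
  --accelerated-networking true

az vm create \
  --resource-group myResourceGroup \
  --name myVM \
  --nics myAccelNic \
  --image Ubuntu2204 \
  --size Standard_D4s_v5
```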
virtual-network Manage Route Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/manage-route-table.md
You can determine the next hop type between a virtual machine and the IP address
1. In the **Network Watcher | Next hop** page:
- :::image type="content" source="./media/manage-route-table/add-route.png" alt-text="Screenshot of add a route page for a route table.":::
+ :::image type="content" source="./media/manage-route-table/next-hop.png" alt-text="Screenshot of next hop in Network Watcher.":::
| Setting | Value | |--|--|
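If you prefer the CLI to the portal page shown above, here's a hedged sketch of the same next hop lookup; the resource names and IP addresses are placeholders:

```bash
# Sketch only: query the next hop type for traffic from a VM to a destination IP.
az network watcher show-next-hop \
  --resource-group myResourceGroup \
  --vm myVM \
  --source-ip 10.0.0.4 \
  --dest-ip 10.2.0.4
```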
virtual-network Setup Dpdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/setup-dpdk.md
# Set up DPDK in a Linux virtual machine > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
Data Plane Development Kit (DPDK) on Azure offers a faster user-space packet processing framework for performance-intensive applications. This framework bypasses the virtual machine's kernel network stack.
virtual-network Virtual Network Bandwidth Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-bandwidth-testing.md
# Test VM network throughput by using NTTTCP > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article describes how to use the free NTTTCP tool from Microsoft to test network bandwidth and throughput performance on Azure Windows or Linux virtual machines (VMs). A tool like NTTTCP targets the network for testing and minimizes the use of other resources that could affect performance.
virtual-network Virtual Network For Azure Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-for-azure-services.md
Previously updated : 05/03/2023 Last updated : 04/03/2024
Deploying services within a virtual network provides the following capabilities:
|Category|Service| Dedicated<sup>1</sup> Subnet |-|-|-| | Compute | Virtual machines: [Linux](/previous-versions/azure/virtual-machines/linux/infrastructure-example?toc=%2fazure%2fvirtual-network%2ftoc.json) or [Windows](/previous-versions/azure/virtual-machines/windows/infrastructure-example?toc=%2fazure%2fvirtual-network%2ftoc.json) <br/>[Virtual machine scale sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-mvss-existing-vnet.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>[Cloud Service](/previous-versions/azure/reference/jj156091(v=azure.100)): Virtual network (classic) only <br/> [Azure Batch](../batch/nodes-and-pools.md?toc=%2fazure%2fvirtual-network%2ftoc.json#virtual-network-vnet-and-firewall-configuration) <br/> [Azure Baremetal Infrastructure](../baremetal-infrastructure/concepts-baremetal-infrastructure-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json)| No <br/> No <br/> No <br/> No<sup>2</sup> </br> No |
-| Network | [Application Gateway - WAF](../application-gateway/application-gateway-ilb-arm.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>[Azure Bastion](../bastion/bastion-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>[Azure Firewall](../firewall/overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) <br/>[Azure Route Server](../route-server/overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>[ExpressRoute Gateway](../expressroute/expressroute-about-virtual-network-gateways.md)<br/>[Network Virtual Appliances](/windows-server/networking/sdn/manage/use-network-virtual-appliances-on-a-vn)<br/>[VPN Gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md?toc=%2fazure%2fvirtual-network%2ftoc.json) <br/>[Azure DNS Private Resolver](../dns/dns-private-resolver-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json)| Yes <br/> Yes <br/> Yes <br/> Yes <br/> Yes <br/> No <br/> Yes </br> No |
+| Network | [Application Gateway - WAF](../application-gateway/application-gateway-ilb-arm.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>[Azure Bastion](../bastion/bastion-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>[Azure Firewall](../firewall/overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) <br/>[Azure Route Server](../route-server/overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>[ExpressRoute Gateway](../expressroute/expressroute-about-virtual-network-gateways.md)<br/>[Network Virtual Appliances](/windows-server/networking/sdn/manage/use-network-virtual-appliances-on-a-vn)<br/>[VPN Gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md?toc=%2fazure%2fvirtual-network%2ftoc.json) <br/>[Azure DNS Private Resolver](../dns/dns-private-resolver-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) </br> [Virtual Network Data Gateway for Fabric and Power BI](/data-integration/vnet/overview) | Yes <br/> Yes <br/> Yes <br/> Yes <br/> Yes <br/> No <br/> Yes </br> No </br> Yes |
|Data|[RedisCache](../azure-cache-for-redis/cache-how-to-premium-vnet.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>[Azure SQL Managed Instance](/azure/azure-sql/managed-instance/connectivity-architecture-overview?toc=%2fazure%2fvirtual-network%2ftoc.json) </br> [Azure Database for MySQL - Flexible Server](../mysql/flexible-server/concepts-networking-vnet.md) </br> [Azure Database for PostgreSQL - Flexible Server](../postgresql/flexible-server/concepts-networking.md#private-access-vnet-integration)| Yes <br/> Yes <br/> Yes </br> Yes | |Analytics | [Azure HDInsight](../hdinsight/hdinsight-plan-virtual-network-deployment.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>[Azure Databricks](/azure/databricks/scenarios/what-is-azure-databricks?toc=%2fazure%2fvirtual-network%2ftoc.json) |No<sup>2</sup> <br/> No<sup>2</sup> <br/> | Identity | [Microsoft Entra Domain Services](../active-directory-domain-services/tutorial-create-instance.md?toc=%2fazure%2fvirtual-network%2ftoc.json) |No <br/>
virtual-network Virtual Network Optimize Network Bandwidth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-optimize-network-bandwidth.md
# Optimize network throughput for Azure virtual machines > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
Azure Virtual Machines (VMs) have default network settings that can be further optimized for network throughput. This article describes how to optimize network throughput for Microsoft Azure Windows and Linux VMs, including major distributions such as Ubuntu, CentOS, and Red Hat.
virtual-network Virtual Network Test Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-test-latency.md
# Test network latency between Azure VMs > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article describes how to test network latency between Azure virtual machines (VMs) by using the publicly available tools [Latte](https://github.com/microsoft/latte) for Windows or [SockPerf](https://github.com/mellanox/sockperf) for Linux.
virtual-network Virtual Networks Name Resolution For Vms And Role Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md
# Name resolution for resources in Azure virtual networks > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
Azure can be used to host IaaS, PaaS, and hybrid solutions. In order to facilitate communication between the virtual machines (VMs) and other resources deployed in a virtual network, it may be necessary to allow them to communicate with each other. The use of easily remembered and unchanging names simplifies the communication process, rather than relying on IP addresses.
virtual-wan Hub Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/hub-settings.md
When you deploy a new virtual hub, you can specify additional routing infrastruc
When increasing the virtual hub capacity, the virtual hub router will continue to support traffic at its current capacity until the scale out is complete. It may take up to 25 minutes for the virtual hub router to scale out to additional routing infrastructure units. It's also important to note the following: currently, regardless of the number of routing infrastructure units deployed, traffic may experience performance degradation if more than 1.5 Gbps is sent in a single TCP flow.
+> [!NOTE]
+> Regardless of the virtual hub's capacity, the hub can only accept a maximum of 10,000 routes from its connected resources (virtual networks, branches, other virtual hubs, etc.).
+>
+ ### Configure virtual hub capacity Capacity is configured with the **Virtual hub capacity** setting on the **Basics** tab when you create your virtual hub.
virtual-wan Monitor Virtual Wan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/monitor-virtual-wan-reference.md
$MetricInformation.Data
* **Start Time and End Time** - This time is based on UTC. Ensure that you're entering UTC values when inputting these parameters. If these parameters aren't used, the past one hour's worth of data is shown by default.
-* **Sum Aggregation Type** - This aggregation type shows you the total number of bytes that traversed the virtual hub router during a selected time period. The **Max** and **Min** aggregation types aren't meaningful.
+* **Sum Aggregation Type** - The **sum** aggregation type shows you the total number of bytes that traversed the virtual hub router during a selected time period. For example, if you set the time granularity to 5 minutes, each data point corresponds to the number of bytes sent in that 5-minute interval. To convert this to Gbps, divide this number by 37500000000. Based on the virtual hub's [capacity](hub-settings.md#capacity), the hub router can support between 3 Gbps and 50 Gbps. The **Max** and **Min** aggregation types aren't meaningful at this time.
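To make that divisor concrete: a 5-minute granularity is 300 seconds, so average Gbps = (bytes × 8) / (300 × 10⁹), which is the same as dividing the byte count by 37,500,000,000. A quick sketch with an assumed data point:

```bash
# Sketch only: convert one 5-minute "Sum of bytes" data point to average Gbps.
BYTES=112500000000   # assumed example value, not a real measurement
echo "scale=2; $BYTES * 8 / (300 * 10^9)" | bc   # bits per second / 1e9 -> 3.00 Gbps
echo "scale=2; $BYTES / 37500000000" | bc        # same result using the divisor from the text
```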
### <a name="s2s-metrics"></a>Site-to-site VPN gateway metrics
virtual-wan Routing Deep Dive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/routing-deep-dive.md
Even if hub 1 knows the ExpressRoute prefix from circuit 2 (`10.5.2.0/24`) and h
> [!IMPORTANT] > The previous diagram shows two secured virtual hubs, this topology is supported with Routing Intent. For more information see [How to configure Virtual WAN Hub routing intent and routing policies][virtual-wan-intent].
-As explained in [Virtual hub routing preference (Preview)][virtual-wan-hrp], Virtual WAN favors routes coming from ExpressRoute per default. Since routes are advertised from hub 1 to the ExpressRoute circuit 1, from the ExpressRoute circuit 1 to the circuit 2, and from the ExpressRoute circuit 2 to hub 2 (and vice versa), virtual hubs prefer this path over the more direct inter hub link now. The effective routes in hub 1 show this:
+As explained in [Virtual hub routing preference][virtual-wan-hrp], Virtual WAN favors routes coming from ExpressRoute by default. Since routes are advertised from hub 1 to ExpressRoute circuit 1, from circuit 1 to circuit 2, and from circuit 2 to hub 2 (and vice versa), the virtual hubs now prefer this path over the more direct inter-hub link. The effective routes in hub 1 show this:
:::image type="content" source="./media/routing-deep-dive/virtual-wan-routing-deep-dive-scenario-2-er-hub-1.png" alt-text="Screenshot of effective routes in Virtual hub 1 with Global Reach and routing preference ExpressRoute." lightbox="./media/routing-deep-dive/virtual-wan-routing-deep-dive-scenario-2-er-hub-1-expanded.png":::
The effective routes in hub 2 will be similar:
:::image type="content" source="./media/routing-deep-dive/virtual-wan-routing-deep-dive-scenario-2-er-hub-2.png" alt-text="Screenshot of effective routes in Virtual hub 2 with Global Reach and routing preference ExpressRoute." lightbox="./media/routing-deep-dive/virtual-wan-routing-deep-dive-scenario-2-er-hub-2-expanded.png":::
-The routing preference can be changed to VPN or AS-Path as explained in [Virtual hub routing preference (Preview)][virtual-wan-hrp]. For example, you can set the preference to VPN as shown in this image:
+The routing preference can be changed to VPN or AS-Path as explained in [Virtual hub routing preference][virtual-wan-hrp]. For example, you can set the preference to VPN as shown in this image:
:::image type="content" source="./media/routing-deep-dive/virtual-wan-routing-deep-dive-scenario-2-set-hrp-vpn.png" alt-text="Screenshot of how to set hub routing preference in Virtual WAN to V P N." lightbox="./media/routing-deep-dive/virtual-wan-routing-deep-dive-scenario-2-set-hrp-vpn.png":::
virtual-wan Scenario Route Through Nva https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/scenario-route-through-nva.md
With that, the static routes that we need in the Default table to send traffic t
| VNet 2 | Default | 10.2.0.0/16 -> eastusconn | | VNet 4 | Default | 10.4.0.0/16 -> weconn |
-Now virtual WAN knows which connection to send the packets to, but the connection needs to know what to do when receiving those packets: This is where the connection route tables are used. Here we'll use the shorter prefixes (/24 instead of the longer /16), to make sure that these routes have preference over routes that are imported from the NVA VNets (VNet 2 and VNet 4):
+Now, these static routes will be advertised to your on-premises branches, and the Virtual WAN hub will know which VNet connection to forward traffic to. However, the VNet connection needs to know what to do when receiving this traffic: this is where the connection route tables are used. Here we use the more specific /24 prefixes (rather than the broader /16), to make sure that these routes take precedence over routes that are imported from the NVA VNets (VNet 2 and VNet 4):
| Description | Connection | Static route | | -- | - | -- |
virtual-wan Virtual Wan Expressroute Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-expressroute-portal.md
Once the gateway is created, you can connect an [ExpressRoute circuit](../expres
### To connect the circuit to the hub gateway
-In the portal, go to the **Virtual hub -> Connectivity -> ExpressRoute** page. If you have access in your subscription to an ExpressRoute circuit, you'll see the circuit you want to use in the list of circuits. If you don't see any circuits, but have been provided with an authorization key and peer circuit URI, you can redeem and connect a circuit. See [To connect by redeeming an authorization key](#authkey).
+First, verify that your circuit's peering status is provisioned on the **ExpressRoute circuit -> Peerings** page in the Azure portal. Then, go to the **Virtual hub -> Connectivity -> ExpressRoute** page. If you have access in your subscription to an ExpressRoute circuit, you'll see the circuit you want to use in the list of circuits. If you don't see any circuits, but have been provided with an authorization key and peer circuit URI, you can redeem and connect a circuit. See [To connect by redeeming an authorization key](#authkey).
1. Select the circuit. 2. Select **Connect circuit(s)**.
vpn-gateway Nva Work Remotely Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/nva-work-remotely-support.md
Most major NVA partners have posted guidance around scaling for sudden, unexpect
[Cisco AnyConnect Implementation and Performance/Scaling Reference for COVID-19 Preparation](https://www.cisco.com/c/en/us/support/docs/security/anyconnect-secure-mobility-client/215331-anyconnect-implementation-and-performanc.html "Cisco AnyConnect Implementation and Performance/Scaling Reference for COVID-19 Preparation")
-[Citrix COVID-19 Response Support Center](https://www.citrix.com/support/covid-19-coronavirus.html "Citrix COVID-19 Response Support Center")
+[Citrix COVID-19 Response Support Center](https://www.citrix.com/content/dam/citrix/en_us/documents/ebook/back-to-the-office.pdf "Citrix COVID-19 Response Support Center")
[F5 Guidance to Address the Dramatic Increase in Remote Workers](https://www.f5.com/business-continuity "F5 Guidance to Address the Dramatic Increase in Remote Workers")
vpn-gateway Vpn Gateway Validate Throughput To Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-validate-throughput-to-vnet.md
# How to validate VPN throughput to a virtual network > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
A VPN gateway connection enables you to establish secure, cross-premises connectivity between your Virtual Network within Azure and your on-premises IT infrastructure.