Updates from: 08/06/2024 01:21:33
Service Microsoft Docs article Related commit history on GitHub Change details
ai-services Studio Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/studio-quickstart.md
In this quickstart, get started with the Azure AI Content Safety service using Content Safety Studio in your browser.
* An active Azure account. If you don't have one, you can [create one for free](https://azure.microsoft.com/free/cognitive-services/).
* A [Content Safety](https://aka.ms/acs-create) Azure resource.
-* Assign `Cognitive Services User` role to your account to ensure the studio experience. Go to [Azure portal](https://portal.azure.com/), navigate to your Content Safety resource or Azure AI Services resource, and select **Access Control** in the left navigation bar, then click **+ Add role assignment**, choose the `Cognitive Services User` role and select the member of your account that you need to assign this role to, then review and assign. It might take few minutes for the assignment to take effect.
+* Assign the `Cognitive Services User` role to your account. Go to the [Azure portal](https://portal.azure.com/), navigate to your Content Safety resource or Azure AI Services resource, and select **Access Control** in the left navigation bar. Then select **+ Add role assignment**, choose the `Cognitive Services User` role, and select the member of your account that needs this role. Review and assign. It might take a few minutes for the assignment to take effect.
* Sign in to [Content Safety Studio](https://contentsafety.cognitive.azure.com) with your Azure subscription and Content Safety resource.
+> [!IMPORTANT]
+> * You must assign the `Cognitive Services User` role to your Azure account to use the studio experience. Go to the [Azure portal](https://portal.azure.com/), navigate to your Content Safety resource or Azure AI Services resource, and select **Access Control** in the left navigation bar. Then select **+ Add role assignment**, choose the `Cognitive Services User` role, and select the member of your account that needs this role. Review and assign. It might take a few minutes for the assignment to take effect.
+ ## Analyze text content
+
+The [Moderate text content](https://contentsafety.cognitive.azure.com/text) page provides the capability for you to quickly try out text moderation.
ai-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/models.md
For more information on Provisioned deployments, see our [Provisioned guidance](
- eastus +
+### Global batch model availability
+
+### Region and model support
+
+The following models support global batch:
+
+| Model | Version | Input format |
+| --- | --- | --- |
+|`gpt-4o` | 2024-05-13 |text + image |
+|`gpt-4` | turbo-2024-04-09 | text |
+|`gpt-4` | 0613 | text |
+| `gpt-35-turbo` | 0125 | text |
+| `gpt-35-turbo` | 1106 | text |
+| `gpt-35-turbo` | 0613 | text |
+
+Global batch is currently supported in the following regions:
+
+- East US
+- West US
+- Sweden Central
+ ### GPT-4 and GPT-4 Turbo model availability
+
+#### Public cloud regions
These models can only be used with Embedding API requests.
| `gpt-35-turbo` (1106) | East US2 <br> North Central US <br> Sweden Central <br> Switzerland West | Input: 16,385<br> Output: 4,096 | Sep 2021|
| `gpt-35-turbo` (0125) | East US2 <br> North Central US <br> Sweden Central <br> Switzerland West | 16,385 | Sep 2021 |
| `gpt-4` (0613) <sup>**1**</sup> | North Central US <br> Sweden Central | 8192 | Sep 2021 |
+| `gpt-4o-mini` <sup>**1**</sup> (2024-07-18) | North Central US <br> Sweden Central | Input: 128,000 <br> Output: 16,384 <br> Training example context length: 64,536 | Oct 2023 |
-**<sup>1</sup>** GPT-4 fine-tuning is currently in public preview. See our [GPT-4 fine-tuning safety evaluation guidance](/azure/ai-services/openai/how-to/fine-tuning?tabs=turbo%2Cpython-new&pivots=programming-language-python#safety-evaluation-gpt-4-fine-tuningpublic-preview) for more information.
+**<sup>1</sup>** GPT-4 and GPT-4o mini fine-tuning is currently in public preview. See our [GPT-4 & GPT-4o mini fine-tuning safety evaluation guidance](/azure/ai-services/openai/how-to/fine-tuning?tabs=turbo%2Cpython-new&pivots=programming-language-python#safety-evaluation-gpt-4-fine-tuningpublic-preview) for more information.
### Whisper models
ai-services Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/batch.md
+
+ Title: 'How to use global batch processing with Azure OpenAI Service'
+
+description: Learn how to use global batch with Azure OpenAI Service
+Last updated: 08/04/2024
+recommendations: false
+zone_pivot_groups: openai-fine-tuning-batch
+
+# Getting started with Azure OpenAI global batch deployments (preview)
+
+The Azure OpenAI Batch API is designed to handle large-scale and high-volume processing tasks efficiently. Process asynchronous groups of requests with separate quota, with a 24-hour target turnaround, at [50% less cost than global standard](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/). With batch processing, rather than sending one request at a time, you send a large number of requests in a single file. Global batch requests have a separate enqueued token quota, avoiding any disruption of your online workloads.
+
+Key use cases include:
+
+* **Large-Scale Data Processing:** Quickly analyze extensive datasets in parallel.
+
+* **Content Generation:** Create large volumes of text, such as product descriptions or articles.
+
+* **Document Review and Summarization:** Automate the review and summarization of lengthy documents.
+
+* **Customer Support Automation:** Handle numerous queries simultaneously for faster responses.
+
+* **Data Extraction and Analysis:** Extract and analyze information from vast amounts of unstructured data.
+
+* **Natural Language Processing (NLP) Tasks:** Perform tasks like sentiment analysis or translation on large datasets.
+
+* **Marketing and Personalization:** Generate personalized content and recommendations at scale.
+
+> [!IMPORTANT]
+> We aim to process batch requests within 24 hours; we do not expire the jobs that take longer. You can [cancel](#cancel-batch) the job anytime. When you cancel the job, any remaining work is cancelled and any already completed work is returned. You will be charged for any completed work.
+>
+> Data stored at rest remains in the designated Azure geography, while data may be processed for inferencing in any Azure OpenAI location. [Learn more about data residency](https://azure.microsoft.com/explore/global-infrastructure/data-residency/).
+
+## Global batch support
+
+### Region and model support
+
+Global batch is currently supported in the following regions:
+
+- East US
+- West US
+- Sweden Central
+
+The following models support global batch:
+
+| Model | Version | Supported |
+| --- | --- | --- |
+|`gpt-4o` | 2024-05-13 |Yes (text + vision) |
+|`gpt-4` | turbo-2024-04-09 | Yes (text only) |
+|`gpt-4` | 0613 | Yes |
+| `gpt-35-turbo` | 0125 | Yes |
+| `gpt-35-turbo` | 1106 | Yes |
+| `gpt-35-turbo` | 0613 | Yes |
+
+Refer to the [models page](../concepts/models.md) for the most up-to-date information on regions/models where global batch is currently supported.
+
+### API Versions
+
+- `2024-07-01-preview`
+
+### Not supported
+
+The following aren't currently supported:
+
+- Integration with the Assistants API.
+- Integration with Azure OpenAI On Your Data feature.
+
+### Global batch deployment
+
+In the Studio UI, the deployment type appears as `Global-Batch`.
++
+> [!TIP]
+> Each line of your input file for batch processing has a `model` attribute that requires a global batch **deployment name**. For a given input file, all `model` values must reference the same deployment name. This differs from OpenAI, where the concept of model deployments doesn't exist.
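+
+The input file format isn't shown in this digest. As a hedged sketch (the `custom_id` values and the `gpt-4o-batch` deployment name are illustrative placeholders), each line of the JSONL input file is one self-contained request whose `model` property names your global batch deployment:
+
+```json
+{"custom_id": "task-0", "method": "POST", "url": "/chat/completions", "body": {"model": "gpt-4o-batch", "messages": [{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Summarize the benefits of batch processing."}]}}
+{"custom_id": "task-1", "method": "POST", "url": "/chat/completions", "body": {"model": "gpt-4o-batch", "messages": [{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "List three use cases for offline scoring."}]}}
+```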
+
+## Batch object
+
+|Property | Type | Definition|
+| --- | --- | --- |
+| `id` | string | The ID of the batch. |
+| `object` | string| `batch` |
+| `endpoint` | string | The API endpoint used by the batch |
+| `errors` | object | An object containing details about any failure of the batch. See the [Troubleshooting](#troubleshooting) section. |
+| `input_file_id` | string | The ID of the input file for the batch |
+| `completion_window` | string | The time frame within which the batch should be processed |
+| `status` | string | The current status of the batch. Possible values: `validating`, `failed`, `in_progress`, `finalizing`, `completed`, `expired`, `cancelling`, `cancelled`. |
+| `output_file_id` | string |The ID of the file containing the outputs of successfully executed requests. |
+| `error_file_id` | string | The ID of the file containing the outputs of requests with errors. |
+| `created_at` | integer | A timestamp when this batch was created (in unix epochs). |
+| `in_progress_at` | integer | A timestamp when this batch started progressing (in unix epochs). |
+| `expires_at` | integer | A timestamp when this batch will expire (in unix epochs). |
+| `finalizing_at` | integer | A timestamp when this batch started finalizing (in unix epochs). |
+| `completed_at` | integer | A timestamp when this batch was completed (in unix epochs). |
+| `failed_at` | integer | A timestamp when this batch failed (in unix epochs) |
+| `expired_at` | integer | A timestamp when this batch expired (in unix epochs).|
+| `cancelling_at` | integer | A timestamp when this batch started `cancelling` (in unix epochs). |
+| `cancelled_at` | integer | A timestamp when this batch was `cancelled` (in unix epochs). |
+| `request_counts` | object | Object structure:<br><br> `total` *integer* <br> The total number of requests in the batch. <br>`completed` *integer* <br> The number of requests in the batch that have been completed successfully. <br> `failed` *integer* <br> The number of requests in the batch that have failed.
+| `metadata` | map | A set of key-value pairs that can be attached to the batch. This property can be useful for storing additional information about the batch in a structured format. |
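+
+The batch object's lifecycle can be driven programmatically. The following is a minimal Python sketch, assuming the `openai` package (v1.x); the endpoint, API key, and input file name are placeholders rather than the article's own sample values:
+
+```python
+# Sketch: upload a batch input file, create a batch job, and poll it to completion.
+import time
+from openai import AzureOpenAI
+
+client = AzureOpenAI(
+    azure_endpoint="https://YOUR-RESOURCE-NAME.openai.azure.com/",
+    api_key="YOUR-API-KEY",
+    api_version="2024-07-01-preview",  # global batch preview API version
+)
+
+# Upload the JSONL input file with purpose="batch".
+batch_file = client.files.create(file=open("input.jsonl", "rb"), purpose="batch")
+
+# Wait for the uploaded file to finish processing before creating the batch.
+while client.files.retrieve(batch_file.id).status not in ("processed", "error"):
+    time.sleep(5)
+
+# Create the batch job against the uploaded file.
+batch = client.batches.create(
+    input_file_id=batch_file.id,
+    endpoint="/chat/completions",
+    completion_window="24h",
+)
+
+# Poll until the job reaches a terminal status.
+while batch.status not in ("completed", "failed", "expired", "cancelled"):
+    time.sleep(60)
+    batch = client.batches.retrieve(batch.id)
+    print(f"status: {batch.status}, request counts: {batch.request_counts}")
+
+# Download the output of successfully executed requests, if any.
+if batch.output_file_id:
+    print(client.files.content(batch.output_file_id).text)
+```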
+
+## Frequently asked questions (FAQ)
+
+### Can images be used with the batch API?
+
+This capability is limited to certain multi-modal models. Currently, only GPT-4o supports images as part of batch requests. Images can be provided as input either via [image url or a base64 encoded representation of the image](#input-format). Images for batch are currently not supported with GPT-4 Turbo.
+
+### Can I use the batch API with fine-tuned models?
+
+This is currently not supported.
+
+### Can I use the batch API for embeddings models?
+
+This is currently not supported.
+
+### Does content filtering work with Global Batch deployment?
+
+Yes. Similar to other deployment types, you can create content filters and associate them with the Global Batch deployment type.
+
+### Can I request additional quota?
+
+Yes, from the quota page in the Studio UI. Default quota allocation can be found in the [quota and limits article](../quotas-limits.md#global-batch-quota).
+
+### What happens if the API doesn't complete my request within the 24 hour time frame?
+
+We aim to process these requests within 24 hours; we don't expire the jobs that take longer. You can cancel the job anytime. When you cancel the job, any remaining work is cancelled and any already completed work is returned. You'll be charged for any completed work.
+
+### How many requests can I queue using batch?
+
+There's no fixed limit on the number of requests you can batch. However, the total is bounded by your enqueued token quota, which is the maximum number of input tokens you can enqueue at one time.
+
+Once your batch request is completed, your batch rate limit is reset, as your input tokens are cleared. The limit depends on the number of global requests in the queue. If the Batch API queue processes your batches quickly, your batch rate limit is reset more quickly.
+
+## Troubleshooting
+
+A job is successful when `status` is `Completed`. Successful jobs still generate an `error_file_id`, but it's associated with an empty file with zero bytes.
+
+When a job failure occurs, you'll find details about the failure in the `errors` property:
+
+```json
+"value": [
+ {
+ "cancelled_at": null,
+ "cancelling_at": null,
+ "completed_at": "2024-06-27T06:50:01.6603753+00:00",
+ "completion_window": null,
+ "created_at": "2024-06-27T06:37:07.3746615+00:00",
+ "error_file_id": "file-f13a58f6-57c7-44d6-8ceb-b89682588072",
+ "expired_at": null,
+ "expires_at": "2024-06-28T06:37:07.3163459+00:00",
+ "failed_at": null,
+ "finalizing_at": "2024-06-27T06:49:59.1994732+00:00",
+ "id": "batch_50fa47a0-ef19-43e5-9577-a4679b92faff",
+ "in_progress_at": "2024-06-27T06:39:57.455977+00:00",
+ "input_file_id": "file-42147e78ea42488682f4fd1d73028e72",
+ "errors": {
+ "object": ΓÇ£listΓÇ¥,
+ "data": [
+ {
+ ΓÇ£codeΓÇ¥: ΓÇ£empty_fileΓÇ¥,
+ ΓÇ£messageΓÇ¥: ΓÇ£The input file is empty. Please ensure that the batch contains at least one request.ΓÇ¥
+ }
+ ]
+ },
+ "metadata": null,
+ "object": "batch",
+ "output_file_id": "file-22d970b7-376e-4223-a307-5bb081ea24d7",
+ "request_counts": {
+ "total": 10,
+ "completed": null,
+ "failed": null
+ },
+ "status": "Failed"
+ }
+]
+```
+
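+To inspect per-request failures, you can download the error file by its ID. A hedged continuation of the Python sketch above (the `client` object is assumed; the file ID is the one reported in the batch object's `error_file_id` property):
+
+```python
+# Sketch: download and print the error file for a failed or partially failed batch.
+error_file_id = "file-f13a58f6-57c7-44d6-8ceb-b89682588072"  # from the batch object
+content = client.files.content(error_file_id)
+for line in content.text.splitlines():
+    print(line)  # each line is a JSON record describing one failed request
+```
+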
+### Error codes
+
+|Error code | Definition|
+|||
+|`invalid_json_line`| A line (or multiple lines) in your input file couldn't be parsed as valid JSON.<br><br>Please check for typos, proper opening and closing brackets, and quotes per the JSON standard, and resubmit the request.|
+| `too_many_tasks` |The number of requests in the input file exceeds the maximum allowed value of 100,000.<br><br>Please ensure your total requests are under 100,000 and resubmit the job.|
+| `url_mismatch` | Either a row in your input file has a URL that doesn't match the rest of the rows, or the URL specified in the input file doesn't match the expected endpoint URL. <br><br>Please ensure all request URLs are the same, and that they match the endpoint URL associated with your Azure OpenAI deployment.|
+|`model_not_found`|The Azure OpenAI model deployment name that was specified in the `model` property of the input file wasn't found.<br><br> Please ensure this name points to a valid Azure OpenAI model deployment.|
+| `duplicate_custom_id` | The custom ID for this request is a duplicate of the custom ID in another request.<br><br>Please check your input file to ensure that the custom ID parameter is unique for each request in the batch. |
+|`empty_batch` | The input file is empty.<br><br>Please ensure that the batch contains at least one request.|
+|`model_mismatch`| The Azure OpenAI model deployment name that was specified in the `model` property of this request in the input file doesn't match the rest of the file.<br><br>Please ensure that all requests in the batch point to the same Azure OpenAI model deployment in the `model` property of the request.|
+|`invalid_request`| The schema of the input line is invalid or the deployment SKU is invalid. <br><br>Please ensure the properties of the request in your input file match the expected input properties, and that the Azure OpenAI deployment SKU is `globalbatch` for batch API requests.|
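+
+Several of these errors can be caught before you upload anything. The following local pre-flight check is a hedged sketch (not part of the original article) that mirrors the `invalid_json_line`, `too_many_tasks`, `duplicate_custom_id`, `empty_batch`, and `model_mismatch` checks:
+
+```python
+# Sketch: validate a batch input JSONL file locally before uploading it.
+import json
+import sys
+
+def validate_batch_file(path, max_tasks=100_000):
+    problems = []
+    custom_ids = set()
+    models = set()
+    total = 0
+    with open(path, encoding="utf-8") as f:
+        for n, line in enumerate(f, start=1):
+            total += 1
+            try:
+                request = json.loads(line)  # catches invalid_json_line
+            except json.JSONDecodeError as exc:
+                problems.append(f"line {n}: invalid JSON ({exc})")
+                continue
+            cid = request.get("custom_id")
+            if cid in custom_ids:  # catches duplicate_custom_id
+                problems.append(f"line {n}: duplicate custom_id {cid!r}")
+            custom_ids.add(cid)
+            models.add(request.get("body", {}).get("model"))  # for model_mismatch
+    if total == 0:
+        problems.append("file contains no requests (empty_batch)")
+    if total > max_tasks:
+        problems.append(f"{total} requests exceed the {max_tasks} limit (too_many_tasks)")
+    if len(models) > 1:
+        problems.append(f"multiple deployment names found: {models} (model_mismatch)")
+    return problems
+
+if __name__ == "__main__":
+    for issue in validate_batch_file(sys.argv[1]):
+        print(issue)
+```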
+
+### Known issues
+
+- Resources deployed with the Azure CLI won't work out of the box with Azure OpenAI global batch. This is due to an issue where resources deployed using this method have endpoint subdomains that don't follow the `https://your-resource-name.openai.azure.com` pattern. A workaround is to deploy a new Azure OpenAI resource using one of the other common deployment methods, which properly handle the subdomain setup as part of the deployment process.
+
+## See also
+
+* Learn more about Azure OpenAI [deployment types](./deployment-types.md)
+* Learn more about Azure OpenAI [quotas and limits](../quotas-limits.md)
ai-services Deployment Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/deployment-types.md
Previously updated : 07/01/2024 Last updated : 07/11/2024
Our global deployments will be the first location for all new models and feature
Azure OpenAI offers three types of deployments. These provide a varied level of capabilities that provide trade-offs on: throughput, SLAs, and price. Below is a summary of the options followed by a deeper description of each.
-| **Offering** | **Global-Standard** | **Standard** | **Provisioned** |
-||:|:|:|
-| **Best suited for** | Applications that donΓÇÖt require data residency. Recommended starting place for customers. | For customers with data residency requirements. Optimized for low to medium volume. | Real-time scoring for large consistent volume. Includes the highest commitments and limits.|
-| **How it works** | Traffic may be routed anywhere in the world | | |
-| **Getting started** | [Model deployment](./create-resource.md) | [Model deployment](./create-resource.md) | [Provisioned onboarding](./provisioned-throughput-onboarding.md) |
-| **Cost** | [Global deployment pricing](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) | [Regional pricing](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) | May experience cost savings for consistent usage |
-| **What you get** | Easy access to all new models with highest default pay-per-call limits.<br><br> Customers with high volume usage may see higher latency variability | Easy access with [SLA on availability](https://azure.microsoft.com/support/legal/sl#estimate-provisioned-throughput-and-cost) |
-| **What you don’t get** |❌Data processing guarantee<br> <br> Data might be processed outside of the resource's Azure geography, but data storage remains in its Azure geography. [Learn more about data residency](https://azure.microsoft.com/explore/global-infrastructure/data-residency/) | ❌High volume w/consistent low latency | ❌Pay-per-call flexibility |
-| **Per-call Latency** | Optimized for real-time calling & low to medium volume usage. Customers with high volume usage may see higher latency variability. Threshold set per model | Optimized for real-time calling & low to medium volume usage. Customers with high volume usage may see higher latency variability. Threshold set per model | Optimized for real-time. |
-| **Sku Name in code** | `GlobalStandard` | `Standard` | `ProvisionedManaged` |
-| **Billing model** | Pay-per-token | Pay-per-token | Monthly Commitments |
+| **Offering** | **Global-Batch** | **Global-Standard** | **Standard** | **Provisioned** |
+||:|:|:|:|
+| **Best suited for** | Offline scoring <br><br> Workloads that aren't latency sensitive and can be completed in hours.<br><br> For use cases that don't have data processing residency requirements.| Recommended starting place for customers. <br><br>Global-Standard has a higher default quota and a larger number of models available than Standard. <br><br> For production applications that don't have data processing residency requirements. | For customers with data residency requirements. Optimized for low to medium volume. | Real-time scoring for large consistent volume. Includes the highest commitments and limits.|
+| **How it works** | Offline processing via files |Traffic may be routed anywhere in the world | | |
+| **Getting started** | [Global-Batch](./batch.md) | [Model deployment](./create-resource.md) | [Model deployment](./create-resource.md) | [Provisioned onboarding](./provisioned-throughput-onboarding.md) |
+| **Cost** | [Least expensive option](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) <br> 50% less cost compared to Global Standard prices. Access to all new models with larger quota allocations. | [Global deployment pricing](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) | [Regional pricing](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) | May experience cost savings for consistent usage |
+| **What you get** |[Significant discount compared to Global Standard](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) | Easy access to all new models with highest default pay-per-call limits.<br><br> Customers with high volume usage may see higher latency variability | Easy access with [SLA on availability](https://azure.microsoft.com/support/legal/sl#estimate-provisioned-throughput-and-cost) |
+| **What you don’t get** |❌Real-time call performance <br><br>❌Data processing guarantee<br> <br> Data stored at rest remains in the designated Azure geography, while data may be processed for inferencing in any Azure OpenAI location. [Learn more about data residency](https://azure.microsoft.com/explore/global-infrastructure/data-residency/) |❌Data processing guarantee<br> <br> Data stored at rest remains in the designated Azure geography, while data may be processed for inferencing in any Azure OpenAI location. [Learn more about data residency](https://azure.microsoft.com/explore/global-infrastructure/data-residency/) | ❌High volume w/consistent low latency | ❌Pay-per-call flexibility |
+| **Per-call Latency** | Not Applicable (file based async process) | Optimized for real-time calling & low to medium volume usage. Customers with high volume usage may see higher latency variability. Threshold set per model | Optimized for real-time calling & low to medium volume usage. Customers with high volume usage may see higher latency variability. Threshold set per model | Optimized for real-time. |
+| **Sku Name in code** | `GlobalBatch` | `GlobalStandard` | `Standard` | `ProvisionedManaged` |
+| **Billing model** | Pay-per-token |Pay-per-token | Pay-per-token | Monthly Commitments |
## Provisioned
Standard deployments are optimized for low to medium volume workloads with high
## Global standard

> [!IMPORTANT]
-> Data might be processed outside of the resource's Azure geography, but data storage remains in its Azure geography. [Learn more about data residency](https://azure.microsoft.com/explore/global-infrastructure/data-residency/).
+> Data stored at rest remains in the designated Azure geography, while data may be processed for inferencing in any Azure OpenAI location. [Learn more about data residency](https://azure.microsoft.com/explore/global-infrastructure/data-residency/).
Global deployments are available in the same Azure OpenAI resources as non-global deployment types but allow you to leverage Azure's global infrastructure to dynamically route traffic to the data center with best availability for each request. Global standard provides the highest default quota and eliminates the need to load balance across multiple resources. Customers with high consistent volume may experience greater latency variability. The threshold is set per model. See the [quotas page to learn more](./quota.md). For applications that require the lower latency variance at large workload usage, we recommend purchasing provisioned throughput.
+## Global batch
+
+> [!IMPORTANT]
+> Data stored at rest remains in the designated Azure geography, while data may be processed for inferencing in any Azure OpenAI location. [Learn more about data residency](https://azure.microsoft.com/explore/global-infrastructure/data-residency/).
+
+[Global batch](./batch.md) is designed to handle large-scale and high-volume processing tasks efficiently. Process asynchronous groups of requests with separate quota, with a 24-hour target turnaround, at [50% less cost than global standard](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/). With batch processing, rather than sending one request at a time, you send a large number of requests in a single file. Global batch requests have a separate enqueued token quota, avoiding any disruption of your online workloads.
+
+Key use cases include:
+
+* **Large-Scale Data Processing:** Quickly analyze extensive datasets in parallel.
+
+* **Content Generation:** Create large volumes of text, such as product descriptions or articles.
+
+* **Document Review and Summarization:** Automate the review and summarization of lengthy documents.
+
+* **Customer Support Automation:** Handle numerous queries simultaneously for faster responses.
+
+* **Data Extraction and Analysis:** Extract and analyze information from vast amounts of unstructured data.
+
+* **Natural Language Processing (NLP) Tasks:** Perform tasks like sentiment analysis or translation on large datasets.
+
+* **Marketing and Personalization:** Generate personalized content and recommendations at scale.
+ ### How to disable access to global deployments in your subscription
+
+Azure Policy helps to enforce organizational standards and to assess compliance at scale. Through its compliance dashboard, it provides an aggregated view to evaluate the overall state of the environment, with the ability to drill down to per-resource, per-policy granularity. It also helps bring your resources to compliance through bulk remediation for existing resources and automatic remediation for new resources. [Learn more about Azure Policy and specific built-in controls for AI services](/azure/ai-services/security-controls-policy).
You can use the following policy to disable access to Azure OpenAI global standard deployments.
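
The policy definition itself is truncated in this digest. For illustration only, a deny policy along the following lines (a sketch; verify the policy alias against the current built-in definition before relying on it) would block creation of `GlobalStandard` deployments:

```json
{
    "mode": "All",
    "policyRule": {
        "if": {
            "allOf": [
                {
                    "field": "type",
                    "equals": "Microsoft.CognitiveServices/accounts/deployments"
                },
                {
                    "field": "Microsoft.CognitiveServices/accounts/deployments/sku.name",
                    "equals": "GlobalStandard"
                }
            ]
        },
        "then": {
            "effect": "deny"
        }
    }
}
```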
To learn about creating resources and deploying models refer to the [resource creation guide](./create-resource.md).
+## Retrieve batch job output file
+
+## See also
+
+- [Quotas & limits](./quota.md)
ai-services Fine Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/fine-tuning.md
Previously updated : 07/25/2024 Last updated : 08/02/2024 zone_pivot_groups: openai-fine-tuning-new
ai-services Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/quotas-limits.md
The following sections provide you with a quick guide to the default quotas and
[!INCLUDE [Quota](./includes/model-matrix/quota.md)]

+ ## gpt-4o rate limits

`gpt-4o` and `gpt-4o-mini` have rate limit tiers with higher limits for certain customer types.
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whats-new.md
- ignite-2023 - references_regions Previously updated : 07/31/2024 Last updated : 08/05/2024 recommendations: false
recommendations: false
This article provides a summary of the latest releases and major documentation updates for Azure OpenAI.
+## August 2024
+
+### Global batch deployments are now available
+
+The Azure OpenAI Batch API is designed to handle large-scale and high-volume processing tasks efficiently. Process asynchronous groups of requests with separate quota, with a 24-hour target turnaround, at [50% less cost than global standard](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/). With batch processing, rather than sending one request at a time, you send a large number of requests in a single file. Global batch requests have a separate enqueued token quota, avoiding any disruption of your online workloads.
+
+Key use cases include:
+
+* **Large-Scale Data Processing:** Quickly analyze extensive datasets in parallel.
+
+* **Content Generation:** Create large volumes of text, such as product descriptions or articles.
+
+* **Document Review and Summarization:** Automate the review and summarization of lengthy documents.
+
+* **Customer Support Automation:** Handle numerous queries simultaneously for faster responses.
+
+* **Data Extraction and Analysis:** Extract and analyze information from vast amounts of unstructured data.
+
+* **Natural Language Processing (NLP) Tasks:** Perform tasks like sentiment analysis or translation on large datasets.
+
+* **Marketing and Personalization:** Generate personalized content and recommendations at scale.
+
+For more information, see [Getting started with global batch deployments](./how-to/batch.md).
+ ## July 2024
+### GPT-4o mini is now available for fine-tuning
+
+GPT-4o mini fine-tuning is [now available in public preview](./concepts/models.md#fine-tuning-models) in Sweden Central and in North Central US.
+ ### Assistants File Search tool is now billed
+
+The [file search](./how-to/file-search.md) tool for Assistants now has additional charges for usage. See the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) for more information.
ai-studio Concept Synthetic Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/concept-synthetic-data.md
# Synthetic data generation in Azure AI Studio
-In this article
- - [Synthetic data generation](#synthetic-data-generation)
- - [Next Steps](#next-steps)
+In Azure AI Studio, you can use synthetic data generation to efficiently produce predictions for your datasets. In this article, you're introduced to the concept of synthetic data generation and how it can be used in machine learning.
-In Azure AI Studio, you can leverage synthetic data generation to efficiently produce predictions for your datasets.
## Synthetic data generation

Synthetic data generation involves creating artificial data that mimics the statistical properties of real-world data. This data is generated using algorithms and machine learning techniques, and it can be used in various ways, such as computer simulations or by modeling real-world events.
-In machine learning, synthetic data is particularly valuable for several reasons:
+In machine learning, synthetic data is valuable for several reasons:
**Data Augmentation:** It helps expand the size of training datasets, which is crucial for training robust machine learning models. This is especially useful when real-world data is scarce or expensive to obtain.
You can use the sample notebook available at this [link](https://aka.ms/meta-lla
- [What is Azure AI Studio?](../what-is-ai-studio.md)
- [Learn more about deploying Meta Llama models](../how-to/deploy-models-llama.md)
-- [Azure AI FAQ article](../faq.yml)
+- [Azure AI FAQ article](../faq.yml)
ai-studio Deploy Models Serverless Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-serverless-availability.md
Title: Region availability for models in Serverless API endpoints
-description: Learn about the regions where each model is available for deployment in serverless API endpoints.
+description: Learn about the regions where each model is available for deployment in serverless API endpoints via Azure AI Studio.
- references_regions
-# Region availability for models in serverless API endpoints | Azure AI Studio
+# Region availability for models in serverless API endpoints
In this article, you learn about which regions are available for each of the models supporting serverless API endpoint deployments.
analysis-services Analysis Services Async Refresh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-async-refresh.md
https://westus.asazure.windows.net/servers/myserver/models/AdventureWorks/refres
All calls must be authenticated with a valid Microsoft Entra ID (OAuth 2) token in the Authorization header and must meet the following requirements:

- The token must be either a user token or an application service principal.
-- The token must have the audience set to exactly `https://*.asazure.windows.net`. Note that `*` isn't a placeholder or a wildcard, and the audience must have the `*` character as the subdomain. Specifying an invalid audience results in authentication failure.
+- The token must have the audience set to exactly `https://*.asazure.windows.net`. Note that `*` isn't a placeholder or a wildcard, and the audience must have the `*` character as the subdomain. Custom audiences, such as `https://customersubdomain.asazure.windows.net`, aren't supported. Specifying an invalid audience results in authentication failure.
- The user or application must have sufficient permissions on the server or model to make the requested call. The permission level is determined by roles within the model or the admin group on the server.

> [!IMPORTANT]
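
To illustrate the token requirements above, here's a hedged Python sketch using MSAL (the client ID and tenant are placeholders; the server URL reuses the example from this article):

```python
# Sketch: acquire a Microsoft Entra ID token whose audience is exactly
# https://*.asazure.windows.net, then call the asynchronous refresh REST API.
# Assumes the msal and requests packages; client ID and tenant are placeholders.
import msal
import requests

app = msal.PublicClientApplication(
    "YOUR-CLIENT-ID",
    authority="https://login.microsoftonline.com/YOUR-TENANT-ID",
)

# The * is part of the audience URI itself, not a wildcard to substitute.
result = app.acquire_token_interactive(scopes=["https://*.asazure.windows.net/.default"])

response = requests.post(
    "https://westus.asazure.windows.net/servers/myserver/models/AdventureWorks/refreshes",
    headers={"Authorization": f"Bearer {result['access_token']}"},
    json={"Type": "Full", "CommitMode": "transactional"},
)
print(response.status_code, response.text)
```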
analysis-services Analysis Services Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-logging.md
This article describes how to set up, view, and manage [Azure Monitor resource l
You can select **Engine**, **Service**, and **Metrics** log categories. For a listing of what's logged for each category, see [Supported resource logs for Microsoft.AnalysisServices/servers](monitor-analysis-services-reference.md#supported-resource-logs-for-microsoftanalysisservicesservers).
-## Set up diagnostics logging
+## Set up diagnostic settings
-### Azure portal
-
-1. In [Azure portal](https://portal.azure.com) > server, click **Diagnostic settings** in the left navigation, and then click **Turn on diagnostics**.
-
- ![Screenshot showing Turn on diagnostics in the Azure portal.](./media/analysis-services-logging/aas-logging-turn-on-diagnostics.png)
-
-2. In **Diagnostic settings**, specify the following options:
-
- * **Name**. Enter a name for the logs to create.
-
- * **Archive to a storage account**. To use this option, you need an existing storage account to connect to. See [Create a storage account](/azure/storage/common/storage-account-create). Follow the instructions to create a Resource Manager, general-purpose account, then select your storage account by returning to this page in the portal. It may take a few minutes for newly created storage accounts to appear in the drop-down menu.
- * **Stream to an event hub**. To use this option, you need an existing Event Hub namespace and event hub to connect to. To learn more, see [Create an Event Hubs namespace and an event hub using the Azure portal](/azure/event-hubs/event-hubs-create). Then return to this page in the portal to select the Event Hub namespace and policy name.
- * **Send to Azure Monitor (Log Analytics workspace)**. To use this option, either use an existing workspace or [create a new workspace](/azure/azure-monitor/logs/quick-create-workspace) resource in the portal. For more information on viewing your logs, see [View logs in Log Analytics workspace](#view-logs-in-log-analytics-workspace) in this article.
-
- * **Engine**. Select this option to log xEvents. If you're archiving to a storage account, you can select the retention period for the resource logs. Logs are autodeleted after the retention period expires.
- * **Service**. Select this option to log service level events. If you are archiving to a storage account, you can select the retention period for the resource logs. Logs are autodeleted after the retention period expires.
- * **Metrics**. Select this option to store verbose data in [Metrics](analysis-services-monitor.md#server-metrics). If you are archiving to a storage account, you can select the retention period for the resource logs. Logs are autodeleted after the retention period expires.
-
-3. Click **Save**.
-
- If you receive an error that says "Failed to update diagnostics for \<workspace name>. The subscription \<subscription id> is not registered to use microsoft.insights." follow the [Troubleshoot Azure Diagnostics](../azure-monitor/essentials/resource-logs.md) instructions to register the account, then retry this procedure.
-
- If you want to change how your resource logs are saved at any point in the future, you can return to this page to modify settings.
-
-### PowerShell
-
-Here are the basic commands to get you going. If you want step-by-step help on setting up logging to a storage account by using PowerShell, see the tutorial later in this article.
-
-To enable metrics and resource logging by using PowerShell, use the following commands:
--- To enable storage of resource logs in a storage account, use this command:-
- ```powershell
- Set-AzDiagnosticSetting -ResourceId [your resource id] -StorageAccountId [your storage account id] -Enabled $true
- ```
-
- The storage account ID is the resource ID for the storage account where you want to send the logs.
--- To enable streaming of resource logs to an event hub, use this command:-
- ```powershell
- Set-AzDiagnosticSetting -ResourceId [your resource id] -ServiceBusRuleId [your service bus rule id] -Enabled $true
- ```
-
- The Azure Service Bus rule ID is a string with this format:
-
- ```powershell
- {service bus resource ID}/authorizationrules/{key name}
- ```
--- To enable sending resource logs to a Log Analytics workspace, use this command:-
- ```powershell
- Set-AzDiagnosticSetting -ResourceId [your resource id] -WorkspaceId [resource id of the log analytics workspace] -Enabled $true
- ```
--- You can obtain the resource ID of your Log Analytics workspace by using the following command:-
- ```powershell
- (Get-AzOperationalInsightsWorkspace).ResourceId
- ```
-
-You can combine these parameters to enable multiple output options.
-
-### REST API
-
-Learn how to [change diagnostics settings by using the Azure Monitor REST API](/rest/api/monitor/).
-
-### Resource Manager template
-
-Learn how to [enable diagnostics settings at resource creation by using a Resource Manager template](../azure-monitor/essentials/resource-manager-diagnostic-settings.md).
+To learn how to set up diagnostic settings using the Azure portal, Azure CLI, PowerShell, or Azure Resource Manager, see [Create diagnostic settings in Azure Monitor](/azure/azure-monitor/essentials/create-diagnostic-settings).
## Manage your logs
analysis-services Monitor Analysis Services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/monitor-analysis-services-reference.md
See [Monitor Azure Analysis Services](monitor-analysis-services.md) for details
### Supported metrics for Microsoft.AnalysisServices/servers The following table lists the metrics available for the Microsoft.AnalysisServices/servers resource type. [!INCLUDE [horz-monitor-ref-metrics-tableheader](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-metrics-tableheader.md)] [!INCLUDE [horz-monitor-ref-metrics-dimensions-intro](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-metrics-dimensions-intro.md)] [!INCLUDE [horz-monitor-ref-metrics-dimensions](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-metrics-dimensions.md)]
Analysis Services metrics have the dimension `ServerResourceType`.
[!INCLUDE [horz-monitor-ref-resource-logs](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-resource-logs.md)] ### Supported resource logs for Microsoft.AnalysisServices/servers When you set up logging for Analysis Services, you can select **Engine** or **Service** events to log.
analysis-services Monitor Analysis Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/monitor-analysis-services.md
For a list of available metrics for Analysis Services, see [Analysis Services mo
[!INCLUDE [horz-monitor-resource-logs](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-resource-logs.md)] - For the available resource log categories, associated Log Analytics tables, and the logs schemas for Analysis Services, see [Analysis Services monitoring data reference](monitor-analysis-services-reference.md#resource-logs).+ ## Analysis Services resource logs
+To learn how to set up diagnostics logging, see [Set up diagnostic logging](analysis-services-logging.md).
+ When you set up logging for Analysis Services, you can select **Engine** or **Service** events to log, or select **AllMetrics** to log metrics data. For more information, see [Supported resource logs for Microsoft.AnalysisServices/servers](monitor-analysis-services-reference.md#supported-resource-logs-for-microsoftanalysisservicesservers). [!INCLUDE [horz-monitor-activity-log](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-activity-log.md)]
api-center Use Vscode Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/use-vscode-extension.md
description: Build, discover, try, and consume APIs from your Azure API center u
Previously updated : 07/15/2024 Last updated : 08/01/2024 # Customer intent: As a developer, I want to use my Visual Studio Code environment to build, discover, try, and consume APIs in my organization's API center.
To build, discover, try, and consume APIs in your [API center](overview.md), you
* [Visual Studio Code](https://code.visualstudio.com/)
* [Azure API Center extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=apidev.azure-api-center)
+
+ > [!NOTE]
+ > Where noted, certain features are available only in the extension's pre-release version. When installing the extension from the [Visual Studio Code Marketplace](https://marketplace.visualstudio.com/items?itemName=apidev.azure-api-center&ssr=false#overview), you can choose to install the release version or a pre-release version. Switch between the two versions at any time by using the extension's **Manage** button context menu in the Extensions view.
The following Visual Studio Code extensions are optional and needed only for certain scenarios as indicated:
Visual Studio Code will open a diff view between the two API specifications. Any
## Generate OpenAPI specification file from API code
-Use the power of GitHub Copilot with the Azure API Center extension for Visual Studio Code to create an OpenAPI specification file from your API code. Right click on the API code, select **Copilot** from the options, and select **Generate API documentation**. This will create an OpenAPI specification file.
+Use the power of GitHub Copilot with the Azure API Center extension for Visual Studio Code to create an OpenAPI specification file from your API code. Right-click on the API code, select **Copilot** from the options, and select **Generate API documentation**. This will create an OpenAPI specification file.
+
+> [!NOTE]
+> This feature is available in the pre-release version of the API Center extension.
:::image type="content" source="media/use-vscode-extension/generate-api-documentation.gif" alt-text="Animation showing how to use GitHub Copilot to generate an OpenAPI spec from code." lightbox="media/use-vscode-extension/generate-api-documentation.gif":::
You can view the documentation for an API definition in your API center and try
> Depending on the API, you might need to provide authorization credentials or an API key to try the API.

> [!TIP]
- > You can also use the extension to generate API documentation in Markdown, a format that's easy to maintain and share with end users. Right-click on the definition, and select **Generate Markdown**.
+ > Using the pre-release version of the extension, you can generate API documentation in Markdown, a format that's easy to maintain and share with end users. Right-click on the definition, and select **Generate Markdown**.
## Generate HTTP file
api-management Api Management Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-capacity.md
This article explains what the **capacity** is and how it behaves. It shows how to access **capacity** metrics in the Azure portal and suggests when to consider scaling or upgrading your API Management instance.

> [!IMPORTANT]
> This article discusses how you can monitor and scale your Azure API Management instance based upon its capacity metric. However, it's equally important to understand what happens when an individual API Management instance has actually *reached* its capacity. Azure API Management will not apply service-level throttling to prevent a physical overload of the instances. When an instance reaches its physical capacity, it behaves similarly to any overloaded web server that's unable to process incoming requests: latency will increase, connections will get dropped, timeout errors will occur, and so on. This means that API clients should be prepared to deal with this possibility as they do with any other external service (for example, by applying retry policies).
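
As a hedged illustration of that last point (the endpoint URL and retry parameters are placeholders, not values from this article), a client-side retry with exponential backoff might look like this:

```python
# Sketch: exponential backoff for calls to an API Management endpoint that may be
# overloaded. Retries on 429s, 5xx responses, dropped connections, and timeouts.
import time
import requests

def call_with_retries(url, attempts=5):
    delay = 1.0
    for _ in range(attempts):
        try:
            response = requests.get(url, timeout=10)
            if response.status_code < 500 and response.status_code != 429:
                return response
        except requests.RequestException:
            pass  # dropped connections and timeouts fall through to a retry
        time.sleep(delay)
        delay *= 2  # back off exponentially between attempts
    raise RuntimeError(f"{url} still failing after {attempts} attempts")

response = call_with_retries("https://my-apim-instance.azure-api.net/echo/resource")
```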
api-management Api Management Gateways Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-gateways-overview.md
- build-2024- Previously updated : 05/05/2024+ Last updated : 07/11/2024
The API Management *gateway* (also called *data plane* or *runtime*) is the serv
API Management offers both managed and self-hosted gateways:
-* **Managed** - The managed gateway is the default gateway component that is deployed in Azure for every API Management instance in every service tier. With the managed gateway, all API traffic flows through Azure regardless of where backends implementing the APIs are hosted.
+* **Managed** - The managed gateway is the default gateway component that is deployed in Azure for every API Management instance in every service tier. A standalone managed gateway can also be associated with a [workspace](workspaces-overview.md) in an API Management instance. With the managed gateway, all API traffic flows through Azure regardless of where backends implementing the APIs are hosted.
> [!NOTE]
> Because of differences in the underlying service architecture, the gateways provided in the different API Management service tiers have some differences in capabilities. For details, see the section [Feature comparison: Managed versus self-hosted gateways](#feature-comparison-managed-versus-self-hosted-gateways).
The following tables compare features available in the following API Management
* **V2** - the managed gateway available in the Basic v2 and Standard v2 tiers
* **Consumption** - the managed gateway available in the Consumption tier
* **Self-hosted** - the optional self-hosted gateway available in select service tiers
+* **Workspace** - the managed gateway available in a [workspace](workspaces-overview.md) in select service tiers
> [!NOTE]
> * Some features of managed and self-hosted gateways are supported only in certain [service tiers](api-management-features.md) or with certain [deployment environments](self-hosted-gateway-overview.md#packaging) for self-hosted gateways.
The following tables compare features available in the following API Management
### Infrastructure
-| Feature support | Classic | V2 | Consumption | Self-hosted |
+| Feature support | Classic | V2 | Consumption | Self-hosted | Workspace |
| --- | --- | --- | --- | --- | --- |
-| [Custom domains](configure-custom-domain.md) | ✔️ | ✔️ | ✔️ | ✔️ |
-| [Built-in cache](api-management-howto-cache.md) | ✔️ | ✔️ | ❌ | ❌ |
-| [External Redis-compatible cache](api-management-howto-cache-external.md) | ✔️ | ✔️ |✔️ | ✔️ |
-| [Virtual network injection](virtual-network-concepts.md) | Developer, Premium | ❌ | ❌ | ✔️<sup>1,2</sup> |
-| [Inbound private endpoints](private-endpoint.md) | Developer, Basic, Standard, Premium | ❌ | ❌ | ❌ |
-| [Outbound virtual network integration](integrate-vnet-outbound.md) | ❌ | Standard V2 | ❌ | ❌ |
-| [Availability zones](zone-redundancy.md) | Premium | ❌ | ❌ | ✔️<sup>1</sup> |
-| [Multi-region deployment](api-management-howto-deploy-multi-region.md) | Premium | ❌ | ❌ | ✔️<sup>1</sup> |
-| [CA root certificates](api-management-howto-ca-certificates.md) for certificate validation | ✔️ | ✔️ | ❌ | ✔️<sup>3</sup> |
-| [Managed domain certificates](configure-custom-domain.md?tabs=managed#domain-certificate-options) | Developer, Basic, Standard, Premium | ❌ | ✔️ | ❌ |
-| [TLS settings](api-management-howto-manage-protocols-ciphers.md) | ✔️ | ✔️ | ✔️ | ✔️ |
-| **HTTP/2** (Client-to-gateway) | ✔️<sup>4</sup> | ✔️<sup>4</sup> |❌ | ✔️ |
-| **HTTP/2** (Gateway-to-backend) | ❌ | ❌ | ❌ | ✔️ |
-| API threat detection with [Defender for APIs](protect-with-defender-for-apis.md) | ✔️ | ✔️ | ❌ | ❌ |
+| [Custom domains](configure-custom-domain.md) | ✔️ | ✔️ | ✔️ | ✔️ | ❌ |
+| [Built-in cache](api-management-howto-cache.md) | ✔️ | ✔️ | ❌ | ❌ | ✔️ |
+| [External Redis-compatible cache](api-management-howto-cache-external.md) | ✔️ | ✔️ |✔️ | ✔️ | ❌ |
+| [Virtual network injection](virtual-network-concepts.md) | Developer, Premium | ❌ | ❌ | ✔️<sup>1,2</sup> | ✔️ |
+| [Inbound private endpoints](private-endpoint.md) | Developer, Basic, Standard, Premium | ❌ | ❌ | ❌ | ❌ |
+| [Outbound virtual network integration](integrate-vnet-outbound.md) | ❌ | Standard V2 | ❌ | ❌ | ✔️ |
+| [Availability zones](zone-redundancy.md) | Premium | ✔️<sup>3</sup> | ❌ | ✔️<sup>1</sup> | ✔️<sup>3</sup> |
+| [Multi-region deployment](api-management-howto-deploy-multi-region.md) | Premium | ❌ | ❌ | ✔️<sup>1</sup> | ❌ |
+| [CA root certificates](api-management-howto-ca-certificates.md) for certificate validation | ✔️ | ✔️ | ❌ | ✔️<sup>4</sup> | ❌ |
+| [Managed domain certificates](configure-custom-domain.md?tabs=managed#domain-certificate-options) | Developer, Basic, Standard, Premium | ❌ | ✔️ | ❌ | ❌ |
+| [TLS settings](api-management-howto-manage-protocols-ciphers.md) | ✔️ | ✔️ | ✔️ | ✔️ | ❌ |
+| **HTTP/2** (Client-to-gateway) | ✔️<sup>5</sup> | ✔️<sup>5</sup> |❌ | ✔️ | ❌ |
+| **HTTP/2** (Gateway-to-backend) | ❌ | ❌ | ❌ | ✔️ | ❌ |
+| API threat detection with [Defender for APIs](protect-with-defender-for-apis.md) | ✔️ | ✔️ | ❌ | ❌ | ❌ |
<sup>1</sup> Depends on how the gateway is deployed, but is the responsibility of the customer.<br/> <sup>2</sup> Connectivity to the self-hosted gateway v2 [configuration endpoint](self-hosted-gateway-overview.md#fqdn-dependencies) requires DNS resolution of the endpoint hostname.<br/>
-<sup>3</sup>CA root certificates for self-hosted gateway are managed separately per gateway<br/>
-<sup>4</sup> Client protocol needs to be enabled.
+<sup>3</sup> Two zones are enabled by default; not configurable.<br/>
+<sup>4</sup> CA root certificates for the self-hosted gateway are managed separately per gateway.<br/>
+<sup>5</sup> Client protocol needs to be enabled.
### Backend APIs
-| Feature support | Classic | V2 | Consumption | Self-hosted |
-| | | -- | -- | - |
-| [OpenAPI specification](import-api-from-oas.md) | ✔️ | ✔️ | ✔️ | ✔️ |
-| [WSDL specification](import-soap-api.md) | ✔️ | ✔️ | ✔️ | ✔️ |
-| WADL specification | ✔️ | ✔️ | ✔️ | ✔️ |
-| [Logic App](import-logic-app-as-api.md) | ✔️ | ✔️ | ✔️ |✔️ |
-| [App Service](import-app-service-as-api.md) | ✔️ | ✔️ | ✔️ | ✔️ |
-| [Function App](import-function-app-as-api.md) | ✔️ | ✔️ | ✔️ | ✔️ |
-| [Container App](import-container-app-with-oas.md) | ✔️ | ✔️ | ✔️ | ✔️ |
-| [Service Fabric](../service-fabric/service-fabric-api-management-overview.md) | Developer, Premium | ❌ |❌ | ❌ |
-| [Pass-through GraphQL](graphql-apis-overview.md) | ✔️ | ✔️ |✔️ | ✔️ |
-| [Synthetic GraphQL](graphql-apis-overview.md)| ✔️ | ✔️ | ✔️<sup>1</sup> | ✔️<sup>1</sup> |
-| [Pass-through WebSocket](websocket-api.md) | ✔️ | ✔️ | ❌ | ✔️ |
-| [Pass-through gRPC](grpc-api.md) | ❌ | ❌ | ❌ | ✔️ |
-| [OData](import-api-from-odata.md) | ✔️ | ✔️ | ✔️ | ✔️ |
-| [Azure OpenAI](azure-openai-api-from-specification.md) | ✔️ | ✔️ | ✔️ | ✔️ |
-| [Circuit breaker in backend](backends.md#circuit-breaker) | ✔️ | ✔️ | ❌ | ✔️ |
-| [Load-balanced backend pool](backends.md#load-balanced-pool) | ✔️ | ✔️ | ✔️ | ✔️ |
+| Feature support | Classic | V2 | Consumption | Self-hosted | Workspace |
+| --- | --- | --- | --- | --- | --- |
+| [OpenAPI specification](import-api-from-oas.md) | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| [WSDL specification](import-soap-api.md) | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| WADL specification | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| [Logic App](import-logic-app-as-api.md) | ✔️ | ✔️ | ✔️ |✔️ | ✔️ |
+| [App Service](import-app-service-as-api.md) | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| [Function App](import-function-app-as-api.md) | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| [Container App](import-container-app-with-oas.md) | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| [Service Fabric](../service-fabric/service-fabric-api-management-overview.md) | Developer, Premium | ❌ |❌ | ❌ | ❌ |
+| [Pass-through GraphQL](graphql-apis-overview.md) | ✔️ | ✔️ |✔️ | ✔️ | ✔️ |
+| [Synthetic GraphQL](graphql-apis-overview.md)| ✔️ | ✔️ | ✔️<sup>1</sup> | ✔️<sup>1</sup> | ❌ |
+| [Pass-through WebSocket](websocket-api.md) | ✔️ | ✔️ | ❌ | ✔️ | ❌ |
+| [Pass-through gRPC](grpc-api.md) | ❌ | ❌ | ❌ | ✔️ | ❌ |
+| [OData](import-api-from-odata.md) | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| [Azure OpenAI](azure-openai-api-from-specification.md) | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| [Circuit breaker in backend](backends.md#circuit-breaker) | ✔️ | ✔️ | ❌ | ✔️ | ✔️ |
+| [Load-balanced backend pool](backends.md#load-balanced-pool) | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
<sup>1</sup> Synthetic GraphQL subscriptions (preview) aren't supported.
The following tables compare features available in the following API Management
Managed and self-hosted gateways support all available [policies](api-management-policies.md) in policy definitions with the following exceptions.
-| Feature support | Classic | V2 | Consumption | Self-hosted<sup>1</sup> |
-| | | -- | -- | - |
-| [Dapr integration](api-management-policies.md#integration-and-external-communication) | ❌ | ❌ |❌ | ✔️ |
-| [GraphQL resolvers](api-management-policies.md#graphql-resolvers) and [GraphQL validation](api-management-policies.md#content-validation)| ✔️ | ✔️ |✔️ | ❌ |
-| [Get authorization context](get-authorization-context-policy.md) | ✔️ | ✔️ |✔️ | ❌ |
-| [Quota and rate limit](api-management-policies.md#rate-limiting-and-quotas) | ✔️ | ✔️<sup>2</sup> | ✔️<sup>3</sup> | ✔️<sup>4</sup> |
+| Feature support | Classic | V2 | Consumption | Self-hosted<sup>1</sup> | Workspace |
+| --- | --- | --- | --- | --- | --- |
+| [Dapr integration](api-management-policies.md#integration-and-external-communication) | ❌ | ❌ |❌ | ✔️ | ❌ |
+| [GraphQL resolvers](api-management-policies.md#graphql-resolvers) and [GraphQL validation](api-management-policies.md#content-validation)| ✔️ | ✔️ |✔️ | ❌ | ❌ |
+| [Get authorization context](get-authorization-context-policy.md) | ✔️ | ✔️ |✔️ | ❌ | ❌ |
+| [Quota and rate limit](api-management-policies.md#rate-limiting-and-quotas) | ✔️ | ✔️<sup>2</sup> | ✔️<sup>3</sup> | ✔️<sup>4</sup> | ✔️ |
<sup>1</sup> Configured policies that aren't supported by the self-hosted gateway are skipped during policy execution.<br/> <sup>2</sup> The quota by key policy isn't available in the v2 tiers.<br/>
Managed and self-hosted gateways support all available [policies](api-management
For details about monitoring options, see [Observability in Azure API Management](observability.md).
-| Feature support | Classic | V2 | Consumption | Self-hosted |
-| | | -- | -- | - |
-| [API analytics](howto-use-analytics.md) | ✔️ | ✔️<sup>1</sup> | ❌ | ❌ |
-| [Application Insights](api-management-howto-app-insights.md) | ✔️ | ✔️ | ✔️ | ✔️ |
-| [Logging through Event Hubs](api-management-howto-log-event-hubs.md) | ✔️ | ✔️ | ✔️ | ✔️ |
-| [Metrics in Azure Monitor](api-management-howto-use-azure-monitor.md#view-metrics-of-your-apis) | ✔️ | ✔️ |✔️ | ✔️ |
-| [OpenTelemetry Collector](how-to-deploy-self-hosted-gateway-kubernetes-opentelemetry.md) | ❌ | ❌ | ❌ | ✔️ |
-| [Request logs in Azure Monitor and Log Analytics](api-management-howto-use-azure-monitor.md#resource-logs) | ✔️ | ✔️ | ❌ | ❌<sup>2</sup> |
-| [Local metrics and logs](how-to-configure-local-metrics-logs.md) | ❌ | ❌ | ❌ | ✔️ |
-| [Request tracing](api-management-howto-api-inspector.md) | ✔️ | ❌<sup>3</sup> | ✔️ | ✔️ |
+| Feature support | Classic | V2 | Consumption | Self-hosted | Workspace |
+| --- | --- | --- | --- | --- | --- |
+| [API analytics](howto-use-analytics.md) | ✔️ | ✔️<sup>1</sup> | ❌ | ❌ | ❌ |
+| [Application Insights](api-management-howto-app-insights.md) | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| [Logging through Event Hubs](api-management-howto-log-event-hubs.md) | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| [Metrics in Azure Monitor](api-management-howto-use-azure-monitor.md#view-metrics-of-your-apis) | ✔️ | ✔️ |✔️ | ✔️ | ❌ |
+| [OpenTelemetry Collector](how-to-deploy-self-hosted-gateway-kubernetes-opentelemetry.md) | ❌ | ❌ | ❌ | ✔️ | ❌ |
+| [Request logs in Azure Monitor and Log Analytics](api-management-howto-use-azure-monitor.md#resource-logs) | ✔️ | ✔️ | ❌ | ❌<sup>2</sup> | ❌ |
+| [Local metrics and logs](how-to-configure-local-metrics-logs.md) | ❌ | ❌ | ❌ | ✔️ | ❌ |
+| [Request tracing](api-management-howto-api-inspector.md) | ✔️ | ❌<sup>3</sup> | ✔️ | ✔️ | ❌ |
<sup>1</sup> The v2 tiers support Azure Monitor-based analytics.<br/> <sup>2</sup> The self-hosted gateway currently doesn't send resource logs (diagnostic logs) to Azure Monitor. Optionally [send metrics](how-to-configure-cloud-metrics-logs.md) to Azure Monitor, or [configure and persist logs locally](how-to-configure-local-metrics-logs.md) where the self-hosted gateway is deployed.<br/>
For details about monitoring options, see [Observability in Azure API Management
Managed and self-hosted gateways support all available [API authentication and authorization options](authentication-authorization-overview.md) with the following exceptions.
-| Feature support | Classic | V2 | Consumption | Self-hosted |
-| | | -- | -- | - |
-| [Credential manager](credentials-overview.md) | ✔️ | ✔️ | ✔️ | ❌ |
+| Feature support | Classic | V2 | Consumption | Self-hosted | Workspace |
+| -- | -- | -- | -- | -- | -- |
+| [Credential manager](credentials-overview.md) | ✔️ | ✔️ | ✔️ | ❌ | ❌ |
## Gateway throughput and scaling
For estimated maximum gateway throughput in the API Management service tiers, se
* In environments such as [Kubernetes](how-to-self-hosted-gateway-on-kubernetes-in-production.md), add multiple gateway replicas to handle expected usage. * Optionally [configure autoscaling](how-to-self-hosted-gateway-on-kubernetes-in-production.md#autoscaling) to meet traffic demands.
+### Workspace gateway
+
+Scale capacity by adding and removing scale [units](upgrade-and-scale.md) in the workspace gateway.
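For illustration only, scaling can also be scripted by sending a `PATCH` to the workspace gateway resource through Azure Resource Manager, using an assumed path of the form `https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.ApiManagement/gateways/{gateway-name}?api-version=2023-09-01-preview`. The path, API version, and `WorkspaceGatewayPremium` SKU name in this sketch are assumptions to verify against the current workspace gateway REST reference:

```JSON
{
  "sku": {
    "name": "WorkspaceGatewayPremium",
    "capacity": 2
  }
}
```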
+ ## Related content - Learn more about [API Management in a Hybrid and multicloud World](https://aka.ms/hybrid-and-multi-cloud-api-management)
api-management Api Management Howto App Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-app-insights.md
Previously updated : 08/25/2023 Last updated : 07/11/2024
You can easily integrate Azure Application Insights with Azure API Management. A
* Walk through Application Insights integration into API Management. * Learn strategies for reducing performance impact on your API Management service instance.
+> [!NOTE]
+> In an API Management [workspace](workspaces-overview.md), a workspace owner can independently integrate Application Insights and enable Application Insights logging for the workspace's APIs. The general guidance to integrate a workspace with Application Insights is similar to the guidance for an API Management instance; however, configuration is scoped to the workspace only. Currently, you must integrate Application Insights in a workspace by configuring an instrumentation key or connection string.
+ ## Prerequisites * You need an Azure API Management instance. [Create one](get-started-create-service-instance.md) first.
The following are high level steps for this scenario.
You can create a connection between Application Insights and your API Management using the Azure portal, the REST API, or related Azure tools. API Management configures a *logger* resource for the connection. > [!NOTE]
- > If your Application Insights resource is in a different tenant, then you must create the logger using the [REST API](/rest/api/apimanagement/current-ga/logger/create-or-update).
+ > If your Application Insights resource is in a different tenant, then you must create the logger using the [REST API](#create-a-connection-using-the-rest-api-bicep-or-arm-template) as shown in a later section of this article.
> [!IMPORTANT] > Currently, in the portal, API Management only supports connections to Application Insights using an Application Insights instrumentation key. To use an Application Insights connection string or an API Management managed identity, use the REST API, Bicep, or ARM template to create the logger. [Learn more](../azure-monitor/app/sdk-connection-string.md) about Application Insights connection strings.
The Application Insights connection string appears in the **Overview** section o
#### [REST API](#tab/rest)
-Use the API Management [REST API](/rest/api/apimanagement/current-preview/logger/create-or-update) with the following request body.
+Use the API Management [Logger - Create or Update](/rest/api/apimanagement/current-preview/logger/create-or-update) REST API with the following request body.
+
+If you're configuring the logger for a workspace, use the [Workspace Logger - Create or Update](/rest/api/apimanagement/workspace-logger/create-or-update?view=rest-apimanagement-2023-09-01-preview&preserve-view=true) REST API.
```JSON
{
  "properties": {
    "loggerType": "applicationInsights",
    "description": "Application Insights logger with connection string",
    "credentials": {
      "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://{region}.in.applicationinsights.azure.com/"
    }
  }
}
```
See the [prerequisites](#prerequisites) for using an API Management managed iden
#### [REST API](#tab/rest)
-Use the API Management [REST API](/rest/api/apimanagement/current-preview/logger/create-or-update) with the following request body.
+Use the API Management [Logger - Create or Update](/rest/api/apimanagement/current-preview/logger/create-or-update) REST API with the following request body.
```JSON
{
  "properties": {
    "loggerType": "applicationInsights",
    "description": "Application Insights logger with system-assigned managed identity",
    "credentials": {
      "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000",
      "identityClientId": "SystemAssigned"
    }
  }
}
```
See the [prerequisites](#prerequisites) for using an API Management managed iden
#### [REST API](#tab/rest)
-Use the API Management [REST API](/rest/api/apimanagement/current-preview/logger/create-or-update) with the following request body.
+Use the API Management [Logger - Create or Update](/rest/api/apimanagement/current-preview/logger/create-or-update) REST API with the following request body.
```JSON
{
  "properties": {
    "loggerType": "applicationInsights",
    "description": "Application Insights logger with user-assigned managed identity",
    "credentials": {
      "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000",
      "identityClientId": "{client-id-of-user-assigned-identity}"
    }
  }
}
```
To improve performance issues, skip:
Addressing the issue of telemetry data flow from API Management to Application Insights: + Investigate whether a linked Azure Monitor Private Link Scope (AMPLS) resource exists within the VNet where the API Management resource is connected. AMPLS resources have a global scope across subscriptions and are responsible for managing data query and ingestion for all Azure Monitor resources. It's possible that the AMPLS has been configured with a Private-Only access mode specifically for data ingestion. In such instances, include the Application Insights resource and its associated Log Analytics resource in the AMPLS. Once this addition is made, the API Management data will be successfully ingested into the Application Insights resource, resolving the telemetry data transmission issue.
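As a sketch of that remediation, each resource is added to the AMPLS as a scoped resource. Assuming the standard `privateLinkScopes/scopedResources` REST API, the request body names only the resource to link; the subscription, resource group, and component names below are placeholders:

```JSON
{
  "properties": {
    "linkedResourceId": "/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/microsoft.insights/components/{app-insights-name}"
  }
}
```

Repeat the call with the `linkedResourceId` of the associated Log Analytics workspace so that both resources can ingest data over the private link.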
-## Next steps
+## Related content
+ Learn more about [Azure Application Insights](../azure-monitor/app/app-insights-overview.md). + Consider [logging with Azure Event Hubs](api-management-howto-log-event-hubs.md).
api-management Api Management Howto Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-autoscale.md
The article walks through the process of configuring autoscale and suggests opti
> [!NOTE] > * In service tiers that support multiple scale units, you can also [manually scale](upgrade-and-scale.md) your API Management instance. > * An API Management service in the **Consumption** tier scales automatically based on the traffic - without any additional configuration needed.
+> * Currently, autoscale is not supported for the [workspace gateway](workspaces-overview.md#workspace-gateway) in API Management workspaces.
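For orientation, an autoscale configuration is a `Microsoft.Insights/autoscaleSettings` resource that scales the instance on its built-in `Capacity` metric. The following is an illustrative sketch only; the thresholds, time windows, and unit bounds are placeholder assumptions, and this article configures the equivalent in the portal:

```JSON
{
  "location": "eastus",
  "properties": {
    "enabled": true,
    "targetResourceUri": "/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.ApiManagement/service/{apim-name}",
    "profiles": [
      {
        "name": "default",
        "capacity": { "minimum": "1", "maximum": "4", "default": "1" },
        "rules": [
          {
            "metricTrigger": {
              "metricName": "Capacity",
              "metricResourceUri": "/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.ApiManagement/service/{apim-name}",
              "timeGrain": "PT1M",
              "statistic": "Average",
              "timeWindow": "PT10M",
              "timeAggregation": "Average",
              "operator": "GreaterThan",
              "threshold": 70
            },
            "scaleAction": {
              "direction": "Increase",
              "type": "ChangeCount",
              "value": "1",
              "cooldown": "PT30M"
            }
          }
        ]
      }
    ]
  }
}
```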
## Prerequisites
api-management Api Management Howto Ca Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-ca-certificates.md
The article shows how to manage CA certificates of an Azure API Management servi
CA certificates uploaded to API Management can only be used for certificate validation by the managed API Management gateway. If you use the [self-hosted gateway](self-hosted-gateway-overview.md), learn how to [create a custom CA for self-hosted gateway](#create-custom-ca-for-self-hosted-gateway), later in this article. + [!INCLUDE [updated-for-az](~/reusable-content/ce-skilling/azure/includes/updated-for-az.md)]
api-management Api Management Howto Cache External https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-cache-external.md
Using an external cache allows you to overcome a few limitations of the built-in
For more detailed information about caching, see [API Management caching policies](api-management-policies.md#caching) and [Custom caching in Azure API Management](api-management-sample-cache-by-key.md). + ![Bring your own cache to APIM](media/api-management-howto-cache-external/overview.png) What you'll learn:
api-management Api Management Howto Log Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-log-event-hubs.md
Previously updated : 03/31/2023 Last updated : 07/12/2024
This article describes how to log API Management events using Azure Event Hubs.
Azure Event Hubs is a highly scalable data ingress service that can ingest millions of events per second so that you can process and analyze the massive amounts of data produced by your connected devices and applications. Event Hubs acts as the "front door" for an event pipeline, and once data is collected into an event hub, it can be transformed and stored using any real-time analytics provider or batching/storage adapters. Event Hubs decouples the production of a stream of events from the consumption of those events, so that event consumers can access the events on their own schedule. + ## Prerequisites * An API Management service instance. If you don't have one, see [Create an API Management service instance](get-started-create-service-instance.md).
api-management Api Management Howto Manage Protocols Ciphers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-manage-protocols-ciphers.md
By default, API Management enables TLS 1.2 for client and backend connectivity a
> [!NOTE] > * If you're using the self-hosted gateway, see [self-hosted gateway security](self-hosted-gateway-overview.md#security) to manage TLS protocols and cipher suites. > * The following tiers don't support changes to the default cipher configuration: **Consumption**, **Basic v2**, **Standard v2**.
+> * In [workspaces](workspaces-overview.md), the managed gateway doesn't support changes to the default protocol and cipher configuration.
## Prerequisites
api-management Api Management Howto Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-properties.md
Previously updated : 01/13/2023 Last updated : 07/11/2024
Using key vault secrets is recommended because it helps improve API Management s
### Prerequisites for key vault integration + - If you don't already have a key vault, create one. For steps to create a key vault, see [Quickstart: Create a key vault using the Azure portal](../key-vault/general/quick-create-portal.md). To create or import a secret to the key vault, see [Quickstart: Set and retrieve a secret from Azure Key Vault using the Azure portal](../key-vault/secrets/quick-create-portal.md).
api-management Api Management Howto Use Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-use-azure-monitor.md
With Azure Monitor, you can visualize, query, route, archive, and take actions on the metrics or logs coming from your Azure API Management service. + In this tutorial, you learn how to: > [!div class="checklist"]
api-management Api Management Howto Use Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-use-managed-service-identity.md
You can grant two types of identities to an API Management instance:
> [!NOTE] > Managed identities are specific to the Microsoft Entra tenant where your Azure subscription is hosted. They don't get updated if a subscription is moved to a different directory. If a subscription is moved, you'll need to recreate and configure the identities. + ## Create a system-assigned managed identity ### Azure portal
api-management Api Management In Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-in-workspace.md
- Title: Use a workspace in Azure API Management
-description: Members of a workspace in Azure API Management can collaborate to manage and productize their own APIs.
---- Previously updated : 03/10/2023--
-# Manage APIs and other resources in your API Management workspace
--
-This article is an introduction to managing APIs, products, subscriptions, and other API Management resources in a *workspace*. A workspace is a place where a development team can own, manage, update, and productize their own APIs, while a central API platform team manages the API Management infrastructure. Learn about the [workspace features](workspaces-overview.md)
-
-> [!NOTE]
-> * Workspaces are a preview feature of API Management and subject to certain [limitations](workspaces-overview.md#preview-limitations).
-> * Workspaces are supported in API Management REST API version 2022-09-01-preview or later.
-> * For pricing considerations, see [API Management pricing](https://azure.microsoft.com/pricing/details/api-management/).
-
-## Prerequisites
-
-* An API Management instance. If needed, ask an administrator to [create one](get-started-create-service-instance.md).
-* A workspace. If needed, ask an administrator of your API Management instance to [create one](how-to-create-workspace.md).
-* Permissions to collaborate in the workspace. If needed, ask an administrator of your API Management instance to assign you appropriate [roles](api-management-role-based-access-control.md#built-in-workspace-roles) in the service and the workspace.
-
-## Go to the workspace - portal
-
-1. Sign in to the [Azure portal](https://portal.azure.com), and navigate to your API Management instance.
-
-1. In the left menu, select **Workspaces** (preview), and select the name of your workspace.
-
- :::image type="content" source="media/api-management-in-workspace/workspace-in-portal.png" alt-text="Screenshot of workspaces in API Management instance in the portal." lightbox="media/api-management-in-workspace/workspace-in-portal-expanded.png":::
-
-1. The workspace appears. The available resources and settings appear in the menu on the left.
-
- :::image type="content" source="media/api-management-in-workspace/workspace-menu.png" alt-text="Screenshot of API Management workspace menu in the portal." lightbox="media/api-management-in-workspace/workspace-menu-expanded.png":::
--
-## Get started with your workspace
-
-Depending on your role in the workspace, you might have permissions to create APIs, products, subscriptions, and other resources, or you might have read-only access to some or all of them.
-
-To get started managing, protecting, and publishing APIs in your workspaces, see the following guidance.
---
-|Resource |Guide |
-|||
-|APIs | [Tutorial: Import and publish your first API](import-and-publish.md) |
-|Products | [Tutorial: Create and publish a product](api-management-howto-add-products.md) |
-|Subscriptions | [Subscriptions in Azure API Management](api-management-subscriptions.md)<br/><br/>[Create subscriptions in API Management](api-management-howto-create-subscriptions.md) |
-|Policies | [Tutorial: Transform and protect your API](transform-api.md)<br/><br/>[Policies in Azure API Management](api-management-howto-policies.md)<br/><br/>[Set or edit API Management policies](set-edit-policies.md) |
-|Named values | [Manage secrets using named values](api-management-howto-properties.md) |
-|Policy fragments | [Reuse policy configurations in your API Management policy definitions](policy-fragments.md) |
-| Schemas | [Validate content](validate-content-policy.md) |
-| Groups | [Create and use groups to manage developer accounts](api-management-howto-create-groups.md)
-| Notifications | [How to configure notifications and notification templates](api-management-howto-configure-notifications.md)
---
-## Next steps
-
-* Learn more about [workspaces](workspaces-overview.md)
-
api-management Api Management Policy Expressions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-policy-expressions.md
The `context` variable is implicitly available in every policy [expression](api-
|Context Variable|Allowed methods, properties, and parameter values|
|-|-|
-|`context`|[`Api`](#ref-context-api): [`IApi`](#ref-iapi)<br /><br /> [`Deployment`](#ref-context-deployment)<br /><br /> Elapsed: `TimeSpan` - time interval between the value of `Timestamp` and current time<br /><br /> [`GraphQL`](#ref-context-graphql)<br /><br />[`LastError`](#ref-context-lasterror)<br /><br /> [`Operation`](#ref-context-operation)<br /><br /> [`Request`](#ref-context-request)<br /><br /> `RequestId`: `Guid` - unique request identifier<br /><br /> [`Response`](#ref-context-response)<br /><br /> [`Subscription`](#ref-context-subscription)<br /><br /> `Timestamp`: `DateTime` - point in time when request was received<br /><br /> `Tracing`: `bool` - indicates if tracing is on or off <br /><br /> [User](#ref-context-user)<br /><br /> [`Variables`](#ref-context-variables): `IReadOnlyDictionary<string, object>`<br /><br /> `void Trace(message: string)`|
-|<a id="ref-context-api"></a>`context.Api`|`Id`: `string`<br /><br /> `IsCurrentRevision`: `bool`<br /><br /> `Name`: `string`<br /><br /> `Path`: `string`<br /><br /> `Revision`: `string`<br /><br /> `ServiceUrl`: [`IUrl`](#ref-iurl)<br /><br /> `Version`: `string` <br /><br /> `Workspace`: [`IWorkspace`](#ref-iworkspace) |
+|`context`|[`Api`](#ref-context-api): [`IApi`](#ref-iapi)<br /><br /> [`Deployment`](#ref-context-deployment)<br /><br /> Elapsed: `TimeSpan` - time interval between the value of `Timestamp` and current time<br /><br /> [`GraphQL`](#ref-context-graphql)<br /><br />[`LastError`](#ref-context-lasterror)<br /><br /> [`Operation`](#ref-context-operation)<br /><br /> [`Request`](#ref-context-request)<br /><br /> `RequestId`: `Guid` - unique request identifier<br /><br /> [`Response`](#ref-context-response)<br /><br /> [`Subscription`](#ref-context-subscription)<br /><br /> `Timestamp`: `DateTime` - point in time when request was received<br /><br /> `Tracing`: `bool` - indicates if tracing is on or off <br /><br /> [User](#ref-context-user)<br /><br /> [`Variables`](#ref-context-variables): `IReadOnlyDictionary<string, object>`<br /><br /> `void Trace(message: string)` <br /><br /> [`Workspace`](#ref-context-workspace) |
+|<a id="ref-context-api"></a>`context.Api`|`Id`: `string`<br /><br /> `IsCurrentRevision`: `bool`<br /><br /> `Name`: `string`<br /><br /> `Path`: `string`<br /><br /> `Revision`: `string`<br /><br /> `ServiceUrl`: [`IUrl`](#ref-iurl)<br /><br /> `Version`: `string` |
|<a id="ref-context-deployment"></a>`context.Deployment`|[`Gateway`](#ref-context-gateway)<br /><br /> `GatewayId`: `string` (returns 'managed' for managed gateways)<br /><br /> `Region`: `string`<br /><br /> `ServiceId`: `string`<br /><br /> `ServiceName`: `string`<br /><br /> `Certificates`: `IReadOnlyDictionary<string, X509Certificate2>`| |<a id="ref-context-gateway"></a>`context.Deployment.Gateway`|`Id`: `string` (returns 'managed' for managed gateways)<br /><br /> `InstanceId`: `string` (returns 'managed' for managed gateways)<br /><br /> `IsManaged`: `bool`| |<a id="ref-context-graphql"></a>`context.GraphQL`|`GraphQLArguments`: `IGraphQLDataObject`<br /><br /> `Parent`: `IGraphQLDataObject`<br/><br/>[Examples](configure-graphql-resolver.md#graphql-context)| |<a id="ref-context-lasterror"></a>`context.LastError`|`Source`: `string`<br /><br /> `Reason`: `string`<br /><br /> `Message`: `string`<br /><br /> `Scope`: `string`<br /><br /> `Section`: `string`<br /><br /> `Path`: `string`<br /><br /> `PolicyId`: `string`<br /><br /> For more information about `context.LastError`, see [Error handling](api-management-error-handling-policies.md).| |<a id="ref-context-operation"></a>`context.Operation`|`Id`: `string`<br /><br /> `Method`: `string`<br /><br /> `Name`: `string`<br /><br /> `UrlTemplate`: `string`|
-|<a id="ref-context-product"></a>`context.Product`|`ApprovalRequired`: `bool`<br /><br /> `Groups`: `IEnumerable<`[`IGroup`](#ref-igroup)`>`<br /><br /> `Id`: `string`<br /><br /> `Name`: `string`<br /><br /> `State`: `enum ProductState {NotPublished, Published}`<br /><br /> `SubscriptionsLimit`: `int?`<br /><br /> `SubscriptionRequired`: `bool`<br /><br /> `Workspace`: [`IWorkspace`](#ref-iworkspace)|
+|<a id="ref-context-product"></a>`context.Product`|`ApprovalRequired`: `bool`<br /><br /> `Groups`: `IEnumerable<`[`IGroup`](#ref-igroup)`>`<br /><br /> `Id`: `string`<br /><br /> `Name`: `string`<br /><br /> `State`: `enum ProductState {NotPublished, Published}`<br /><br /> `SubscriptionsLimit`: `int?`<br /><br /> `SubscriptionRequired`: `bool`|
|<a id="ref-context-request"></a>`context.Request`|`Body`: [`IMessageBody`](#ref-imessagebody) or `null` if request doesn't have a body.<br /><br /> `Certificate`: `System.Security.Cryptography.X509Certificates.X509Certificate2`<br /><br /> [`Headers`](#ref-context-request-headers): `IReadOnlyDictionary<string, string[]>`<br /><br /> `IpAddress`: `string`<br /><br /> `MatchedParameters`: `IReadOnlyDictionary<string, string>`<br /><br /> `Method`: `string`<br /><br /> `OriginalUrl`: [`IUrl`](#ref-iurl)<br /><br /> `Url`: [`IUrl`](#ref-iurl)<br /><br /> `PrivateEndpointConnection`: [`IPrivateEndpointConnection`](#ref-iprivateendpointconnection) or `null` if request doesn't come from a private endpoint connection.| |<a id="ref-context-request-headers"></a>`string context.Request.Headers.GetValueOrDefault(headerName: string, defaultValue: string)`|`headerName`: `string`<br /><br /> `defaultValue`: `string`<br /><br /> Returns comma-separated request header values or `defaultValue` if the header isn't found.| |<a id="ref-context-response"></a>`context.Response`|`Body`: [`IMessageBody`](#ref-imessagebody)<br /><br /> [`Headers`](#ref-context-response-headers): `IReadOnlyDictionary<string, string[]>`<br /><br /> `StatusCode`: `int`<br /><br /> `StatusReason`: `string`| |<a id="ref-context-response-headers"></a>`string context.Response.Headers.GetValueOrDefault(headerName: string, defaultValue: string)`|`headerName`: `string`<br /><br /> `defaultValue`: `string`<br /><br /> Returns comma-separated response header values or `defaultValue` if the header isn't found.| |<a id="ref-context-subscription"></a>`context.Subscription`|`CreatedDate`: `DateTime`<br /><br /> `EndDate`: `DateTime?`<br /><br /> `Id`: `string`<br /><br /> `Key`: `string`<br /><br /> `Name`: `string`<br /><br /> `PrimaryKey`: `string`<br /><br /> `SecondaryKey`: `string`<br /><br /> `StartDate`: `DateTime?`| |<a id="ref-context-user"></a>`context.User`|`Email`: `string`<br /><br /> `FirstName`: `string`<br /><br /> `Groups`: `IEnumerable<`[`IGroup`](#ref-igroup)`>`<br /><br /> `Id`: `string`<br /><br /> `Identities`: `IEnumerable<`[`IUserIdentity`](#ref-iuseridentity)`>`<br /><br /> `LastName`: `string`<br /><br /> `Note`: `string`<br /><br /> `RegistrationDate`: `DateTime`|
+|<a id="ref-context-workspace"></a>`context.Workspace`| `Id`: `string`<br /><br /> `Name`: `string`|
|<a id="ref-iapi"></a>`IApi`|`Id`: `string`<br /><br /> `Name`: `string`<br /><br /> `Path`: `string`<br /><br /> `Protocols`: `IEnumerable<string>`<br /><br /> `ServiceUrl`: [`IUrl`](#ref-iurl)<br /><br /> `SubscriptionKeyParameterNames`: [`ISubscriptionKeyParameterNames`](#ref-isubscriptionkeyparameternames)| |<a id="ref-igraphqldataobject"></a>`IGraphQLDataObject`|TBD<br /><br />| |<a id="ref-igroup"></a>`IGroup`|`Id`: `string`<br /><br /> `Name`: `string`|
The `context` variable is implicitly available in every policy [expression](api-
|<a id="ref-isubscriptionkeyparameternames"></a>`ISubscriptionKeyParameterNames`|`Header`: `string`<br /><br /> `Query`: `string`| |<a id="ref-iurl-query"></a>`string IUrl.Query.GetValueOrDefault(queryParameterName: string, defaultValue: string)`|`queryParameterName`: `string`<br /><br /> `defaultValue`: `string`<br /><br /> Returns comma-separated query parameter values or `defaultValue` if the parameter isn't found.| |<a id="ref-iuseridentity"></a>`IUserIdentity`|`Id`: `string`<br /><br /> `Provider`: `string`|
-|<a id="ref-iworkspace"></a>`IWorkspace`|`Id`: `string`<br /><br /> `Name`: `string`|
|<a id="ref-context-variables"></a>`T context.Variables.GetValueOrDefault<T>(variableName: string, defaultValue: T)`|`variableName`: `string`<br /><br /> `defaultValue`: `T`<br /><br /> Returns variable value cast to type `T` or `defaultValue` if the variable isn't found.<br /><br /> This method throws an exception if the specified type doesn't match the actual type of the returned variable.| |`BasicAuthCredentials AsBasic(input: this string)`|`input`: `string`<br /><br /> If the input parameter contains a valid HTTP Basic Authentication authorization request header value, the method returns an object of type `BasicAuthCredentials`; otherwise the method returns null.| |`bool TryParseBasic(input: this string, result: out BasicAuthCredentials)`|`input`: `string`<br /><br /> `result`: `out BasicAuthCredentials`<br /><br /> If the input parameter contains a valid HTTP Basic Authentication authorization value in the request header, the method returns `true` and the result parameter contains a value of type `BasicAuthCredentials`; otherwise the method returns `false`.|
api-management Api Management Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-role-based-access-control.md
Title: How to use Role-Based Access Control in Azure API Management | Microsoft Docs
+ Title: How to use role-based access control in Azure API Management | Microsoft Docs
description: Learn how to use the built-in roles and create custom roles in Azure API Management Previously updated : 02/15/2023 Last updated : 07/10/2024
The following table provides brief descriptions of the built-in roles. You can a
API Management provides the following built-in roles for collaborators in [workspaces](workspaces-overview.md) in an API Management instance.
-A workspace collaborator must be assigned both a workspace-scoped role and a service-scoped role.
-
+A workspace collaborator must be assigned both a workspace-scoped role and a service-scoped role.
|Role |Scope |Description |
|---|---|---|
A workspace collaborator must be assigned both a workspace-scoped role and a ser
| API Management Service Workspace API Developer | service | Has read access to tags and products and write access to allow: <br/><br/> ▪️ Assigning APIs to products<br/> ▪️ Assigning tags to products and APIs<br/><br/> This role should be assigned on the service scope. | | API Management Service Workspace API Product Manager | service | Has the same access as API Management Service Workspace API Developer as well as read access to users and write access to allow assigning users to groups. This role should be assigned on the service scope. |
+Depending on how workspace collaborators use or manage the workspace, we recommend also assigning one of the following Azure-provided RBAC roles at the scope of the [workspace gateway](workspaces-overview.md#workspace-gateway): **Reader**, **Contributor**, or **Owner**.
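As a sketch, granting a collaborator **Reader** on the gateway is a standard role assignment scoped to the gateway resource. Assuming the gateway's resource ID as the scope and the Role Assignments REST API (`PUT {gateway-resource-id}/providers/Microsoft.Authorization/roleAssignments/{assignment-guid}?api-version=2022-04-01`), the body might look like this; the role definition GUID shown is the built-in **Reader** role, and the principal ID is a placeholder:

```JSON
{
  "properties": {
    "roleDefinitionId": "/subscriptions/{subscription-id}/providers/Microsoft.Authorization/roleDefinitions/acdd72a7-3385-48ef-bd42-f606fba81ae7",
    "principalId": "{collaborator-object-id}",
    "principalType": "User"
  }
}
```

Substitute the **Contributor** or **Owner** role definition ID as appropriate for the collaborator's responsibilities.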
+
+## Built-in developer portal roles
+
+|Role |Scope |Description |
+|---|---|---|
+|API Management Developer Portal Content Editor | service | Can customize the developer portal, edit its content, and publish it using Azure Resource Manager APIs. |
## Custom roles
api-management Authentication Basic Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authentication-basic-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
Use the `authentication-basic` policy to authenticate with a backend service usi
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
### Usage notes
api-management Authentication Certificate Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authentication-certificate-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
## Examples
api-management Authentication Managed Identity Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authentication-managed-identity-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
Both system-assigned identity and any of the multiple user-assigned identities c
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted ## Examples
api-management Azure Openai Api From Specification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/azure-openai-api-from-specification.md
This article shows two options to import an [Azure OpenAI Service](/azure/ai-ser
## Option 1. Import API from Azure OpenAI Service
-You can import an Azure OpenAI API directly to API Management from the Azure OpenAI Service. When you import the API, API Management automatically configures:
+You can import an Azure OpenAI API directly to API Management from the Azure OpenAI Service.
++
+When you import the API, API Management automatically configures:
* Operations for each of the Azure OpenAI [REST API endpoints](/azure/ai-services/openai/reference). * A system-assigned identity with the necessary permissions to access the Azure OpenAI resource.
api-management Azure Openai Emit Token Metric Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/azure-openai-emit-token-metric-policy.md
The `azure-openai-emit-token-metric` policy sends metrics to Application Insight
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
### Usage notes
api-management Azure Openai Enable Semantic Caching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/azure-openai-enable-semantic-caching.md
- build-2024 Previously updated : 06/25/2024 Last updated : 07/23/2024 # Enable semantic caching for Azure OpenAI APIs in Azure API Management Enable semantic caching of responses to Azure OpenAI API requests to reduce bandwidth and processing requirements imposed on the backend APIs and lower latency perceived by API consumers. With semantic caching, you can return cached responses for identical prompts and also for prompts that are similar in meaning, even if the text isn't the same. For background, see [Tutorial: Use Azure Cache for Redis as a semantic cache](../azure-cache-for-redis/cache-tutorial-semantic-cache.md).
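Semantic caching depends on an external Redis-compatible cache being registered with the instance. As an illustrative sketch only (the connection string is a placeholder, and `useFromLocation` assumes the default gateway location), the Cache - Create or Update REST API accepts a body like:

```JSON
{
  "properties": {
    "connectionString": "{redis-name}.redis.cache.windows.net:6380,password={access-key},ssl=True,abortConnect=False",
    "useFromLocation": "default",
    "description": "Redis Enterprise cache for semantic caching"
  }
}
```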
api-management Azure Openai Semantic Cache Lookup Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/azure-openai-semantic-cache-lookup-policy.md
Use the `azure-openai-semantic-cache-lookup` policy to perform cache lookup of r
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) v2 ### Usage notes
api-management Azure Openai Semantic Cache Store Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/azure-openai-semantic-cache-store-policy.md
The `azure-openai-semantic-cache-store` policy caches responses to Azure OpenAI
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) outbound-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) v2 ### Usage notes
api-management Azure Openai Token Limit Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/azure-openai-token-limit-policy.md
By relying on token usage metrics returned from the OpenAI endpoint, the policy
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, self-hosted, workspace
### Usage notes
api-management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/overview.md
Previously updated : 05/17/2024 Last updated : 07/15/2024
The following table lists all the upcoming breaking changes and feature retireme
| [API version retirements][api2023] | June 1, 2024 | | [Deprecated (legacy) portal retirement][devportal2023] | October 31, 2023 | | [Self-hosted gateway v0/v1 retirement][shgwv0v1] | October 1, 2023 |
-| [Workspaces breaking changes][workspaces2024] | June 14, 2024 |
+| [Workspaces preview breaking changes][workspaces2024] | June 14, 2024 |
| [stv1 platform retirement][stv12024] | August 31, 2024 |
+| [Workspaces preview breaking changes, part 2][workspaces2025march] | March 31, 2025 |
| [Git repository retirement][git2025] | March 15, 2025 | | [Direct management API retirement][mgmtapi2025] | March 15, 2025 | | [ADAL-based Microsoft Entra ID or Azure AD B2C identity provider retirement][msal2025] | September 30, 2025 |
The following table lists all the upcoming breaking changes and feature retireme
[analytics2027]: ./analytics-dashboard-retirement-march-2027.md [mgmtapi2025]: ./direct-management-api-retirement-march-2025.md [workspaces2024]: ./workspaces-breaking-changes-june-2024.md
+[workspaces2025march]: ./workspaces-breaking-changes-march-2025.md
api-management Workspaces Breaking Changes June 2024 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/workspaces-breaking-changes-june-2024.md
Title: Azure API Management workspaces - breaking changes (June 2024) | Microsoft Docs
+ Title: Azure API Management workspaces preview - breaking changes (June 2024) | Microsoft Docs
description: Azure API Management is updating the workspaces (preview) with breaking changes. If your service uses workspaces, you may need to update workspace configurations.
[!INCLUDE [api-management-availability-premium-dev-standard](../../../includes/api-management-availability-premium-dev-standard.md)]
-After 14 June 2024, as part of our development of [workspaces](../workspaces-overview.md) (preview) in Azure API Management, we're introducing several breaking changes.
+> [!IMPORTANT]
+> If you created workspaces after the generally available release of workspaces in July 2024, your workspaces shouldn't be affected by these changes.
+>
+
+After 14 June 2024, as part of our development of [workspaces](../workspaces-overview.md) in Azure API Management, we're introducing several breaking changes.
After 14 June 2024, your workspaces and APIs managed in them may stop working if they still rely on the capabilities set to change. APIs and resources managed outside workspaces aren't affected by this change.
If you have questions, get answers from community experts in [Microsoft Q&A](htt
## More information * [Workspaces overview](../workspaces-overview.md)
+* [Workspaces breaking changes, part 2 (March 2025)](workspaces-breaking-changes-march-2025.md)
++ ## Related content
api-management Workspaces Breaking Changes March 2025 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/workspaces-breaking-changes-march-2025.md
+
+ Title: Azure API Management workspaces preview - breaking changes (March 2025)
+description: Azure API Management is removing support for preview workspaces. If your service uses preview workspaces, migrate your workspaces to the generally available version.
++++ Last updated : 07/10/2024+++
+# Workspaces breaking changes, part 2 (March 2025)
++
+> [!IMPORTANT]
+> These breaking changes apply only to *preview* workspaces in Azure API Management. If you created workspaces after the generally available release in July 2024 and use workspaces with workspace gateways, your workspaces shouldn't be affected by these changes.
+>
+
+Azure API Management [workspaces](../workspaces-overview.md) are now generally available, and we introduced several feature updates with that release. As part of our continued development of workspaces, we're removing support for preview workspaces (created before July 2024). If you created preview workspaces in Azure API Management and want to continue using them, you need to migrate your workspaces to the generally available version.
+
+After 31 March 2025, your preview workspaces and APIs managed in them may stop working if you haven't migrated to the latest workspace capabilities. APIs and resources managed outside workspaces aren't affected by this change.
+
+## Is my service affected by these changes?
+
+Your service may be affected by these changes if you created preview workspaces in your API Management instance before the generally available release of workspaces in July 2024. Workspaces created after the generally available release that use workspace gateways for API runtime aren't affected by the breaking changes.
+
+## Breaking changes
+
+The following are breaking changes that require you to take action to migrate your preview workspaces to the generally available version:
+
+* **Workspace API gateway is required** - Each workspace must be associated with a workspace API gateway that isolates the workspace's runtime traffic. In preview, workspaces shared a gateway with the service.
+* **Service-level managed identities aren't supported** - To improve the security of workspaces, system-assigned and user-assigned managed identities enabled at the service level can't be used in workspaces. Currently, related API Management features that depend on managed identities, such as storing named values and certificates in Azure Key Vault, and using the `authentication-managed-identity` policy, aren't supported in workspaces.
+
+> [!NOTE]
+> These breaking changes are in addition to the [June 2024 breaking changes](workspaces-breaking-changes-june-2024.md) for preview workspaces that were announced previously.
+
+## What is the deadline for the change?
+
+The breaking changes will be enforced in preview workspaces after 31 March 2025. We strongly recommend that you make all required changes to the configuration of your preview workspaces before that date.
+
+## What do I need to do?
+
+If your workspaces are affected by these changes, you need to migrate your workspaces to align with the generally available capabilities. The following sections provide guidance on how to migrate your workspaces.
+
+### Use Premium tier for your API Management instance
+
+Ensure that your API Management instance is running in the **Premium** tier to continue using workspaces. As announced [previously](workspaces-breaking-changes-june-2024.md), if your instance is in the **Standard** or **Developer** tier, you need to upgrade to the **Premium** tier.
+
+### Confirm the region for your instance
+
+Adding a workspace gateway to a workspace requires that the gateway is in the same region as your instance. Currently, workspace gateways are supported in a [subset of regions](../workspaces-overview.md#workspace-gateway) in which API Management is available. The list of regions that support workspace gateways will expand over time.
+
+To determine if a preview workspace is in a supported region:
+
+1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance.
+1. In the left menu, under **APIs**, select **Workspaces**, and select a workspace.
+1. If your workspace is in a region that doesn't support workspace gateways, you'll see a message in the portal similar to "Workspaces are currently unavailable in the region of your API Management service".
+ * If you see this message, you can [move your API Management instance](../api-management-howto-migrate.md) to a supported region.
+ * If you don't see this message, your workspace is in a supported region and you can proceed to add a workspace gateway.
+
+### Add a workspace gateway to your workspace
+
+The following are abbreviated steps to add a workspace gateway to a workspace. For gateway networking options, prerequisites, and detailed instructions, see [Create and manage a workspace](../how-to-create-workspace.md).
+
+> [!NOTE]
+> * The workspace gateway incurs additional charges. For more information, see [API Management pricing](https://aka.ms/apimpricing).
+> * API Management currently supports a dedicated gateway per workspace only. If this is impacting your migration plans, see the workspaces roadmap in the [workspaces GA announcement](https://aka.ms/apim/workspaces/ga-announcement).
+
+1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance.
+1. In the left menu, under **APIs**, select **Workspaces**.
+1. Select a workspace.
+1. In the left menu, under **Deployment + infrastructure**, select **Gateways** > **+ Add**.
+1. Complete the wizard to create a gateway. Currently, provisioning of the gateway can take anywhere from several minutes to 3 hours, and sometimes longer.
+1. After your gateway is provisioned, go to the gateway's **Overview** page. Note the value of **Runtime hostname**. Use this value to update your client apps that call your workspace's APIs.
+1. Repeat the preceding steps for your remaining workspaces.
+
+### Update client apps to use the new gateway hostname
+
+After adding a gateway to your workspace, you need to update your client apps that call the workspace's APIs to use the new gateway hostname instead of the gateway hostname of your API Management instance.
+
+> [!NOTE]
+> To help you migrate your workspaces, APIs in workspaces can still be accessed at runtime through October 2024 using the gateway hostname of your API Management instance, even if a workspace gateway is associated with a workspace. We strongly recommend that you complete migration before this date. If your workspace gateways are configured with private inbound access and private outbound access, make sure that connectivity to your API Management instance's built-in gateway is also secured.
+
+### Update dependencies on service-level managed identities
+
+If you're using service-level managed identities in the configuration of workspace entities (for example, named values or certificates), you need to update the configurations. Recommended steps vary depending on the entity. Example: Update named values to use secret values instead of secrets stored in Azure Key Vault.
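Concretely, re-creating a Key Vault-backed named value as an inline secret can be sketched with the Named Value - Create or Update REST API; the display name and value below are placeholders:

```JSON
{
  "properties": {
    "displayName": "backend-api-key",
    "secret": true,
    "value": "{secret-value}"
  }
}
```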
+
+## Help and support
+
+If you have questions, get answers from community experts in [Microsoft Q&A](https://aka.ms/apim/azureqa/change/captcha-2022). If you have a support plan and you need technical help, create a [support request](https://portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview).
+
+## More information
+
+* [Workspaces overview](../workspaces-overview.md)
+* [Workspaces breaking changes (June 2024)](workspaces-breaking-changes-june-2024.md)
+
+## Related content
+
+See all [upcoming breaking changes and feature retirements](overview.md).
api-management Cache Lookup Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cache-lookup-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
Use the `cache-lookup` policy to perform cache lookup and return a valid cached
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
### Usage notes
api-management Cache Lookup Value Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cache-lookup-value-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
Use the `cache-lookup-value` policy to perform cache lookup by key and return a
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
## Example
api-management Cache Remove Value Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cache-remove-value-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
The `cache-remove-value` deletes a cached item identified by its key. The key ca
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
## Example
api-management Cache Store Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cache-store-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
The `cache-store` policy caches responses according to the specified cache setti
- [**Policy sections:**](./api-management-howto-policies.md#sections) outbound - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
### Usage notes
api-management Cache Store Value Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cache-store-value-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
The `cache-store-value` performs cache storage by key. The key can have an arbit
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
## Example
api-management Check Header Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/check-header-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
Use the `check-header` policy to enforce that a request has a specified HTTP he
- **[Policy sections:](./api-management-howto-policies.md#sections)** inbound - **[Policy scopes:](./api-management-howto-policies.md#scopes)** global, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
## Example
api-management Choose Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/choose-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
The `choose` policy must contain at least one `<when/>` element. The `<otherwise
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
## Examples
api-management Configure Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/configure-custom-domain.md
Previously updated : 01/13/2023 Last updated : 06/24/2024
When you create an Azure API Management service instance in the Azure cloud, Azu
>* The Gateway's default domain name >* Any of the Gateway's configured custom domain names
+> [!NOTE]
+> Currently, custom domain names aren't supported in a [workspace gateway](workspaces-overview.md#workspace-gateway).
+ ## Prerequisites - An API Management instance. For more information, see [Create an Azure API Management instance](get-started-create-service-instance.md).
api-management Configure Graphql Resolver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/configure-graphql-resolver.md
Configure a resolver to retrieve or set data for a GraphQL field in an object type specified in a GraphQL schema. The schema must be imported to API Management as a GraphQL API. + Currently, API Management supports resolvers that can access the following data sources: * [HTTP-based data source](http-data-source-policy.md) (REST or SOAP API)
api-management Cors Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cors-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
The `cors` policy adds cross-origin resource sharing (CORS) support to an operat
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
### Usage notes * You may configure the `cors` policy at more than one scope (for example, at the product scope and the global scope). Ensure that the `base` element is configured at the operation, API, and product scopes to inherit needed policies at the parent scopes.
api-management Credentials Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/credentials-overview.md
To help you manage access to backend APIs, your API Management instance includes
> * Currently, you can use credential manager to configure and manage connections (formerly called *authorizations*) for backend OAuth 2.0 APIs. > * No breaking changes are introduced with credential manager. OAuth 2.0 credential providers and connections use the existing API Management [authorization](/rest/api/apimanagement/authorization) APIs and resource provider. + ## Managed connections for OAuth 2.0 APIs Using credential manager, you can greatly simplify the process of authenticating and authorizing users, groups, and service principals across one or more backend or SaaS services that use OAuth 2.0. Using API Management's credential manager, easily configure OAuth 2.0, consent, acquire tokens, cache tokens in a credential store, and refresh tokens without writing a single line of code. Use access policies to delegate authentication to your API Management instance, service principals, users, or groups. For background about the OAuth 2.0, see [Microsoft identity platform and OAuth 2.0 authorization code flow](/entra/identity-platform/v2-oauth2-auth-code-flow).
api-management Cross Domain Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cross-domain-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
api-management Developer Portal Wordpress Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-wordpress-plugin.md
In this step, add the Microsoft Entra app registration as an identity provider f
> Do not use the version 2.0 endpoint for the issuer URL (URL ending in `/v2.0`). 1. In **Allowed token audiences**, enter the **Application ID URI** from the app registration. Example: `api://<app-id>`. 1. Under **Additional checks**, select values appropriate for your environment, or use the default values.
-1. Accept the default values for the remaining settings and select **Add**.
+1. Configure the desired values for the remaining settings, or use the default values. Select **Add**.
+ > [!NOTE]
+ > If you want to allow guest users as well as signed-in users to access the developer portal on WordPress, you can enable unauthenticated access. In **Restrict access**, select **Allow unauthenticated access**. [Learn more](../app-service/overview-authentication-authorization.md#authorization-behavior)
The identity provider is added to the app service.
Add a custom stylesheet for the API Management developer portal.
## Step 9: Sign into the API Management developer portal deployed on WordPress
-Sign into the WordPress site to see your new API Management developer portal deployed on WordPress and hosted on App Service.
+Access the WordPress site to see your new API Management developer portal deployed on WordPress and hosted on App Service.
+
+1. In a new browser window, navigate to your WordPress site, substituting the name of your app service in the following URL: `https://<yourapp-service-name>.azurewebsites.net`.
+1. When prompted, sign in using Microsoft Entra ID credentials for a developer account. If unauthenticated access to the developer portal is enabled, select **Sign in** on the home page of the developer portal.
> [!NOTE] > You can only sign in to the developer portal on WordPress using Microsoft Entra ID credentials. Basic authentication isn't supported.
-1. In a new browser window, navigate to your WordPress site, substituting the name of your app service in the following URL: `https://<yourapp-service-name>.azurewebsites.net`
-1. When prompted, sign in using Microsoft Entra ID credentials for a developer account.
-- You can now use the following features of the API Management developer portal: * Sign into the portal
api-management Emit Metric Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/emit-metric-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
The `emit-metric` policy sends custom metrics in the specified format to Applica
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
### Usage notes
api-management Find And Replace Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/find-and-replace-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
The `find-and-replace` policy finds a request or response substring and replaces
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
## Example
api-management Forward Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/forward-request-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
The `forward-request` policy forwards the incoming request to the backend servic
- [**Policy sections:**](./api-management-howto-policies.md#sections) backend - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
## Examples
api-management Get Authorization Context Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/get-authorization-context-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
class Authorization
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption ### Usage notes
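As a sketch, assuming a credential provider and connection are already configured in the instance (the IDs here are hypothetical placeholders), usage might look like:

```xml
<get-authorization-context
    provider-id="github-01"
    authorization-id="auth-01"
    context-variable-name="auth-context"
    identity-type="managed"
    ignore-error="false" />
```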
api-management Graphql Schema Resolve Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/graphql-schema-resolve-api.md
Last updated 05/31/2023
[!INCLUDE [api-management-graphql-intro.md](../../includes/api-management-graphql-intro.md)] + In this article, you'll: > [!div class="checklist"] > * Import a GraphQL schema to your API Management instance
api-management How To Create Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-create-workspace.md
Title: Set up a workspace in Azure API Management
-description: Learn how to create a workspace in Azure API Management. Workspaces allow decentralized API development teams to own and productize their own APIs.
+description: Learn how to create a workspace and a workspace gateway in Azure API Management. Workspaces allow decentralized API development teams to own and productize their own APIs.
Previously updated : 03/07/2023 Last updated : 07/10/2024
-# Set up a workspace
+# Create and manage a workspace in Azure API Management
[!INCLUDE [api-management-availability-premium](../../includes/api-management-availability-premium.md)]
-Set up a [workspace](workspaces-overview.md) (preview) to enable a decentralized API development team to manage and productize their own APIs, while a central API platform team maintains the API Management infrastructure. After you create a workspace and assign permissions, workspace collaborators can create and manage their own APIs, products, subscriptions, and related resources.
+Set up a [workspace](workspaces-overview.md) to enable an API team to manage and productize their own APIs, while providing the API platform team with the tools to observe, govern, and maintain the API Management platform. After you create a workspace and assign permissions, workspace collaborators can create and manage their own APIs, products, subscriptions, and related resources.
++
+Follow the steps in this article to:
+
+* Create an API Management workspace and a workspace gateway using the Azure portal
+* Optionally, isolate the workspace gateway in an Azure virtual network
+* Assign permissions to the workspace
> [!NOTE]
-> * Workspaces are a preview feature of API Management and subject to certain [limitations](workspaces-overview.md#preview-limitations).
-> * Workspaces are supported in API Management REST API version 2022-09-01-preview or later.
-> * For pricing considerations, see [API Management pricing](https://azure.microsoft.com/pricing/details/api-management/).
+> Currently, creating a workspace gateway is a long-running operation that can take three hours or more to complete.
## Prerequisites
-* An API Management instance. If you need to, [create one](get-started-create-service-instance.md).
-
+* An API Management instance. If you need to, [create one](get-started-create-service-instance.md) in a supported tier.
+* **Owner** or **Contributor** role on the resource group where the API Management instance is deployed, or equivalent permissions to create resources in the resource group.
+* (Optional) An existing or new Azure virtual network and subnet to isolate the workspace gateway's inbound and outbound traffic. For configuration options and requirements, see [Network resource requirements for workspace gateways](virtual-network-workspaces-resources.md).
+
## Create a workspace - portal 1. Sign in to the [Azure portal](https://portal.azure.com), and navigate to your API Management instance.
-1. In the left menu, select **Workspaces** (preview) > **+ Add**.
-
-1. In the **Create workspace** window, enter a descriptive **Name**, resource **Id**, and optional **Description** for the workspace. Select **Save**.
+1. In the left menu, under **APIs**, select **Workspaces** > **+ Add**.
+
+1. On the **Basics** tab, enter a descriptive **Display name**, resource **Name**, and optional **Description** for the workspace. Select **Next**.
+
+1. On the **Gateway** tab, configure settings for the workspace gateway:
-The new workspace appears in the list on the **Workspaces** page. Select the workspace to manage its settings and resources.
+ * In **Gateway details**, enter a gateway name and select the number of scale **Units**. The gateway costs are based on the number of units you select. For more information, see [API Management pricing](https://aka.ms/apimpricing).
+ * In **Network**, select a **Network configuration** for your workspace gateway.
+
+ > [!IMPORTANT]
+ > Plan your workspace's network configuration carefully. You can't change the network configuration after you create the workspace.
+
+ * If you select a network configuration that includes private inbound or private outbound network access, select a **Virtual network** and **Subnet** to isolate the workspace gateway, or create a new one. For network requirements, see [Network resource requirements for workspace gateways](virtual-network-workspaces-resources.md).
+
+1. Select **Next**. After validation completes, select **Create**.
+
+It can take from several minutes to several hours to create the workspace, workspace gateway, and related resources. To track the deployment progress in the Azure portal, go to the gateway's resource group. In the left menu, under **Settings**, select **Deployments**.
+
+After the deployment completes, the new workspace appears in the list on the **Workspaces** page. Select the workspace to manage its settings and resources.
+
+> [!NOTE]
+> * To view the gateway runtime hostname and other gateway details, select the workspace in the portal. Under **Deployment + infrastructure**, select **Gateways**, and select the name of the workspace's gateway.
+> * While the workspace gateway is being created, runtime calls to the workspace's APIs won't succeed.
## Assign users to workspace - portal After creating a workspace, assign permissions to users to manage the workspace's resources. Each workspace user must be assigned both a service-scoped workspace RBAC role and a workspace-scoped RBAC role, or granted equivalent permissions using custom roles.
+To manage the workspace gateway, we recommend also assigning workspace users an Azure-provided RBAC role scoped to the workspace gateway.
+ > [!NOTE]
+ > For easier management, set up Microsoft Entra groups to assign workspace permissions to multiple users.
After creating a workspace, assign permissions to users to manage the workspace'
### Assign a workspace-scoped role
-1. In the menu for your API Management instance, select **Workspaces (preview)** > the name of the workspace that you created.
+1. In the menu for your API Management instance, under **APIs**, select **Workspaces** > the name of the workspace that you created.
1. In the **Workspace** window, select **Access control (IAM)**> **+ Add**.
-1. Assign one of the following workspace-scoped roles to the workspace members to manage workspace APIs and other resources.
+1. Assign one of the following workspace-scoped roles to the workspace members so that they can manage workspace APIs and other resources.
* **API Management Workspace Reader** * **API Management Workspace Contributor** * **API Management Workspace API Developer** * **API Management Workspace API Product Manager**
-## Migrate resources to a workspace
+### Assign a gateway-scoped role
+
+1. Sign in to the [Azure portal](https://portal.azure.com), and navigate to your API Management instance.
+
+1. In the left menu, under **APIs**, select **Workspaces** > the name of your workspace.
+
+1. In the left menu of the workspace, select **Gateways**, and select the workspace gateway.
+
+1. In the left menu, select **Access control (IAM)** > **+ Add**.
+
+1. Assign one of the following roles to each member of the workspace. At minimum, we recommend the **Reader** role, which lets members view the gateway's settings. **Owners** and **Contributors** can manage the gateway's settings, including scaling the gateway.
+
+ * **Owner**
+ * **Contributor**
+ * **Reader**
+
+## Get started with your workspace
+
+Depending on their role in the workspace, users might have permissions to create APIs, products, subscriptions, and other resources, or they might have read-only access to some or all of them.
+
+To get started managing, protecting, and publishing APIs in a workspace, see the following guidance.
++
+|Resource |Guide |
+|---|---|
+|APIs | [Tutorial: Import and publish your first API](import-and-publish.md) |
+|Products | [Tutorial: Create and publish a product](api-management-howto-add-products.md) |
+|Subscriptions | [Subscriptions in Azure API Management](api-management-subscriptions.md)<br/><br/>[Create subscriptions in API Management](api-management-howto-create-subscriptions.md) |
+|Policies | [Tutorial: Transform and protect your API](transform-api.md)<br/><br/>[Policies in Azure API Management](api-management-howto-policies.md)<br/><br/>[Set or edit API Management policies](set-edit-policies.md) |
+|Named values | [Manage secrets using named values](api-management-howto-properties.md) |
+| Backends | [Use backends in Azure API Management](backends.md) |
+|Policy fragments | [Reuse policy configurations in your API Management policy definitions](policy-fragments.md) |
+| Schemas | [Validate content](validate-content-policy.md) |
+| Groups | [Create and use groups to manage developer accounts](api-management-howto-create-groups.md) |
+| Notifications | [How to configure notifications and notification templates](api-management-howto-configure-notifications.md) |
-The open source [Azure API Management workspaces migration tool](https://github.com/Azure-Samples/api-management-workspaces-migration) can help you with the initial setup of resources in the workspace. Use the tool to migrate selected service-level APIs with their dependencies from an Azure API Management instance to a workspace.
-## Next steps
+## Related content
-* Workspace collaborators can get started [managing APIs and other resources in their API Management workspace](api-management-in-workspace.md)
+* Learn more about [workspaces in Azure API Management](workspaces-overview.md).
+* [Use a virtual network to secure inbound or outbound traffic for Azure API Management](virtual-network-concepts.md)
api-management Howto Use Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/howto-use-analytics.md
Azure API Management provides analytics for your APIs so that you can analyze th
:::image type="content" source="media/howto-use-analytics/analytics-report-portal.png" alt-text="Screenshot of API analytics in the portal." lightbox="media/howto-use-analytics/analytics-report-portal.png"::: ## About API analytics
api-management Http Data Source Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/http-data-source-policy.md
Previously updated : 05/02/2024 Last updated : 07/23/2024
api-management Import App Service As Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/import-app-service-as-api.md
This article shows how to import an Azure Web App to Azure API Management and test the imported API, using the Azure portal.
-> [!NOTE]
-> You can use the API Management Extension for Visual Studio Code to import and manage your APIs. Follow the [API Management Extension tutorial](visual-studio-code-tutorial.md) to install and get started.
In this article, you learn how to:
api-management Import Container App With Oas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/import-container-app-with-oas.md
[!INCLUDE [api-management-availability-all-tiers](../../includes/api-management-availability-all-tiers.md)]
-This article shows how to import an Azure Container App to Azure API Management and test the imported API using the Azure portal. In this article, you learn how to:
+This article shows how to import an Azure Container App to Azure API Management and test the imported API using the Azure portal.
++
+In this article, you learn how to:
> [!div class="checklist"] > * Import a Container App that exposes a Web API
api-management Import Function App As Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/import-function-app-as-api.md
Azure API Management supports importing Azure Function Apps as new APIs or appending them to existing APIs. The process automatically generates a host key in the Azure Function App, which is then assigned to a named value in Azure API Management. + This article walks through importing and testing an Azure Function App as an API in Azure API Management. You will learn how to:
api-management Import Logic App As Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/import-logic-app-as-api.md
This article shows how to import a Logic App as an API and test the imported API. + In this article, you learn how to: > [!div class="checklist"]
api-management Include Fragment Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/include-fragment-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
The policy inserts the policy fragment as-is at the location you select in the p
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
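For reference, a minimal `include-fragment` usage might look like this; the fragment ID is a placeholder for a fragment defined in your instance:

```xml
<!-- Inserts the policy fragment named "myFragment" at this point -->
<include-fragment fragment-id="myFragment" />
```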
## Example
api-management Invoke Dapr Binding Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/invoke-dapr-binding-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
The policy assumes that Dapr runtime is running in a sidecar container in the sa
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, on-error-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) self-hosted ### Usage notes
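As a sketch, assuming a Dapr output binding named `my-binding` exists (the binding name and payload are illustrative), an invocation might look like:

```xml
<invoke-dapr-binding name="my-binding">
    <metadata>
        <item key="source">api-management</item>
    </metadata>
    <data>Hello from API Management</data>
</invoke-dapr-binding>
```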
api-management Ip Filter Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/ip-filter-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024 # Restrict caller IPs
The `ip-filter` policy filters (allows/denies) calls from specific IP addresses
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
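A minimal `ip-filter` allow list might look like the following sketch; the addresses are examples only:

```xml
<ip-filter action="allow">
    <address>13.66.201.169</address>
    <address-range from="13.66.140.128" to="13.66.140.143" />
</ip-filter>
```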
### Usage notes
api-management Json To Xml Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/json-to-xml-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
The `json-to-xml` policy converts a request or response body from JSON to XML.
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, on-error - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
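For reference, a minimal configuration might look like this:

```xml
<!-- Always convert the body, regardless of the caller's Accept header -->
<json-to-xml apply="always" consider-accept-header="false" parse-date="false" />
```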
## Example
api-management Jsonp Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/jsonp-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
The `jsonp` policy adds JSON with padding (JSONP) support to an operation or an
- [**Policy sections:**](./api-management-howto-policies.md#sections) outbound - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
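A minimal `jsonp` usage might look like the following; the callback parameter name is a placeholder:

```xml
<!-- Wraps the JSON response in the function named by the "cb" query parameter -->
<jsonp callback-parameter-name="cb" />
```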
### Usage notes
api-management Limit Concurrency Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/limit-concurrency-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
The `limit-concurrency` policy prevents enclosed policies from executing by more
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
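As a sketch, limiting concurrent executions of a forwarded request per connection might look like this; the variable name and count are illustrative:

```xml
<limit-concurrency key="@((string)context.Variables["connectionId"])" max-count="3">
    <forward-request timeout="120" />
</limit-concurrency>
```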
## Example
api-management Log To Eventhub Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/log-to-eventhub-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
The `log-to-eventhub` policy sends messages in the specified format to an event
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted ### Usage notes
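For reference, a minimal sketch that logs a few request properties might look like this; the logger ID is a placeholder for a logger configured in your instance:

```xml
<log-to-eventhub logger-id="contoso-logger">
    @( string.Join(",", DateTime.UtcNow, context.Deployment.ServiceName, context.RequestId) )
</log-to-eventhub>
```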
api-management Mock Response Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/mock-response-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
The `mock-response` policy, as the name implies, is used to mock APIs and operat
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, on-error - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
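A minimal `mock-response` usage might look like this:

```xml
<!-- Returns 200 with a sample body generated from the operation's schema, if one exists -->
<mock-response status-code="200" content-type="application/json" />
```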
### Usage notes
api-management Observability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/observability.md
Previously updated : 06/01/2020 Last updated : 07/12/2024
The table below summarizes all the observability capabilities supported by API M
| Tool | Useful for | Data lag | Retention | Sampling | Data kind | Supported Deployment Model(s) |
|:-|:-|:-|:-|:-|:-|:-|
-| **[API Inspector](api-management-howto-api-inspector.md)** | Testing and debugging | Instant | Last 100 traces | Turned on per request | Request traces | Managed, Self-hosted, Azure Arc |
+| **[API Inspector](api-management-howto-api-inspector.md)** | Testing and debugging | Instant | Last 100 traces | Turned on per request | Request traces | Managed, Self-hosted, Azure Arc, Workspace |
| **[Built-in Analytics](howto-use-analytics.md)** | Reporting and monitoring | Minutes | Lifetime | 100% | Reports and logs | Managed | | **[Azure Monitor Metrics](api-management-howto-use-azure-monitor.md)** | Reporting and monitoring | Minutes | 90 days (upgrade to extend) | 100% | Metrics | Managed, Self-hosted<sup>2</sup>, Azure Arc | | **[Azure Monitor Logs](api-management-howto-use-azure-monitor.md)** | Reporting, monitoring, and debugging | Minutes | 31 days/5GB (upgrade to extend) | 100% (adjustable) | Logs | Managed<sup>1</sup>, Self-hosted<sup>3</sup>, Azure Arc<sup>3</sup> |
-| **[Azure Application Insights](api-management-howto-app-insights.md)** | Reporting, monitoring, and debugging | Seconds | 90 days/5GB (upgrade to extend) | Custom | Logs, metrics | Managed<sup>1</sup>, Self-hosted<sup>1</sup>, Azure Arc<sup>1</sup> |
+| **[Azure Application Insights](api-management-howto-app-insights.md)** | Reporting, monitoring, and debugging | Seconds | 90 days/5GB (upgrade to extend) | Custom | Logs, metrics | Managed<sup>1</sup>, Self-hosted<sup>1</sup>, Azure Arc<sup>1</sup>, Workspace<sup>1</sup> |
| **[Logging through Azure Event Hubs](api-management-howto-log-event-hubs.md)** | Custom scenarios | Seconds | User managed | Custom | Custom | Managed<sup>1</sup>, Self-hosted<sup>1</sup>, Azure Arc<sup>1</sup> | | **[OpenTelemetry](how-to-deploy-self-hosted-gateway-kubernetes-opentelemetry.md#introduction-to-opentelemetry)** | Monitoring | Minutes | User managed | 100% | Metrics | Self-hosted<sup>2</sup> |
api-management Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/private-endpoint.md
You can configure an inbound [private endpoint](../private-link/private-endpoint
* Only the API Management instance's Gateway endpoint supports inbound Private Link connections. * Each API Management instance supports at most 100 Private Link connections.
-* Connections aren't supported on the [self-hosted gateway](self-hosted-gateway-overview.md).
+* Connections aren't supported on the [self-hosted gateway](self-hosted-gateway-overview.md) or on a [workspace gateway](workspaces-overview.md#workspace-gateway).
## Prerequisites
api-management Protect With Defender For Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/protect-with-defender-for-apis.md
Previously updated : 04/20/2023 Last updated : 07/11/2024 # Enable advanced API security features using Microsoft Defender for Cloud
[Defender for APIs](/azure/defender-for-cloud/defender-for-apis-introduction), a capability of [Microsoft Defender for Cloud](/azure/defender-for-cloud/defender-for-cloud-introduction), offers full lifecycle protection, detection, and response coverage for APIs that are managed in Azure API Management. The service empowers security practitioners to gain visibility into their business-critical APIs, understand their security posture, prioritize vulnerability fixes, and detect active runtime threats within minutes. + Capabilities of Defender for APIs include: * Identify external, unused, or unauthenticated APIs
api-management Proxy Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/proxy-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
The `proxy` policy allows you to route requests forwarded to backends via an HTT
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
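For reference, routing backend calls through a proxy might look like the following sketch; the URL is illustrative, and the credentials reference hypothetical named values:

```xml
<proxy url="http://192.168.1.1:8080" username="{{proxy-user}}" password="{{proxy-password}}" />
```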
## Example
api-management Publish Event Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/publish-event-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
api-management Publish To Dapr Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/publish-to-dapr-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
The policy assumes that Dapr runtime is running in a sidecar container in the sa
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) self-hosted ### Usage notes
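As a sketch, assuming a pub/sub component named `orders-pubsub` (the component, topic, and variable names are illustrative), publishing the request body might look like:

```xml
<publish-to-dapr topic="@("orders-pubsub/new-orders")" response-variable-name="daprResponse">
    @(context.Request.Body.As<string>())
</publish-to-dapr>
```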
api-management Quota By Key Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/quota-by-key-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024 # Set usage quota by key
To understand the difference between rate limits and quotas, [see Rate limits an
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, self-hosted, workspace
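For reference, a per-caller quota might look like the following sketch; the limits are illustrative:

```xml
<!-- 10,000 calls or 40,000 KB per hour, counted per caller IP address -->
<quota-by-key calls="10000" bandwidth="40000" renewal-period="3600"
    counter-key="@(context.Request.IpAddress)" />
```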
### Usage notes
api-management Quota Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/quota-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
To understand the difference between rate limits and quotas, [see Rate limits an
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound - [**Policy scopes:**](./api-management-howto-policies.md#scopes) product-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
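A minimal `quota` configuration at product scope might look like this; the limits are illustrative:

```xml
<!-- 10,000 calls or 40,000 KB per hour for each subscription to the product -->
<quota calls="10000" bandwidth="40000" renewal-period="3600" />
```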
### Usage notes
api-management Rate Limit By Key Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/rate-limit-by-key-policy.md
Previously updated : 05/23/2024 Last updated : 07/23/2024
To understand the difference between rate limits and quotas, [see Rate limits an
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, self-hosted, workspace
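For reference, a per-IP rate limit might look like the following sketch; the limits are illustrative:

```xml
<!-- At most 10 calls per 60 seconds from each caller IP address -->
<rate-limit-by-key calls="10" renewal-period="60" counter-key="@(context.Request.IpAddress)" />
```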
### Usage notes
api-management Rate Limit Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/rate-limit-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
To understand the difference between rate limits and quotas, [see Rate limits an
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound - [**Policy scopes:**](./api-management-howto-policies.md#scopes) product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
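A minimal `rate-limit` usage might look like this; the values are illustrative:

```xml
<!-- At most 20 calls per 90 seconds per subscription -->
<rate-limit calls="20" renewal-period="90" />
```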
### Usage notes
api-management Redirect Content Urls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/redirect-content-urls-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
The `redirect-content-urls` policy rewrites (masks) links in the response body s
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
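For reference, the policy takes no attributes; a typical outbound usage looks like this sketch:

```xml
<outbound>
    <redirect-content-urls />
</outbound>
```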
### Usage notes
api-management Retry Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/retry-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
The `retry` policy may contain any other policies as its child elements.
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
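As a sketch, retrying a backend call on a 500 response might look like this; the condition and counts are illustrative:

```xml
<retry condition="@(context.Response.StatusCode == 500)" count="3" interval="10" first-fast-retry="true">
    <forward-request buffer-request-body="true" />
</retry>
```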
## Examples
api-management Return Response Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/return-response-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
The `return-response` policy cancels pipeline execution and returns either a def
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
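For reference, returning a custom 401 response might look like the following sketch:

```xml
<return-response>
    <set-status code="401" reason="Unauthorized" />
    <set-header name="WWW-Authenticate" exists-action="override">
        <value>Bearer error="invalid_token"</value>
    </set-header>
</return-response>
```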
### Usage notes
api-management Rewrite Uri Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/rewrite-uri-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
This policy can be used when a human and/or browser-friendly URL should be trans
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
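A minimal `rewrite-uri` usage might look like this; the template and its `{orderId}` parameter are illustrative:

```xml
<!-- A public operation URL such as /orders/{orderId} is rewritten to the backend path below -->
<rewrite-uri template="/api/v2/orders/{orderId}" />
```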
### Usage notes
api-management Send Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/send-request-policy.md
The `send-request` policy sends the provided request to the specified URL, waiti
- **[Policy sections:](./api-management-howto-policies.md#sections)** inbound, outbound, backend, on-error - **[Policy scopes:](./api-management-howto-policies.md#scopes)** global, workspace, product, API, operation-- **[Gateways:](api-management-gateways-overview.md)** dedicated, consumption, self-hosted
+- **[Gateways:](api-management-gateways-overview.md)** dedicated, consumption, self-hosted, workspace
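For reference, a side call whose response is inspected later in the pipeline might look like the following sketch; the URL and variable name are hypothetical:

```xml
<send-request mode="new" response-variable-name="sideCall" timeout="20" ignore-error="true">
    <set-url>https://contoso.example/validate</set-url>
    <set-method>GET</set-method>
</send-request>
```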
### Usage notes
api-management Set Backend Service Dapr Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-backend-service-dapr-policy.md
The policy assumes that Dapr runs in a sidecar container in the same pod as the
## Usage - [**Policy sections:**](./api-management-howto-policies.md#sections) inbound-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) self-hosted ### Usage notes
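As a sketch, routing to a Dapr service might look like this; the app ID and method are illustrative placeholders:

```xml
<!-- backend-id="dapr" selects the Dapr runtime as the backend -->
<set-backend-service backend-id="dapr" dapr-app-id="echo-app" dapr-method="echo" />
```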
api-management Set Backend Service Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-backend-service-policy.md
- build-2024 Previously updated : 03/18/2024 Last updated : 07/23/2024
Referencing a backend entity allows you to manage the backend service base URL a
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, backend - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
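A minimal usage that references a backend entity might look like this; the backend ID is a placeholder for a backend configured in your instance:

```xml
<set-backend-service backend-id="contoso-backend" />
```

Alternatively, `base-url` can be set directly instead of referencing a backend entity.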
### Usage notes
api-management Set Body Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-body-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
OriginalUrl.
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
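For reference, a small transformation sketch might look like this; the string replacement is illustrative:

```xml
<set-body>@{
    // preserveContent: true lets later policies read the original body too
    var body = context.Request.Body.As<string>(preserveContent: true);
    return body.Replace("internal", "public");
}</set-body>
```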
### Usage notes
api-management Set Header Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-header-policy.md
The `set-header` policy assigns a value to an existing HTTP response and/or requ
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
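A minimal `set-header` usage might look like this; the header name is illustrative:

```xml
<set-header name="X-Request-Id" exists-action="override">
    <value>@(context.RequestId.ToString())</value>
</set-header>
```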
### Usage notes
api-management Set Method Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-method-policy.md
The value of the element specifies the HTTP method, such as `POST`, `GET`, and s
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, on-error - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
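For reference, usage is a single element:

```xml
<!-- Rewrites the incoming HTTP method before the request is forwarded -->
<set-method>POST</set-method>
```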
## Example
api-management Set Query Parameter Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-query-parameter-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
The `set-query-parameter` policy adds, replaces value of, or deletes request que
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, backend - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
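A minimal sketch that supplies a default only when the caller omits the parameter might look like this; the name and value are illustrative:

```xml
<set-query-parameter name="api-version" exists-action="skip">
    <value>2024-06-01</value>
</set-query-parameter>
```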
## Examples
api-management Set Status Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-status-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
The `set-status` policy sets the HTTP status code to the specified value.
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
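For reference, usage is a single element:

```xml
<set-status code="401" reason="Unauthorized" />
```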
## Example
api-management Set Variable Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-variable-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
The `set-variable` policy declares a [context](api-management-policy-expressions
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend, on-error - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
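A minimal `set-variable` usage might look like this sketch; the variable name and expression are illustrative:

```xml
<set-variable name="isMobile" value="@(context.Request.Headers.GetValueOrDefault("User-Agent", "").Contains("Mobile"))" />
```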
## Allowed types
api-management Sql Data Source Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/sql-data-source-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
api-management Trace Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/trace-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
The `trace` policy adds a custom trace into the request tracing output in the te
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
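For reference, a `trace` sketch might look like this; the source and metadata values are illustrative:

```xml
<trace source="PetStore API" severity="verbose">
    <message>@(context.RequestId.ToString())</message>
    <metadata name="Operation Name" value="New-Order" />
</trace>
```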
## Example
api-management Upgrade And Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/upgrade-and-scale.md
Previously updated : 03/21/2024 Last updated : 07/02/2024
You can use the portal to scale your API Management instance. How you scale depe
1. Specify the new number of **Units** - use the slider, or select or type the number. 1. Select **Save**.
+### Add or remove units - workspace gateway
+
+1. Navigate to your API Management instance in the [Azure portal](https://portal.azure.com/).
+1. In the left menu, under **APIs**, select **Workspaces** > the name of your workspace.
+1. In the left menu, under **Deployment + infrastructure**, select **Gateways** > the name of your gateway.
+1. In the left menu, under **Deployment + infrastructure**, select **Scale**.
+1. Specify the new number of **Units** - use the slider, or select or type the number.
+1. Select **Save**.
+ ## Change your API Management service tier 1. Navigate to your API Management instance in the [Azure portal](https://portal.azure.com/).
api-management Validate Azure Ad Token Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-azure-ad-token-policy.md
Previously updated : 06/24/2024 Last updated : 07/23/2024
The `validate-azure-ad-token` policy enforces the existence and validity of a JS
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
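A minimal configuration might look like the following sketch; the tenant and application IDs reference hypothetical named values:

```xml
<validate-azure-ad-token tenant-id="{{tenant-id}}">
    <client-application-ids>
        <application-id>{{client-app-id}}</application-id>
    </client-application-ids>
</validate-azure-ad-token>
```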
### Usage notes
api-management Validate Client Certificate Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-client-certificate-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
For more information about custom CA certificates and certificate authorities, s
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
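For reference, a sketch that validates standard certificate properties and pins a thumbprint might look like this; the thumbprint is a placeholder:

```xml
<validate-client-certificate validate-revocation="true" validate-trust="true"
    validate-not-before="true" validate-not-after="true" ignore-error="false">
    <identities>
        <identity thumbprint="DESIRED-CLIENT-CERT-THUMBPRINT" />
    </identities>
</validate-client-certificate>
```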
## Example
api-management Validate Content Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-content-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
The policy validates the following content in the request or response against th
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, on-error - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
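A minimal sketch that blocks unspecified or oversized JSON bodies might look like this; the size limit and variable name are illustrative:

```xml
<validate-content unspecified-content-type-action="prevent" max-size="102400"
    size-exceeded-action="prevent" errors-variable-name="requestBodyValidation">
    <content type="application/json" validate-as="json" action="prevent" />
</validate-content>
```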
[!INCLUDE [api-management-validation-policy-common](../../includes/api-management-validation-policy-common.md)]
api-management Validate Graphql Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-graphql-request-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
Available actions are described in the following table.
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
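For reference, a minimal configuration might look like this; the limits are illustrative:

```xml
<!-- Rejects queries larger than 100 KB or nested more than 4 levels deep -->
<validate-graphql-request error-variable-name="graphqlErrors" max-size="102400" max-depth="4" />
```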
### Usage notes
api-management Validate Headers Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-headers-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
The `validate-headers` policy validates the response headers against the API sch
- [**Policy sections:**](./api-management-howto-policies.md#sections) outbound, on-error - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
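A minimal `validate-headers` sketch might look like this; the variable name is illustrative:

```xml
<validate-headers specified-header-action="ignore" unspecified-header-action="prevent"
    errors-variable-name="responseHeadersValidation" />
```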
### Usage notes
api-management Validate Jwt Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-jwt-policy.md
Previously updated : 06/25/2024 Last updated : 07/23/2024
The `validate-jwt` policy enforces existence and validity of a supported JSON we
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
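For reference, a minimal sketch using Microsoft Entra metadata might look like this; the tenant and audience values reference hypothetical named values:

```xml
<validate-jwt header-name="Authorization" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized">
    <openid-config url="https://login.microsoftonline.com/{{tenant-id}}/v2.0/.well-known/openid-configuration" />
    <audiences>
        <audience>{{audience}}</audience>
    </audiences>
</validate-jwt>
```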
### Usage notes
api-management Validate Odata Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-odata-request-policy.md
- build-2024 Previously updated : 05/06/2024 Last updated : 07/23/2024
The `validate-odata-request` policy validates the request URL, headers, and para
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
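A minimal configuration might look like this sketch; the variable name and size limit are illustrative:

```xml
<validate-odata-request error-variable-name="odataValidation" max-size="102400" />
```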
### Usage notes
api-management Validate Parameters Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-parameters-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
The `validate-parameters` policy validates the header, query, or path parameters
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
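For reference, a minimal sketch might look like this; the variable name is illustrative:

```xml
<validate-parameters specified-parameter-action="prevent" unspecified-parameter-action="prevent"
    errors-variable-name="requestParametersValidation" />
```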
### Usage notes
api-management Validate Status Code Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-status-code-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
The `validate-status-code` policy validates the HTTP status codes in responses a
- [**Policy sections:**](./api-management-howto-policies.md#sections) outbound, on-error - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
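A minimal configuration might look like the following; the variable name is illustrative:

```xml
<!-- Blocks response status codes that aren't defined in the API schema -->
<validate-status-code unspecified-status-code-action="prevent" errors-variable-name="responseStatusCodeValidation" />
```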
### Usage notes
api-management Virtual Network Injection Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/virtual-network-injection-resources.md
Title: Azure API Management virtual network integration - network resources
+ Title: Azure API Management virtual network injection - network resources
description: Learn about requirements for network resources when you deploy (inject) your API Management instance in an Azure virtual network.
api-management Virtual Network Workspaces Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/virtual-network-workspaces-resources.md
+
+ Title: Azure API Management workspace gateways - VNet integration - network resources
+description: Learn about requirements for network resources when you integrate your API Management workspace gateway in an Azure virtual network.
++++ Last updated : 07/15/2024+++
+# Network resource requirements for integration of a workspace gateway into a virtual network
++
+Network isolation is an optional feature of an API Management [workspace gateway](workspaces-overview.md#workspace-gateway). This article provides network resource requirements when you integrate your gateway in an Azure virtual network. Some requirements differ depending on the desired inbound and outbound access mode. The following modes are supported:
+
+* Public inbound access, private outbound access (Public/Private)
+* Private inbound access, private outbound access (Private/Private)
+
+For information about networking options in API Management, see [Use a virtual network to secure inbound or outbound traffic for Azure API Management](virtual-network-concepts.md).
+++
+## Network location
+
+* The virtual network must be in the same region and Azure subscription as the API Management instance.
+
+## Subnet size
+
+* The subnet size must be `/24` (256 IP addresses).
+* The subnet can't be shared with another Azure resource, including another workspace gateway.
+
+## Subnet delegation
+
+The subnet must be delegated as follows to enable the desired inbound and outbound access.
+
+For information about configuring subnet delegation, see [Add or remove a subnet delegation](../virtual-network/manage-subnet-delegation.md).
+
+#### [Public/Private](#tab/external)
++
+For Public/Private mode, the subnet needs to be delegated to the **Microsoft.Web/serverFarms** service.
++
+> [!NOTE]
+> You might need to register the `Microsoft.Web/serverFarms` resource provider in the subscription so that you can delegate the subnet to the service.
+
+#### [Private/Private](#tab/internal)
+
+For Private/Private mode, the subnet needs to be delegated to the **Microsoft.Web/hostingEnvironments** service.
+++
+> [!NOTE]
+> You might need to register the `Microsoft.Web/hostingEnvironments` resource provider in the subscription so that you can delegate the subnet to the service.
++++
+## Network security group (NSG) rules
+
+A network security group (NSG) must be attached to the subnet to explicitly allow inbound connectivity. Configure the following rules in the NSG. Set the priority of these rules higher than that of the default rules.
+
+#### [Public/Private](#tab/external)
+
+| Source / Destination Port(s) | Direction | Transport protocol | Source | Destination | Purpose |
+|---|---|---|---|---|---|
+| */80 | Inbound | TCP | AzureLoadBalancer | Workspace gateway subnet range | Allow internal health ping traffic |
+| */80,443 | Inbound | TCP | Internet | Workspace gateway subnet range | Allow inbound traffic |
+
+#### [Private/Private](#tab/internal)
+
+| Source / Destination Port(s) | Direction | Transport protocol | Source | Destination | Purpose |
+|---|---|---|---|---|---|
+| */80 | Inbound | TCP | AzureLoadBalancer | Workspace gateway subnet range | Allow internal health ping traffic |
+| */80,443 | Inbound | TCP | Virtual network | Workspace gateway subnet range | Allow inbound traffic |
+++
+## DNS settings for Private/Private configuration
+
+In the Private/Private network configuration, you have to manage your own DNS to enable inbound access to your workspace gateway.
+
+We recommend that you:
+
+1. Configure an Azure [DNS private zone](../dns/private-dns-overview.md).
+1. Link the Azure DNS private zone to the VNet into which you've deployed your workspace gateway.
+
+Learn how to [set up a private zone in Azure DNS](../dns/private-dns-getstarted-portal.md).
++
+### Access on default hostname
+
+When you create an API Management workspace, the workspace gateway is assigned a default hostname. The hostname is visible in the Azure portal on the workspace gateway's **Overview** page, along with its private virtual IP address. The default hostname is in the format `<gateway-name>-<random hash>.gateway.<region>-<number>.azure-api.net`. Example: `team-workspace-123456abcdef.gateway.uksouth-01.azure-api.net`.
+
+> [!NOTE]
+> The workspace gateway only responds to requests to the hostname configured on its endpoint, not its private VIP address.
+
+### Configure DNS record
+
+Create an A record in your DNS server to access the workspace gateway from within your VNet. Map the record to the private VIP address of your workspace gateway.
+
+For testing purposes, you might update the hosts file on a virtual machine in a subnet connected to the VNet in which API Management is deployed. Assuming the private virtual IP address for your workspace gateway is 10.1.0.5, you can map the hosts file as shown in the following example. The hosts mapping file is at `%SystemRoot%\System32\drivers\etc\hosts` (Windows) or `/etc/hosts` (Linux, macOS).
+
+| Internal virtual IP address | Gateway hostname |
+| -- | -- |
+| 10.1.0.5 | `team-workspace-123456abcdef.gateway.westus-01.azure-api.net` |
++
+## Related content
+
+* [Use a virtual network to secure inbound or outbound traffic for Azure API Management](virtual-network-concepts.md)
+* [Workspaces in Azure API Management](workspaces-overview.md)
+++++
api-management Wait Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/wait-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
May contain as child elements only `send-request`, `cache-lookup-value`, and `ch
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, backend - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
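As a sketch, running two side calls in parallel and continuing when both finish might look like this; the URLs and variable names are hypothetical:

```xml
<wait for="all">
    <send-request mode="new" response-variable-name="inventory" timeout="20" ignore-error="true">
        <set-url>https://contoso.example/inventory</set-url>
        <set-method>GET</set-method>
    </send-request>
    <send-request mode="new" response-variable-name="pricing" timeout="20" ignore-error="true">
        <set-url>https://contoso.example/pricing</set-url>
        <set-method>GET</set-method>
    </send-request>
</wait>
```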
## Example
api-management Websocket Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/websocket-api.md
With API Management's WebSocket API solution, API publishers can quickly add a WebSocket API in API Management via the Azure portal, Azure CLI, Azure PowerShell, and other Azure tools. + You can secure WebSocket APIs by applying existing access control policies, like [JWT validation](validate-jwt-policy.md). You can also test WebSocket APIs using the API test consoles in both Azure portal and developer portal. Building on existing observability capabilities, API Management provides metrics and logs for monitoring and troubleshooting WebSocket APIs. In this article, you will:
api-management Workspaces Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/workspaces-overview.md
Title: Workspaces in Azure API Management | Microsoft Docs
-description: Learn about workspaces (preview) in Azure API Management. Workspaces allow decentralized API development teams to manage and productize their own APIs, while a central API platform team maintains the API Management infrastructure.
+description: Learn about Azure API Management workspaces. With workspaces, decentralized API development teams manage and productize APIs in a common service infrastructure.
- Previously updated : 01/25/2024+ Last updated : 07/19/2024 -
+#customer intent: As administrator of an API Management instance, I want to learn about using workspaces to manage APIs in a decentralized way, so that I can enable my development teams to manage and productize their own APIs.
+
-# Workspaces in Azure API Management
+# What are workspaces in Azure API Management?
[!INCLUDE [api-management-availability-premium](../../includes/api-management-availability-premium.md)]
-In API Management, *workspaces* allow decentralized API development teams to manage and productize their own APIs, while a central API platform team maintains the API Management infrastructure. Each workspace contains APIs, products, subscriptions, and related entities that are accessible only to the workspace collaborators. Access is controlled through Azure role-based access control (RBAC).
+In API Management, *workspaces* bring a new level of autonomy to an organization's API teams, enabling them to create, manage, and publish APIs more quickly, reliably, and securely within an API Management service. By providing isolated administrative access and API runtime, workspaces empower API teams while allowing the API platform team to retain oversight. This oversight includes central monitoring, enforcement of API policies and compliance, and publishing APIs for discovery through a unified developer portal.
-> [!NOTE]
-> * Workspaces are a preview feature of API Management and subject to certain [limitations](#preview-limitations).
-> * Workspaces are supported in API Management REST API version 2022-09-01-preview or later.
-> * For pricing considerations, see [API Management pricing](https://azure.microsoft.com/pricing/details/api-management/).
-> * See [upcoming breaking changes](./breaking-changes/workspaces-breaking-changes-june-2024.md) for workspaces.
+Workspaces function like "folders" within an API Management service:
+
+* Each workspace contains APIs, products, subscriptions, named values, and related resources.
+* Access to resources within a workspace is managed through Azure's role-based access control (RBAC) with built-in or custom roles assignable to Microsoft Entra accounts.
+* Each workspace is associated with a *workspace gateway* for routing API traffic to the backend services of APIs in the workspace.
+++
+## Federated API management with workspaces
+
+Workspaces add first-class support for a *federated model* of managing APIs in API Management, in addition to the already supported centralized and siloed models. See the following table for a comparison of these models.
+
+|Model|Description |
+|---|---|
+|**Centralized**<br/><br/>:::image type="content" source="media/workspaces-overview/centralized.png" alt-text="Diagram of the centralized model of Azure API Management." border="false" lightbox="media/workspaces-overview/centralized.png"::: |**Pros**<br/>• Centralized API governance and observability<br/>• Unified developer portal for effective API discovery and onboarding<br/>• Cost-efficiency of the infrastructure<br/><br/>**Cons**<br/>• No segregation of administrative permissions between teams<br/>• API gateway is a single point of failure<br/>• Inability to attribute runtime issues to specific teams<br/>• Burden on platform team to facilitate collaboration may reduce API growth |
+|**Siloed**<br/><br/>:::image type="content" source="media/workspaces-overview/siloed.png" alt-text="Diagram of the siloed model of Azure API Management." border="false" lightbox="media/workspaces-overview/siloed.png"::: |**Pros**<br/>• Segregation of administrative permissions between teams increases productivity and security<br/>• Segregation of API runtime between teams increases API reliability, resiliency, and security<br/>• Runtime issues are contained and attributable to specific teams<br/><br/>**Cons**<br/>• Lack of centralized API governance and observability<br/>• Lack of unified developer portal<br/>• Increased cost and harder platform management |
+|**Federated**<br/><br/>:::image type="content" source="media/workspaces-overview/federated.png" alt-text="Diagram of the federated model of Azure API Management." border="false" lightbox="media/workspaces-overview/federated.png"::: |**Pros**<br/>• Centralized API governance and observability<br/>• Unified developer portal for effective API discovery and onboarding<br/>• Segregation of administrative permissions between teams increases productivity and security<br/>• Segregation of API runtime between teams increases API reliability, resiliency, and security<br/>• Runtime issues are contained and attributable to specific teams<br/><br/>**Cons**<br/>• Platform cost and management difficulty greater than in the centralized model but lower than in the siloed model |
## Example scenario overview
-An organization that manages APIs using Azure API Management may have multiple development teams that develop, define, maintain, and productize different sets of APIs. Workspaces allow these teams to use API Management to manage and access their APIs separately, and independently of managing the service infrastructure.
+An organization that manages APIs using Azure API Management may have multiple development teams that develop, define, maintain, and productize different sets of APIs. Workspaces allow these teams to use API Management to manage, access, and secure their APIs separately, and independently of managing the service infrastructure.
The following is a sample workflow for creating and using a workspace.
-1. A central API platform team that manages the API Management instance creates a workspace and assigns permissions to workspace collaborators using RBAC roles - for example, permissions to create or read resources in the workspace.
+1. A central API platform team that manages the API Management instance creates a workspace and assigns permissions to workspace collaborators using RBAC roles - for example, permissions to create or read resources in the workspace. A dedicated API gateway is also created for the workspace.
1. A central API platform team uses DevOps tools to create a DevOps pipeline for APIs in that workspace.
1. Workspace members develop, publish, productize, and maintain APIs in the workspace.
-1. The central API platform team manages the infrastructure of the service, such as network connectivity, monitoring, resiliency, and enforcement of all-APIs policies.
+1. The central API platform team manages the infrastructure of the service, such as monitoring, resiliency, and enforcement of all-APIs policies.
+
+## API management in a workspace
-## Workspace features
+Teams manage their own APIs, products, subscriptions, backends, policies, loggers, and other resources within workspaces. See the API Management [REST API reference](/rest/api/apimanagement/workspace?view=rest-apimanagement-2023-09-01-preview&preserve-view=true) for a full list of resources and operations supported in workspaces.
-The following resources can be managed in the workspaces preview.
+While workspaces are managed independently from the API Management service and other workspaces, by design they can reference selected service-level resources. See [Workspaces and other API Management features](#workspaces-and-other-api-management-features), later in this article.
-### APIs and policies
+## Workspace gateway
-* Create and manage APIs and API operations, including API version sets, API revisions, and API policies.
+Each workspace can be associated with workspace gateways to enable runtime of APIs managed within the workspace. The workspace gateway is a standalone Azure resource with the same core functionality as the gateway built into your API Management service.
-* Apply a policy for all APIs in a workspace.
+Workspace gateways are managed independently from the API Management service and from each other. They ensure isolation of runtime between workspaces, increasing API reliability, resiliency, and security, and enabling attribution of runtime issues to workspaces.
-* Describe APIs with tags from the workspace level.
+* For information on the cost of workspace gateways, see [API Management pricing](https://aka.ms/apimpricing).
+* For a detailed comparison of API Management gateways, see [API Management gateways overview](api-management-gateways-overview.md).
-* Define named values, policy fragments, and schemas for request and response validation for use in workspace-scoped policies.
+### Gateway hostname
+
+Each association of a workspace to a workspace gateway creates a unique hostname for APIs managed in that workspace. Default hostnames follow the pattern `<workspace-name>-<hash>.gateway.<region>.azure-api.net`. Currently, custom hostnames aren't supported for workspace gateways.
> [!NOTE]
-> In a workspace, policy scopes are as follows:
-> All APIs (service) > All APIs (workspace) > Product > API > API operation
+> Through October 2024, APIs in workspaces can be accessed at runtime using the gateway hostname of your API Management instance in addition to the hostname of the workspace gateway.
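+
+For example, a runtime call to an API in a workspace might look like the following sketch; the hostname follows the default pattern above, and all values are placeholders:
+
+```Bash
+# Placeholder hostname, API path, and subscription key
+curl "https://contoso-workspace-ab12cd.gateway.eastus.azure-api.net/orders/status" \
+  -H "Ocp-Apim-Subscription-Key: <subscription-key>"
+```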
-### Users and groups
+### Network isolation
-* Organize users (from the service level) into groups in a workspace.
+A workspace gateway can optionally be configured in a private virtual network to isolate inbound and/or outbound traffic. If configured, the workspace gateway must use a dedicated subnet in the virtual network.
-### Products and subscriptions
+For detailed requirements, see [Network resource requirements for workspace gateways](virtual-network-workspaces-resources.md).
-* Publish APIs with products. APIs in a workspace can only be part of a workspace-level product. Visibility can be configured based on user membership in a workspace-level or a service-level group.
-* Manage access to APIs with subscriptions. Subscriptions requested to an API or product within a workspace are created in that workspace.
+### Scale capacity
-* Publish APIs and products with the developer portal.
+Manage gateway capacity by manually adding or removing scale units, similar to the [units](upgrade-and-scale.md) that can be added to the API Management instance in certain service tiers. The costs of a workspace gateway are based on the number of units you select.
-* Manage administrative email notifications related to resources in the workspace.
+### Regional availability
-## RBAC roles
+Workspace gateways need to be in the same Azure region and subscription as the API Management service.
+
+> [!NOTE]
+> Starting in August 2024, workspace gateway support will be rolled out in the following regions. These regions are a subset of those where API Management is available.
+
+* West US
+* North Central US
+* UK South
+* France Central
+* North Europe
+* East Asia
+* Southeast Asia
+* Australia East
+* Japan East
+
+### Gateway constraints
+The following constraints currently apply to workspace gateways:
+
+* A gateway can be associated only with one workspace
+* A workspace can't be associated with a self-hosted gateway
+* Workspace gateways don't support inbound private endpoints
+* APIs in workspace gateways can't be assigned custom hostnames
+* APIs in workspaces aren't covered by Defender for APIs
+* Workspace gateways don't support the API Management service's credential manager
+* Workspace gateways support only internal cache; external cache isn't supported
+* Workspace gateways don't support synthetic GraphQL APIs and WebSocket APIs
+* Workspace gateways don't support APIs created from Azure resources such as Azure OpenAI Service, App Service, Function Apps, and so on
+* Request metrics can't be split by workspace in Azure Monitor; all workspace metrics are aggregated at the service level
+* Azure Monitor logs are aggregated at the service level; workspace-level logs aren't available
+* Workspace gateways don't support CA certificates
+* Workspace gateways don't support autoscaling
+* Workspace gateways don't support managed identities, including related features like storing secrets in Azure Key Vault and using the `authentication-managed-identity` policy
+
+## RBAC roles for workspaces
Azure RBAC is used to configure workspace collaborators' permissions to read and edit entities in the workspace. For a list of roles, see [How to use role-based access control in API Management](api-management-role-based-access-control.md).
-Workspace members must be assigned both a service-scoped role and a workspace-scoped role, or granted equivalent permissions using custom roles. The service-scoped role enables referencing certain service-level resources from workspace-level resources. For example, organize a user into a workspace-level group to control API and product visibility.
+To manage APIs and other resources in the workspace, workspace members must be assigned roles (or equivalent permissions using custom roles) scoped to the API Management service, the workspace, and the workspace gateway. The service-scoped role enables referencing certain service-level resources from workspace-level resources. For example, organize a user into a workspace-level group to control API and product visibility.
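+
+As a sketch, a workspace-scoped role assignment can be created with the Azure CLI; the role name, identity, and resource IDs below are placeholders, not confirmed values:
+
+```Bash
+# Placeholder role and resource names; pick an actual built-in or custom
+# workspace role from the RBAC article linked above.
+az role assignment create \
+  --assignee "user@contoso.com" \
+  --role "<workspace-scoped-role-name>" \
+  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/<service-name>/workspaces/<workspace-name>"
+```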
> [!NOTE]
> For easier management, set up Microsoft Entra groups to assign workspace permissions to multiple users.
Workspace members must be assigned both a service-scoped role and a workspace-sc
## Workspaces and other API Management features
-* **Infrastructure features** - API Management platform infrastructure features are managed on the service level only, not at the workspace level. These features include:
+Workspaces are designed to be self-contained to maximize segregation of administrative access and API runtime. There are several exceptions to ensure higher productivity and enable platform-wide governance, observability, reusability, and API discovery.
- * Private network connectivity
-
- * API gateways, including scaling, locations, and self-hosted gateways
-
-* **Resource references** - Resources in a workspace can reference other resources in the workspace and users from the service level. They can't reference resources from another workspace.
+* **Resource references** - Resources in a workspace can reference other resources in the workspace and selected resources from the service level, such as users, authorization servers, or built-in user groups. They can't reference resources from another workspace.
- For security reasons, it's not possible to reference service-level resources from workspace-level policies (for example, named values) or by resource names, such as `backend-id` in the [set-backend-service](set-backend-service-policy.md) policy.
+ For security reasons, it's not possible to reference service-level resources from workspace-level policies (for example, named values) or by resource names, such as `backend-id` in the [set-backend-service](set-backend-service-policy.md) policy.
-* **Developer portal** - Workspaces are an administrative concept and aren't surfaced as such to developer portal consumers, including through the developer portal UI and the underlying API. However, APIs and products can be published from a workspace to the developer portal. Because of this, any resource that's used by the developer portal (for example, an API, product, tag, or subscription) needs to have a unique Azure resource name in the service. There can't be any resources of the same type and with the same Azure resource name in the same workspace, in other workspaces, or on the service level.
+ > [!IMPORTANT]
+ > All resources in an API Management service (for example, APIs, products, tags, or subscriptions) need to have unique names, even if they are located in different workspaces. There can't be any resources of the same type and with the same Azure resource name in the same workspace, in other workspaces, or on the service level.
+ >
-* **Deleting a workspace** - Deleting a workspace deletes all its child resources (APIs, products, and so on).
+* **Developer portal** - Workspaces are an administrative concept and aren't surfaced as such to developer portal consumers, including through the developer portal UI and the underlying API. APIs and products within a workspace can be published to the developer portal, just like APIs and products on the service level.
-## Preview limitations
+ > [!NOTE]
+ > API Management supports assigning authorization servers defined on the service level to APIs within workspaces.
+ >
-The following resources aren't currently supported in workspaces:
+## Migrate from preview workspaces
-* Authorization servers (credential providers in credential manager)
+If you created preview workspaces in Azure API Management and want to continue using them, migrate your workspaces to the generally available version by associating a workspace gateway with each workspace.
-* Authorizations (connections to credential providers in credential manager)
+For details and to learn about other changes that could affect your preview workspaces, see [Workspaces breaking changes (March 2025)](breaking-changes/workspaces-breaking-changes-march-2025.md).
-* Backends
+## Deleting a workspace
-* Client certificates
-
-* Current DevOps tooling for API Management
-
-* Diagnostics
-
-* Loggers
-
-* Synthetic GraphQL APIs
-
-* User-assigned managed identity
-
-Therefore, the following sample scenarios aren't currently supported in workspaces:
-
-* Monitoring APIs with workspace-specific configuration
-
-* Managing API backends and importing APIs from Azure services
-
-* Validating client certificates
-
-* Using the credential manager (formerly called authorizations) feature
-
-* Specifying API authorization server information (for example, for the developer portal)
-
-* Publishing workspace APIs to self-hosted gateways
-
-> [!IMPORTANT]
-> All resources in an API Management service need to have unique names, even if they are located in different workspaces.
->
+Deleting a workspace deletes all of its child resources (APIs, products, and so on). If you delete the workspace by using the Azure portal, the associated gateway is also deleted. Deleting a workspace doesn't delete the API Management instance or other workspaces.
+
## Related content

* [Create a workspace](how-to-create-workspace.md)
* [Workspaces breaking changes - June 2024](breaking-changes/workspaces-breaking-changes-june-2024.md)
+* [Workspaces breaking changes - March 2025](breaking-changes/workspaces-breaking-changes-march-2025.md)
+* [Limits - API Management workspaces](../azure-resource-manager/management/azure-subscription-service-limits.md?toc=/azure/api-management/toc.json&bc=/azure/api-management/breadcrumb/toc.json#limitsapi-management-workspaces)
api-management Xml To Json Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/xml-to-json-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
The `xml-to-json` policy converts a request or response body from XML to JSON. T
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, on-error
- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
## Example
api-management Xsl Transform Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/xsl-transform-policy.md
Previously updated : 03/18/2024 Last updated : 07/23/2024
The `xsl-transform` policy applies an XSL transformation to XML in the request o
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound
- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
-- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
### Usage notes
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md
If your migration includes a custom domain suffix, for App Service Environment v
After completing the previous steps, you should continue with migration as soon as possible.

> [!IMPORTANT]
-> Since scaling is blocked during the migration, you should scale your environment to the desired size before starting the migration.
+> Since scaling is blocked during the migration, you should scale your environment to the desired size before starting the migration. If auto-scaling is enabled and a scaling event occurs before the migration starts, you have to wait until the scaling event completes before starting the migration. To avoid this issue, disable auto-scaling before starting the migration. If you need to scale your environment after the migration, you can do so once the migration is complete.
> Migration requires a three to six hour service window for App Service Environment v2 to v3 migrations. Up to a six hour service window is required depending on environment size for v1 to v3 migrations. The service window might be extended in rare cases where manual intervention by the service team is required. During migration, scaling and environment configurations are blocked and the following events occur:
Ensure that there are no locks on your virtual network, resource group, resource
Ensure that no Azure policies are blocking actions that are required for the migration, including subnet modifications and Azure App Service resource creations. Policies that block resource modifications and creations can cause migration to get stuck or fail.
-Since scaling is blocked during the migration, you should scale your environment to the desired size before starting the migration. If you need to scale your environment after the migration, you can do so once the migration is complete.
+Since scaling is blocked during the migration, you should scale your environment to the desired size before starting the migration. If you need to scale your environment after the migration, you can do so once the migration is complete. If auto-scaling is enabled and a scaling event occurs before the migration starts, your migration is blocked until the scaling event completes. To avoid this issue, disable auto-scaling before starting the migration, as shown in the sketch below.
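+
+For example, the following disables an autoscale setting with the Azure CLI (resource names are placeholders):
+
+```Bash
+# Disable the autoscale setting that targets your App Service plan
+az monitor autoscale update \
+  --resource-group <resource-group> \
+  --name <autoscale-setting-name> \
+  --enabled false
+```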
::: zone pivot="experience-azcli"
app-service Side By Side Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/side-by-side-migrate.md
After completing the previous steps, you should continue with migration as soon
There's no application downtime during the migration, but as in the IP generation step, you can't scale, modify your existing App Service Environment, or deploy apps to it during this process.

> [!IMPORTANT]
-> Since scaling is blocked during the migration, you should scale your environment to the desired size before starting the migration.
+> Since scaling is blocked during the migration, you should scale your environment to the desired size before starting the migration. If auto-scaling is enabled and a scaling event occurs before the migration starts, you have to wait until the scaling event completes before starting the migration. To avoid this issue, disable auto-scaling before starting the migration. If you need to scale your environment after the migration, you can do so once the migration is complete.
> This step is also where you decide if you want to enable zone redundancy for your new App Service Environment v3. Zone redundancy can be enabled as long as your App Service Environment v3 is [in a region that supports zone redundancy](./overview.md#regions).
Ensure that no Azure policies are blocking actions that are required for the mig
Since your App Service Environment v3 is in a different subnet in your virtual network, you need to ensure that you have an available subnet in your virtual network that meets the [subnet requirements for App Service Environment v3](./networking.md#subnet-requirements). The subnet you select must also be able to communicate with the subnet that your existing App Service Environment is in. Ensure there's nothing blocking communication between the two subnets. If you don't have an available subnet, you need to create one before migrating. Creating a new subnet might involve increasing your virtual network address space. For more information, see [Create a virtual network and subnet](../../virtual-network/manage-virtual-network.yml).
-Since scaling is blocked during the migration, you should scale your environment to the desired size before starting the migration. If you need to scale your environment after the migration, you can do so once the migration is complete.
+Since scaling is blocked during the migration, you should scale your environment to the desired size before starting the migration. If you need to scale your environment after the migration, you can do so once the migration is complete. If auto-scaling is enabled and a scaling event occurs before the migration starts, your migration is blocked until the scaling event completes. To avoid this issue, disable auto-scaling before starting the migration.
Follow the steps described here in order and as written, because you're making Azure REST API calls. We recommend that you use the Azure CLI to make these API calls. For information about other methods, see [Azure REST API reference](/rest/api/azure/).
app-service Version Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/version-comparison.md
There's a new version of App Service Environment that is easier to use and runs
|Subnet delegation |Not required |Not required |[Must be delegated to `Microsoft.Web/hostingEnvironments`](networking.md#subnet-requirements) | |Subnet size|An App Service Environment v1 with no App Service plans uses 12 addresses before you create an app. If you use an ILB App Service Environment v1, then it uses 13 addresses before you create an app. As you scale out, infrastructure roles are added at every multiple of 15 and 20 of your App Service plan instances. |An App Service Environment v2 with no App Service plans uses 12 addresses before you create an app. If you use an ILB App Service Environment v2, then it uses 13 addresses before you create an app. As you scale out, infrastructure roles are added at every multiple of 15 and 20 of your App Service plan instances. |Any particular subnet has five addresses reserved for management purposes. In addition to the management addresses, App Service Environment v3 dynamically scales the supporting infrastructure, and uses between 4 and 27 addresses, depending on the configuration and load. You can use the remaining addresses for instances in the App Service plan. The minimal size of your subnet can be a /27 address space (32 addresses). | |DNS fallback |Azure DNS |Azure DNS |[Ensure that you have a forwarder to a public DNS or include Azure DNS in the list of custom DNS servers](migrate.md#in-place-migration-feature-limitations) |
+|Azure Application Gateway version compatibility |[v1](../../application-gateway/overview.md), [v2](../../application-gateway/overview-v2.md) |[v1](../../application-gateway/overview.md), [v2](../../application-gateway/overview-v2.md) |[v2](../../application-gateway/overview-v2.md) |
### Scaling
application-gateway Application Gateway Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-diagnostics.md
You have the following options to store the logs in your preferred location.
* [AGWFirewallLogs](/azure/azure-monitor/reference/tables/agwfirewalllogs)

> [!NOTE]
-> The resource specific option is currently available in all **public regions**.<br>
+> The resource specific option is currently available in all **clouds**.<br>
> Existing users can continue using Azure Diagnostics, or can opt for dedicated tables by switching the toggle in Diagnostic settings to **Resource specific**, or to **Dedicated** in API destination. Dual mode isn't possible. The data in all the logs can either flow to Azure Diagnostics or to dedicated tables. However, you can have multiple diagnostic settings where one data flow goes to Azure Diagnostics and another uses resource-specific tables at the same time.

**Selecting the destination table in Log Analytics:** All Azure services eventually use the resource-specific tables. As part of this transition, you can select the Azure Diagnostics or resource-specific table in the diagnostic setting by using a toggle button. The toggle is set to **Resource specific** by default, and in this mode, logs for newly selected categories are sent to dedicated tables in Log Analytics, while existing streams remain unchanged. See the following example.
application-gateway Prometheus Grafana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/prometheus-grafana.md
+
+ Title: Configure Application Gateway for Containers for Prometheus and Grafana
+description: Configure Application Gateway for Containers metrics to be sent to Prometheus and displayed on Grafana.
+++++ Last updated : 07/09/2024+++
+# Configure Application Gateway for Containers for Prometheus and Grafana
+
+Establishing monitoring for Application Gateway for Containers is a crucial part of successful operations. First, it allows you to visualize how traffic is controlled, providing actionable insights that help optimize performance and troubleshoot issues promptly. Second, monitoring enhances security measures by providing valuable insights during investigations, ensuring that your gateway remains secure and resilient against threats. Implementing monitoring for your Application Gateway for Containers not only supports ongoing performance optimization but also strengthens your overall security posture by enabling proactive detection and response capabilities.
+
+You can monitor Azure Application Gateway for Containers resources in the following ways. Refer to the diagram.
+- [Backend Health Metrics](../../application-gateway/for-containers/application-gateway-for-containers-metrics.md): ALB Controller's metric and backend health endpoints expose several metrics and a summary of backend health. The metrics endpoint can be scraped by Prometheus.
+
+- [Metrics](../../application-gateway/for-containers/application-gateway-for-containers-metrics.md): Metrics and Activity Logs are exposed through Azure Monitor to monitor the performance of your Application Gateway for Containers deployments. The metrics contain numerical values in an ordered set of time-series data.
+
+- [Diagnostic Logs](../../application-gateway/for-containers/diagnostics.md): Access Logs audit all requests made to Application Gateway for Containers. Logs can provide several characteristics, such as the client's IP, requested URL, request latencies, return code, and bytes in and out. An access log is collected every 60 seconds.
+
+[![A diagram of architecture grid.](./media/prometheus-grafana/design-arch.png)](./media/prometheus-grafana/design-arch.png#lightbox)
+
+## Learn about the services
+- [What is Azure Managed Prometheus?](../../azure-monitor/essentials/prometheus-metrics-overview.md)
+ - Why use Prometheus: Azure Managed Prometheus offers native integration and management capabilities, simplifying the setup and management of monitoring infrastructure.
+- [What is Azure Managed Grafana?](../../managed-grafan)
+ - Why use Grafana: Azure Managed Grafana lets you bring together all your telemetry data into one place, with built-in support for Azure Monitor and Azure Data Explorer using Microsoft Entra identities.
+- [What is Azure Log Analytics Workspace?](../../azure-monitor/logs/log-analytics-workspace-overview.md)
+ - Why use Log Analytics Workspace: A Log Analytics workspace scales with your business needs, handling large volumes of log data efficiently and helping you detect and diagnose issues quickly.
+
+## Prerequisites
+
+- An Azure account for work or school and an active subscription. You can create an account for free.
+- Active Kubernetes cluster.
+- Active Application Gateway for Containers deployment.
+- Active resource group with Contributor permission.
+ > [!TIP]
+ > As an alternative to the Contributor role, you can use the following:
+ > - Custom Role with 'microsoft.monitor/accounts/write'.
+ > - Read access.
+ > - Grafana Admin.
+ > - Log Analytics Contributor.
+ > - Monitoring Contributor permissions.
+ > [Learn more about custom roles here](https://aka.ms/custom-roles).
+
+
+
+## Create new resources for configuration
+
+Complete the following steps to configure Prometheus and Grafana.
+1. Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
+2. In **Search resources, service, and docs**, type **Application Gateway for Containers** and select your Kubernetes cluster name.
+
+ [ ![A screenshot of kubernetes insights.](./media/prometheus-grafana/configure.png) ](./media/prometheus-grafana/configure.png#lightbox)
+
+3. Under **Insights**, select **Configure Monitoring**.
+
+ [ ![A screenshot of monitoring metrics.](./media/prometheus-grafana/grafana-container.png) ](./media/prometheus-grafana/grafana-container.png#lightbox)
+
+ Create new instances of Log Analytics, Azure Monitor (Prometheus), and Managed Grafana to store current Kubernetes cluster metrics.
+4. In **Search resources, service, and docs**, type **Managed Prometheus** and select it.
+
+ [ ![A screenshot of Prometheus Managed.](./media/prometheus-grafana/managed-prometheus.png) ](./media/prometheus-grafana/managed-prometheus.png#lightbox)
+
+5. Follow the steps in Azure Monitor to enable the Managed Prometheus service by selecting **Create**.
+6. Create an Azure Monitor workspace instance:
+ 1. On the **Create an Azure Monitor Workspace** page, select a subscription and resource group.
+ 2. Provide a name and a region for the workspace.
+ 3. Select **Review + create** to create the workspace.
+7. Add the Prometheus ConfigMap to your cluster:
+ 1. Copy this file into Notepad or Visual Studio Code: https://github.com/Azure/prometheus-collector/blob/main/otelcollector/configmaps/ama-metrics-settings-configmap.yaml.
+ 2. Modify line 35 to set podannotationnamespaceregex from "" to "azure-alb-system".
+ ```Bash
+ # Setting in ama-metrics-settings-configmap.yaml (line 35)
+ podannotationnamespaceregex = "azure-alb-system"
+ ```
+ 3. Save the file as configprometheus.yaml.
+ 4. Upload the file in Azure Cloud Shell by using **Manage files**.
+ 5. Run the following command:
+ ```Bash
+ # Apply the modified configmap to the cluster
+ kubectl apply -f configprometheus.yaml
+ ```
+8. [Create a managed Grafana](../../managed-grafan).
+ Link a Grafana Workspace:
+ - In **Search resources, service, and docs**, type **Azure Monitor**.
+ - Select your monitor workspace.
+ - Select **Linked Grafana Workspaces**.
+ ![A screenshot of Grafana Link.](./media/prometheus-grafana/grafana-link.png)
+9. Select a Grafana workspace.
+10. Select **Link**.
++
+## Configure Kubernetes cluster for logging
+Now that the resources are created, combine them and configure Prometheus.
+
+1. Cluster configuration
+ 1. In **Search resources, service, and docs**, search for your Kubernetes cluster.
+ 2. Search for insights and select **Configure Monitoring**.
+2. Specify each instance:
+ - Log Analytics workspace: Use the default new Log Analytics workspace created for you.
+ - Managed Prometheus: Select the **Enable Prometheus metrics** checkbox.
+ - Select the advanced setting and specify the Azure Monitor workspace you recently created.
+ - Grafana workspace: Select the **Enable Grafana** checkbox.
+ - Select the advanced setting and specify the Grafana instance you recently created.
+ - Select **Configure**.
+ > [!NOTE]
+ > Check for `ama-metrics` under **Workloads** in your Kubernetes cluster.
+ > [ ![A screenshot of Checking Config.](./media/prometheus-grafana/notes-image.png) ](./media/prometheus-grafana/notes-image.png#lightbox)
+
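+To verify from the command line, here's a quick sketch (assuming the agent pods run in the `kube-system` namespace):
+
+```Bash
+# List the managed Prometheus agent pods
+kubectl get pods -n kube-system | grep ama-metrics
+```
+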
+## Enable diagnostic logs for Application Gateway for Containers
+Activity logging is automatically enabled for every Resource Manager resource. You must enable access logging to start collecting the data available through access logs. To enable logging, configure diagnostic settings in Azure Monitor.
+
+1. [Create a Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md).
+2. Send logs from Application Gateway for Containers to the Log Analytics workspace:
+ 1. Enter **Application Gateway for Containers** in the search box. Select your active Application Gateway for Containers resource.
+ 2. Under **Monitoring**, search for and select **Diagnostic settings**, then add a diagnostic setting.
+ 3. Enter a name, select the **allLogs** checkbox, which includes the Application Gateway for Containers access logs, and select **Send to Log Analytics workspace** with your desired subscription and the Log Analytics workspace you created.
+ [ ![A screenshot of Application Gateway for Containers Diagnostic Setting.](./media/prometheus-grafana/logs-all.png) ](./media/prometheus-grafana/logs-all.png#lightbox)
+
+3. Select **Save**.
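+
+Alternatively, a CLI sketch of the same diagnostic setting (resource IDs are placeholders):
+
+```Bash
+# Send all logs, including access logs, to a Log Analytics workspace
+az monitor diagnostic-settings create \
+  --name agc-access-logs \
+  --resource <application-gateway-for-containers-resource-id> \
+  --workspace <log-analytics-workspace-resource-id> \
+  --logs '[{"categoryGroup":"allLogs","enabled":true}]'
+```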
+
+## Access Grafana dashboard
+In this section, you access the default Grafana dashboards.
+
+1. In **Search resources, service, and docs**, select your **Managed Grafana**.
+2. Select the Grafana resource used for configuring monitoring in the cluster.
+3. Select the endpoint URL in the overview.
+ ![A screenshot of Grafana Endpoint.](./media/prometheus-grafana/grafana-end.png)
+
+4. After entering your user credentials, refer to the Grafana introduction.
+5. In the left sidebar, select **Dashboards** to access the default dashboards.
+ ![A screenshot of the default Grafana dashboard.](./media/prometheus-grafana/grafana-default.png)
+
+## Graph Prometheus metrics on Grafana
+
+In this section, we visualize a sample metric from Prometheus. For all available Prometheus metrics, see [Prometheus Metrics](../../application-gateway/for-containers/application-gateway-for-containers-metrics.md).
+
+1. In the top-right corner, select **Add Dashboard**.
+2. Select **Add Visualization**.
+3. Search for Prometheus under data source.
+![A screenshot of Data Source Prometheus Dashboard.](./media/prometheus-grafana/data-source-prometheus.png)
+4. Select the desired metric. For example, `alb_controller_total_unhealthy_endpoints` shows any unhealthy endpoints of your backend service.
+5. Choose the app as `alb-controller`.
+6. Select the name of the panel, the type of visualization, and the time range.
+ ![A screenshot of Prometheus Logging Test.](./media/prometheus-grafana/prometheus-grafana-viewing.png)
+7. Select **Save + Apply** to add the panel to your dashboard.
+ > [!NOTE]
+ > Add a custom legend by using `{{variable_name}}`.
+
+## Graph access logs and metrics on Grafana
+
+In this section, we visualize sample logs from the Log Analytics workspace. For all available diagnostic logs, see [Diagnostic Logs](../../application-gateway/for-containers/diagnostics.md).
+
+### Workspace for logs
+
+1. In the top-right corner, select **Add** > **Add Dashboard**.
+2. Select **Add Visualization**.
+3. Search for Azure Monitor under data source, then select **Add**.
+![A screenshot of Log Data Source.](./media/prometheus-grafana/log-data-source.png)
+4. Change the service to **Logs**.
+5. Type:
+ ```kusto
+ // Example Kusto Query
+ AGCAccessLogs
+ | project BackendResponseLatency, TimeGenerated
+ ```
+6. Select **Time series** as the visualization.
+7. Select the name, description, and time range of the panel.
+![A screenshot of Application Gateway for Containers Logging Example.](./media/prometheus-grafana/logging-example.png)
+8. Select **Save + Apply** to add the panel to your dashboard.
+
+### Workspace for metrics
+
+1. In the top-right corner, select **Add** > **Add Dashboard**.
+2. Select **Add Visualization**.
+3. Search for Azure Monitor under data source, then select **Add**.
+4. Change the service to **Metrics**.
+5. Select your Application Gateway for Containers instance.
+[![A screenshot of Metrics Log Data Source.](./media/prometheus-grafana/metrics-logs-datasource.png)](./media/prometheus-grafana/metrics-logs-datasource.png#lightbox)
+6. Select the metric namespace `microsoft.servicenetworking/trafficcontrollers`.
+7. Choose a metric such as **total requests** and the type of data visualization.
+[ ![A screenshot of Example Metrics Log Data Source.](./media/prometheus-grafana/metrics-logs.png) ](./media/prometheus-grafana/metrics-logs.png#lightbox)
+8. Select a name, description, and time range for the panel.
+9. Select **Save + Apply** to add the panel to your dashboard.
+
+Congratulations! You set up a monitoring service to enhance your health tracking!
azure-functions Functions Bindings Openai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-openai.md
zone_pivot_groups: programming-languages-set-functions
[!INCLUDE [preview-support](../../includes/functions-openai-support-limitations.md)]
-The Azure OpenAI extension for Azure Functions implements a set of triggers and bindings that enable you to easily integrate features and behaviors of the [Azure OpenAI service](../ai-services/openai/overview.md) into your function code executions.
+The Azure OpenAI extension for Azure Functions implements a set of triggers and bindings that enable you to easily integrate features and behaviors of [Azure OpenAI Service](../ai-services/openai/overview.md) into your function code executions.
Azure Functions is an event-driven compute service that provides a set of [triggers and bindings](./functions-triggers-bindings.md) to easily connect with other Azure services.
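
For example, in a .NET isolated worker project, the extension package can be added as follows; the package name is assumed from the extension's preview naming, so verify it before use:

```Bash
# Add the preview Azure OpenAI extension package (name is an assumption)
dotnet add package Microsoft.Azure.Functions.Worker.Extensions.OpenAI --prerelease
```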
azure-functions Openapi Apim Integrate Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/openapi-apim-integrate-visual-studio.md
The function uses an HTTP trigger that takes two parameters:
The function then calculates how much a repair costs, and how much revenue the turbine could make in a 24-hour period. Parameters are supplied either in the query string or in the payload of a POST request.
-In the Function1.cs project file, replace the contents of the generated class library code with the following code:
+In the Turbine.cs project file, replace the contents of the generated class library code with the following code:
This function code returns a message of `Yes` or `No` to indicate whether an emergency repair is cost-effective. It also returns the revenue opportunity that the turbine represents and the cost to fix the turbine.
azure-functions Security Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/security-concepts.md
Defender for Cloud integrates with your function app in the portal. It provides,
### Log and monitor
-One way to detect attacks is through activity monitoring and logging analytics. Functions integrates with Application Insights to collect log, performance, and error data for your function app. Application Insights automatically detects performance anomalies and includes powerful analytics tools to help you diagnose issues and to understand how your functions are used. To learn more, see [Monitor Azure Functions](functions-monitoring.md).
+One way to detect attacks is through activity monitoring and logging analytics. Functions integrates with Application Insights to collect log, performance, and error data for your function app. Application Insights automatically detects performance anomalies and includes powerful analytics tools to help you diagnose issues and understand how your functions are used. To learn more, see [Monitor Azure Functions](functions-monitoring.md).
-Functions also integrates with Azure Monitor Logs to enable you to consolidate function app logs with system events for easier analysis. You can use diagnostic settings to configure streaming export of platform logs and metrics for your functions to the destination of your choice, such as a Logs Analytics workspace. To learn more, see [Monitoring Azure Functions with Azure Monitor Logs](functions-monitor-log-analytics.md).
+Functions also integrates with Azure Monitor Logs to enable you to consolidate function app logs with system events for easier analysis. You can use diagnostic settings to configure the streaming export of platform logs and metrics for your functions to the destination of your choice, such as a Logs Analytics workspace. To learn more, see [Monitoring Azure Functions with Azure Monitor Logs](functions-monitor-log-analytics.md).
For enterprise-level threat detection and response automation, stream your logs and events to a Logs Analytics workspace. You can then connect Microsoft Sentinel to this workspace. To learn more, see [What is Microsoft Sentinel](../sentinel/overview.md).
HTTP endpoints that are exposed publicly provide a vector of attack for maliciou
### Require HTTPS
-By default, clients can connect to function endpoints by using both HTTP or HTTPS. You should redirect HTTP to HTTPs because HTTPS uses the SSL/TLS protocol to provide a secure connection, which is both encrypted and authenticated. To learn how, see [Enforce HTTPS](../app-service/configure-ssl-bindings.md#enforce-https).
+By default, clients can connect to function endpoints by using either HTTP or HTTPS. You should redirect HTTP to HTTPS because HTTPS uses the SSL/TLS protocol to provide a secure connection, which is both encrypted and authenticated. To learn how, see [Enforce HTTPS](../app-service/configure-ssl-bindings.md#enforce-https).
When you require HTTPS, you should also require the latest TLS version. To learn how, see [Enforce TLS versions](../app-service/configure-ssl-bindings.md#enforce-tls-versions).
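
As a sketch, both settings can be applied with the Azure CLI (names are placeholders):

```Bash
# Enforce HTTPS-only traffic and require TLS 1.2 or later
az functionapp update --name <app-name> --resource-group <resource-group> --set httpsOnly=true
az functionapp config set --name <app-name> --resource-group <resource-group> --min-tls-version 1.2
```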
APIM provides various API security options for incoming requests. To learn more,
### Permissions
-As with any application or service, the goal is run your function app with the lowest possible permissions.
+As with any application or service, the goal is to run your function app with the lowest possible permissions.
#### User management permissions
Permissions are effective at the function app level. The Contributor role is req
#### Organize functions by privilege
-Connection strings and other credentials stored in application settings gives all of the functions in the function app the same set of permissions in the associated resource. Consider minimizing the number of functions with access to specific credentials by moving functions that don't use those credentials to a separate function app. You can always use techniques such as [function chaining](/training/modules/chain-azure-functions-data-using-bindings/) to pass data between functions in different function apps.
+Connection strings and other credentials stored in application settings give all of the functions in the function app the same set of permissions in the associated resource. Consider minimizing the number of functions with access to specific credentials by moving functions that don't use those credentials to a separate function app. You can always use techniques such as [function chaining](/training/modules/chain-azure-functions-data-using-bindings/) to pass data between functions in different function apps.
#### Managed identities
While it's tempting to use a wildcard that allows all sites to access your endpo
### Managing secrets
-To be able to connect to the various services and resources need to run your code, function apps need to be able to access secrets, such as connection strings and service keys. This section describes how to store secrets required by your functions.
+To be able to connect to the various services and resources needed to run your code, function apps need to be able to access secrets, such as connection strings and service keys. This section describes how to store secrets required by your functions.
Never store secrets in your function code.
By default, you store connection strings and secrets used by your function app a
For example, every function app requires an associated storage account, which is used by the runtime. By default, the connection to this storage account is stored in an application setting named `AzureWebJobsStorage`.
-App settings and connection strings are stored encrypted in Azure. They're decrypted only before being injected into your app's process memory when the app starts. The encryption keys are rotated regularly. If you prefer to instead manage the secure storage of your secrets, the app setting should instead be references to Azure Key Vault.
+App settings and connection strings are stored encrypted in Azure. They're decrypted only before being injected into your app's process memory when the app starts. The encryption keys are rotated regularly. If you prefer to manage the secure storage of your secrets, the app settings should instead be references to Azure Key Vault secrets.
-You can also encrypt settings by default in the local.settings.json file when developing functions on your local computer. For more information, see [Encrypt the local settings file](functions-run-local.md#encrypt-the-local-settings-file).
+You can also encrypt settings by default in the `local.settings.json` file when developing functions on your local computer. For more information, see [Encrypt the local settings file](functions-run-local.md#encrypt-the-local-settings-file).
#### Key Vault references
-While application settings are sufficient for most many functions, you may want to share the same secrets across multiple services. In this case, redundant storage of secrets results in more potential vulnerabilities. A more secure approach is to a central secret storage service and use references to this service instead of the secrets themselves.
+While application settings are sufficient for most functions, you may want to share the same secrets across multiple services. In this case, redundant storage of secrets results in more potential vulnerabilities. A more secure approach is to use a central secret storage service and use references to this service instead of the secrets themselves.
-[Azure Key Vault](../key-vault/general/overview.md) is a service that provides centralized secrets management, with full control over access policies and audit history. You can use a Key Vault reference in the place of a connection string or key in your application settings. To learn more, see [Use Key Vault references for App Service and Azure Functions](../app-service/app-service-key-vault-references.md?toc=/azure/azure-functions/toc.json).
+[Azure Key Vault](../key-vault/general/overview.md) is a service that provides centralized secrets management, with full control over access policies and audit history. You can use a Key Vault reference in place of a connection string or key in your application settings. To learn more, see [Use Key Vault references for App Service and Azure Functions](../app-service/app-service-key-vault-references.md?toc=/azure/azure-functions/toc.json).
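+
+For instance, a minimal sketch of setting a Key Vault reference as an app setting with the Azure CLI (names and URIs are placeholders):
+
+```Bash
+# The app resolves the @Microsoft.KeyVault(...) reference at runtime
+az functionapp config appsettings set \
+  --name <app-name> \
+  --resource-group <resource-group> \
+  --settings "StorageConnection=@Microsoft.KeyVault(SecretUri=https://<vault-name>.vault.azure.net/secrets/<secret-name>/)"
+```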
### Identity-based connections
Some Azure Functions binding extensions can be configured to access services usi
### Set usage quotas
-Consider setting a usage quota on functions running in a Consumption plan. When you set a daily GB-sec limit on the sum total execution of functions in your function app, execution is stopped when the limit is reached. This could potentially help mitigate against malicious code executing your functions. To learn how to estimate consumption for your functions, see [Estimating Consumption plan costs](functions-consumption-costs.md).
+Consider setting a usage quota for functions running in a Consumption plan. When you set a daily GB-sec limit on the total execution of functions in your function app, execution is stopped when the limit is reached. This could potentially help mitigate against malicious code executing your functions. To learn how to estimate consumption for your functions, see [Estimating Consumption plan costs](functions-consumption-costs.md).
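+
+As a sketch, the daily quota can be set through the site's `dailyMemoryTimeQuota` property (value in GB-seconds; names and value are placeholders):
+
+```Bash
+# Cap total daily execution at 50,000 GB-seconds (Consumption plan only)
+az functionapp update \
+  --name <app-name> \
+  --resource-group <resource-group> \
+  --set dailyMemoryTimeQuota=50000
+```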
### Data validation
Don't assume that the data coming into your function has already been validated
### Handle errors
-While it seems basic, it's important to write good error handling in your functions. Unhandled errors bubble-up to the host and are handled by the runtime. Different bindings handle processing of errors differently. To learn more, see [Azure Functions error handling](functions-bindings-error-pages.md).
+While it seems basic, it's important to write good error handling in your functions. Unhandled errors bubble up to the host and are handled by the runtime. Different bindings handle the processing of errors differently. To learn more, see [Azure Functions error handling](functions-bindings-error-pages.md).
### Disable remote debugging
You should also consult the guidance for any resource types your application log
## Secure deployment
-Azure Functions tooling an integration make it easy to publish local function project code to Azure. It's important to understand how deployment works when considering security for an Azure Functions topology.
+Azure Functions tooling integration makes it easy to publish local function project code to Azure. It's important to understand how deployment works when considering security for an Azure Functions topology.
### Deployment credentials
FTP isn't recommended for deploying your function code. FTP deployments are manu
When you're not planning on using FTP, you should disable it in the portal. If you do choose to use FTP, you should [enforce FTPS](../app-service/deploy-ftp.md#enforce-ftps).
-### Secure the scm endpoint
+### Secure the `scm` endpoint
-Every function app has a corresponding `scm` service endpoint that used by the Advanced Tools (Kudu) service for deployments and other App Service [site extensions](https://github.com/projectkudu/kudu/wiki/Azure-Site-Extensions). The scm endpoint for a function app is always a URL in the form `https://<FUNCTION_APP_NAME.scm.azurewebsites.net>`. When you use network isolation to secure your functions, you must also account for this endpoint.
+Every function app has a corresponding `scm` service endpoint that is used by the Advanced Tools (Kudu) service for deployments and other App Service [site extensions](https://github.com/projectkudu/kudu/wiki/Azure-Site-Extensions). The `scm` endpoint for a function app is always a URL in the form `https://<FUNCTION_APP_NAME>.scm.azurewebsites.net`. When you use network isolation to secure your functions, you must also account for this endpoint.
-By having a separate scm endpoint, you can control deployments and other advanced tools functionalities for function app that are isolated or running in a virtual network. The scm endpoint supports both basic authentication (using deployment credentials) and single sign-on with your Azure portal credentials. To learn more, see [Accessing the Kudu service](https://github.com/projectkudu/kudu/wiki/Accessing-the-kudu-service).
+By having a separate `scm` endpoint, you can control deployments and other Advanced Tools functionalities for function apps that are isolated or running in a virtual network. The `scm` endpoint supports both basic authentication (using deployment credentials) and single sign-on with your Azure portal credentials. To learn more, see [Accessing the Kudu service](https://github.com/projectkudu/kudu/wiki/Accessing-the-kudu-service).
### Continuous security validation
Restricting network access to your function app lets you control who can access
### Set access restrictions
-Access restrictions allow you to define lists of allow/deny rules to control traffic to your app. Rules are evaluated in priority order. If there are no rules defined, then your app will accept traffic from any address. To learn more, see [Azure App Service Access Restrictions](../app-service/app-service-ip-restrictions.md?toc=/azure/azure-functions/toc.json).
+Access restrictions allow you to define lists of allow/deny rules to control traffic to your app. Rules are evaluated in priority order. If no rules are defined, your app will accept traffic from any address. To learn more, see [Azure App Service Access Restrictions](../app-service/app-service-ip-restrictions.md?toc=/azure/azure-functions/toc.json).
### Secure the storage account
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compare-azure-government-global-azure.md
The following features of Azure OpenAI are available in Azure Government:
|Feature|Azure OpenAI| |--|--|
-|Models available|US Gov Arizona:<br>&nbsp;&nbsp;&nbsp;GPT-4o (2024-05-13)&nbsp;&nbsp;&nbsp;GPT-4 (1106-Preview)<br>&nbsp;&nbsp;&nbsp;GPT-3.5-Turbo (1106)<br>&nbsp;&nbsp;&nbsp;GPT-3.5-Turbo (0125)<br>&nbsp;&nbsp;&nbsp;text-embedding-ada-002 (version 2)<br><br>US Gov Virginia:<br>&nbsp;&nbsp;&nbsp;GPT-4 (1106-Preview)<br>&nbsp;&nbsp;&nbsp;GPT-3.5-Turbo (0125)<br>&nbsp;&nbsp;&nbsp;text-embedding-ada-002 (version 2)<br><br>Learn more about the different capabilities of each model in [Azure OpenAI Service models](../ai-services/openai/concepts/models.md)|
+|Models available|US Gov Arizona:<br>&nbsp;&nbsp;&nbsp;GPT-4o (2024-05-13)&nbsp;&nbsp;&nbsp;GPT-4 (1106-Preview)<br>&nbsp;&nbsp;&nbsp;GPT-3.5-Turbo (0125)&nbsp;&nbsp;&nbsp;GPT-3.5-Turbo (1106)<br>&nbsp;&nbsp;&nbsp;text-embedding-ada-002 (version 2)<br><br>US Gov Virginia:<br>&nbsp;&nbsp;&nbsp;GPT-4o (2024-05-13)&nbsp;&nbsp;&nbsp;GPT-4 (1106-Preview)<br>&nbsp;&nbsp;&nbsp;GPT-3.5-Turbo (0125)<br>&nbsp;&nbsp;&nbsp;text-embedding-ada-002 (version 2)<br><br>Learn more about the different capabilities of each model in [Azure OpenAI Service models](../ai-services/openai/concepts/models.md)|
|Virtual network support & private link support| Yes. | | Connect your data | Available in US Gov Virginia and Arizona. Virtual network and private links are supported. Deployment to a web app or a copilot in Copilot Studio is not supported. | |Managed Identity|Yes, via Microsoft Entra ID|
The following features of Azure OpenAI are available in Azure Government:
|Data Storage|In AOAI, customer data is only stored at rest as part of our Finetuning solution. Since Finetuning is not enabled within Azure Gov, there is no customer data stored at rest in Azure Gov associated with AOAI. However, Customer Managed Keys (CMK) can still be enabled in Azure Gov to support use of the same policies in Azure Gov as in Public cloud. Note also that if Finetuning is enabled in Azure Gov in the future, any existing CMK deployment would be applied to that data at that time.| **Next steps**
-* Get started by requesting access to Azure OpenAI Service in Azure Government at [https://aka.ms/AOAIgovaccess](https://aka.ms/AOAIgovaccess)
-* Request quota increases for the pay-as-you-go consumption model, please fill out a separate form at [https://aka.ms/AOAIGovQuota](https://aka.ms/AOAIGovQuota)
+* To request quota increases for the pay-as-you-go consumption model, apply at [https://aka.ms/AOAIGovQuota](https://aka.ms/AOAIGovQuota)
* If modified content filters are required, apply at [https://aka.ms/AOAIGovModifyContentFilter](https://aka.ms/AOAIGovModifyContentFilter)
azure-maps About Azure Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/about-azure-maps.md
Title: Overview for Microsoft Azure Maps description: Learn about services and capabilities in Microsoft Azure Maps and how to use them in your applications.--++ Last updated 10/21/2022
azure-maps Android Map Add Line Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/android-map-add-line-layer.md
Last updated 2/26/2021 -+ zone_pivot_groups: azure-maps-android
azure-maps Android Map Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/android-map-events.md
Last updated 2/26/2021 -+ zone_pivot_groups: azure-maps-android
azure-maps Azure Maps Event Grid Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/azure-maps-event-grid-integration.md
Title: React to Azure Maps events by using Event Grid description: Find out how to react to Azure Maps events involving geofences. See how to listen to map events and how to use Event Grid to reroute events to event handlers.--++ Last updated 01/08/2024
azure-maps Clustering Point Data Android Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/clustering-point-data-android-sdk.md
Last updated 03/23/2021 -+ zone_pivot_groups: azure-maps-android
azure-maps Create Data Source Android Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/create-data-source-android-sdk.md
Last updated 2/26/2021 -+ zone_pivot_groups: azure-maps-android
azure-maps Data Driven Style Expressions Android Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/data-driven-style-expressions-android-sdk.md
Last updated 2/26/2021 -+ zone_pivot_groups: azure-maps-android
azure-maps Display Feature Information Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/display-feature-information-android.md
Last updated 2/26/2021 -+ zone_pivot_groups: azure-maps-android
azure-maps Geocoding Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/geocoding-coverage.md
Title: Geocoding coverage in Microsoft Azure Maps Search service description: See which regions Azure Maps Search covers. Geocoding categories include address points, house numbers, street level, city level, and points of interest.--++ Last updated 11/30/2021
azure-maps Geofence Geojson https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/geofence-geojson.md
Title: GeoJSON data format for geofence | Microsoft Azure Maps description: Learn about Azure Maps geofence data. See how to use the GET Geofence and POST Geofence APIs when retrieving the position of coordinates relative to a geofence.--++ Last updated 02/14/2019
azure-maps Geographic Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/geographic-coverage.md
Title: Geographic coverage information in Microsoft Azure Maps description: Details of where geographic data is available within Microsoft Azure Maps.--++ Last updated 6/23/2021
azure-maps Geographic Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/geographic-scope.md
Title: Azure Maps service geographic scope description: Learn about Azure Maps service's geographic mappings--++ Last updated 04/18/2022
azure-maps Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/glossary.md
Title: Azure Maps Glossary | Microsoft Docs description: A glossary of commonly used terms associated with Azure Maps, Location-Based Services, and GIS. --++ Last updated 09/18/2018
azure-maps How To Add Shapes To Android Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-add-shapes-to-android-map.md
Last updated 2/26/2021 -+ zone_pivot_groups: azure-maps-android
azure-maps How To Add Symbol To Android Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-add-symbol-to-android-map.md
Last updated 2/26/2021 -+ zone_pivot_groups: azure-maps-android
azure-maps How To Add Tile Layer Android Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-add-tile-layer-android-map.md
Last updated 3/25/2021 -+ zone_pivot_groups: azure-maps-android
azure-maps How To Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-create-template.md
Title: Create your Azure Maps account using an Azure Resource Manager template in Azure Maps description: Learn how to create an Azure Maps account using an Azure Resource Manager template.--++ Last updated 04/27/2021
azure-maps How To Dev Guide Java Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-java-sdk.md
Last updated 01/25/2023 -+
azure-maps How To Manage Account Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-manage-account-keys.md
Title: Manage your Azure Maps account in the Azure portal | Microsoft Azure Maps description: Learn how to use the Azure portal to manage an Azure Maps account. See how to create a new account and how to delete an existing account.--++ Last updated 04/26/2021
azure-maps How To Manage Pricing Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-manage-pricing-tier.md
Title: Manage your Azure Maps account's pricing tier description: You can use the Azure portal to manage your Microsoft Azure Maps account and its pricing tier.--++ Last updated 09/14/2023
azure-maps How To Secure Sas App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-sas-app.md
-+ # Secure an Azure Maps account with a SAS token
azure-maps How To Show Traffic Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-show-traffic-android.md
Last updated 2/26/2021 -+ zone_pivot_groups: azure-maps-android
azure-maps How To Use Android Map Control Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-android-map-control-library.md
Last updated 2/26/2021 -+ zone_pivot_groups: azure-maps-android
azure-maps How To Use Feedback Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-feedback-tool.md
Title: Provide data feedback to Azure Maps description: Provide data feedback using Microsoft Azure Maps feedback tool.--++ Last updated 03/15/2024
azure-maps Map Add Bubble Layer Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-bubble-layer-android.md
Last updated 2/26/2021 -+ zone_pivot_groups: azure-maps-android
azure-maps Map Add Controls Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-controls-android.md
Last updated 02/26/2021 -+ zone_pivot_groups: azure-maps-android
azure-maps Map Add Heat Map Layer Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-heat-map-layer-android.md
Last updated 02/26/2021 -+ zone_pivot_groups: azure-maps-android
azure-maps Map Add Image Layer Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-image-layer-android.md
Last updated 02/26/2021 -+ zone_pivot_groups: azure-maps-android
azure-maps Map Extruded Polygon Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-extruded-polygon-android.md
Last updated 02/26/2021 -+ zone_pivot_groups: azure-maps-android
azure-maps Migrate Bing Maps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-bing-maps-overview.md
Title: Migrate from Bing Maps to Azure Maps overview description: Overview for the migration guides that show how to migrate code from Bing Maps to Azure Maps.--++ Last updated 05/16/2024
azure-maps Migrate From Google Maps Android App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-android-app.md
Last updated 12/1/2021 -+ zone_pivot_groups: azure-maps-android
azure-maps Migrate From Google Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps.md
Title: 'Tutorial - Migrate from Google Maps to Azure Maps | Microsoft Azure Maps' description: Tutorial on how to migrate from Google Maps to Microsoft Azure Maps. Guidance walks you through how to switch to Azure Maps APIs and SDKs.--++ Last updated 09/23/2020
azure-maps Open Source Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/open-source-projects.md
Title: Azure Maps community Open-source projects | Microsoft Azure Maps description: Open-source projects coordinated for the Microsoft Azure Maps platform.--++ Last updated 12/07/2020
azure-maps Quick Android Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/quick-android-map.md
Last updated 09/22/2022
-+ zone_pivot_groups: azure-maps-android
azure-maps Rest Sdk Developer Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/rest-sdk-developer-guide.md
Last updated 10/31/2021 -+
azure-maps Set Android Map Styles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/set-android-map-styles.md
Last updated 02/26/2021 -+ zone_pivot_groups: azure-maps-android
azure-maps Supported Browsers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/supported-browsers.md
Title: Web SDK supported browsers description: Find out how to check whether the Azure Maps Web SDK supports a browser. View a list of supported browsers. Learn how to use map services with legacy browsers.--++ Last updated 06/22/2023
azure-maps Supported Languages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/supported-languages.md
Title: Localization support in Microsoft Azure Maps description: Lists the regions Azure Maps supports with services such as maps, search, routing, weather, and traffic incidents, and shows how to set up the View parameter.--++ Last updated 01/05/2022
azure-maps Supported Map Styles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/supported-map-styles.md
Title: Supported built-in Azure Maps map styles description: Learn about the built-in map styles that Azure Maps supports, such as road, blank_accessible, satellite, satellite_road_labels, road_shaded_relief, and night.--++ Last updated 11/01/2023
azure-maps Tutorial Load Geojson File Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-load-geojson-file-android.md
Last updated 12/10/2020 -+ zone_pivot_groups: azure-maps-android
azure-maps Understanding Azure Maps Transactions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/understanding-azure-maps-transactions.md
Title: Understanding Microsoft Azure Maps Transactions description: Learn about Microsoft Azure Maps Transactions--++ Last updated 04/05/2024
azure-monitor Azure Monitor Agent Data Field Differences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-data-field-differences.md
This table collects log data from the Internet Information Service on Window sys
### Windows event table This table collects events from the Windows event log. There are two other tables that are used to store Windows events, the SecurityEvent and Event tables.+ |LAW Field | Difference | Reason| Additional Information | |---|---|---|---| | UserName | MMA enriches the event with the username prior to sending the event for ingestion. AMA does not do the same enrichment. | The AMA enrichment is not yet implemented. | AMA principles dictate that the event data should remain unchanged by default. Adding an enriched field adds possible processing errors and additional cost for storage. In this case, customer demand for the field is very high and work is underway to add the username. |
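If you want to see how the field is populated in your own workspace, you can query the table directly. A minimal sketch using the Log Analytics CLI extension; the workspace GUID is a placeholder:

```azurecli
# Inspect recent Windows events and whether UserName is populated
# (replace the workspace GUID with your own).
az monitor log-analytics query \
  --workspace "00000000-0000-0000-0000-000000000000" \
  --analytics-query "Event | where TimeGenerated > ago(1h) | project TimeGenerated, Computer, EventID, UserName | take 20" \
  --output table
```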
azure-monitor Azure Monitor Agent Extension Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md
We strongly recommend always updating to the latest version, or opting in to the
## Version details | Release Date | Release notes | Windows | Linux | |:---|:---|:---|:---|
-| July 2024 | **Windows**<ul><li>Security hardening of Agent data folder.</li><li>Fixed credential leaks in agent logs.</li><li>Various bug fix for AzureWatson.</li><li>Added columns to the Windows Event table: Keywords, UserName, Opcode, Correlation, ProcessId, ThreadId, EventRecordId.</li><li>AMA: Support for private preview of Agent side transformation.</li><li>AMA: Support AMA Client Installer for selected partners.</li></ul>**Linux Features**<ul><li>Enable Dynamic Linking of OpenSSL 1.1 in all regions</li><li>Add Computer field to Custom Logs</li><li>Add EventHub upload support for Custom Logs </li><li>Reliability improvement for upload task scheduling</li><li>Added support for SUSE15 SP5, Ubuntu 24, and AWS 3 distributions</li></ul>**Linux Fixes**<ul><li>Fix Direct upload to storage for perf counters when no other destination is configured. You don't see perf counters If storage was the only configured destination for perf counters, they wouldn't see perf counters in their blob or table.</li><li>Fix proxy for system-wide proxy using http(s)_proxy env var </li><li>Support for syslog hostnames that are up to 255characters</li><li>Stop sending rows longer than 1MB. This exceeds ingestion limits and destabilizes the agent. Now the row is gracefully dropped and a diagnostic message is written.</li><li>Set max disk space used for rsyslog spooling to 1GB. There was no limit before which could lead to high memory usage.</li><li>Use random available TCP port when there is a port conflict with AMA port 28230 and 28330 . This resolved issues where port 28230 and 28330 were already in uses by the customer which prevented data upload to Azure.</li></ul>| 1.29 | 1.32.2 |
+| July 2024 | **Windows**<ul><li>Security hardening of Agent data folder.</li><li>Fixed credential leaks in agent logs.</li><li>Various bug fixes for AzureWatson.</li><li>Added columns to the Windows Event table: Keywords, UserName, Opcode, Correlation, ProcessId, ThreadId, EventRecordId.</li><li>AMA: Support for private preview of Agent side transformation.</li><li>AMA: Support AMA Client Installer for selected partners.</li></ul>**Linux Features**<ul><li>Enable Dynamic Linking of OpenSSL 1.1 in all regions</li><li>Add Computer field to Custom Logs</li><li>Add EventHub upload support for Custom Logs</li><li>Reliability improvement for upload task scheduling</li><li>Added support for SUSE15 SP5, Ubuntu 24, and AWS 3 distributions</li></ul>**Linux Fixes**<ul><li>Fix direct upload to storage for perf counters when no other destination is configured. Previously, if storage was the only configured destination, perf counters didn't appear in the blob or table.</li><li>Fluent-Bit updated to version 3.0.7. This fixes the issue with Fluent-Bit creating junk files in the root directory on process shutdown.</li><li>Fix proxy for system-wide proxy using http(s)_proxy env var</li><li>Support for syslog hostnames that are up to 255 characters</li><li>Stop sending rows longer than 1 MB, which exceed ingestion limits and destabilize the agent. Such rows are now gracefully dropped and a diagnostic message is written.</li><li>Set max disk space used for rsyslog spooling to 1 GB. There was no limit before, which could lead to high memory usage.</li><li>Use a random available TCP port when there is a port conflict with AMA ports 28230 and 28330. This resolves issues where ports 28230 and 28330 were already in use by the customer, which prevented data upload to Azure.</li></ul>| 1.29 | 1.32.2 |
| June 2024 |**Windows**<ul><li>Fix encoding issues with Resource ID field.</li><li>AMA: Support new ingestion endpoint for GovSG environment.</li><li>Upgrade AzureSecurityPack version to 4.33.0.1.</li><li>Upgrade Metrics Extension version to 2.2024.517.533.</li><li>Upgrade Health Extension version to 2024.528.1.</li></ul>**Linux**<ul><li>Coming Soon</li></ul>| 1.28.2 | | | May 2024 |**Windows**<ul><li>Upgraded Fluent-bit version to 3.0.5. This fix resolves a security issue in fluent-bit (NVD - CVE-2024-4323 (nist.gov)).</li><li>Disabled Fluent-bit logging that caused disk exhaustion issues for some customers. An example error is a Fluent-bit log with "[C:\projects\fluent-bit-2e87g\src\flb_scheduler.c:72 errno=0] No error" that fills up the entire disk of the server.</li><li>Fixed the AMA extension getting stuck in a deletion state on some VMs that are using Arc. This fix improves reliability.</li><li>Fixed AMA not using the system proxy; this issue is a bug introduced in 1.26.0. The issue was caused by a new feature that uses the Arc agent's proxy settings. When the system proxy was set to None, the proxy was broken in 1.26.</li><li>Fixed Windows Firewall Logs log file rollover issues</li></ul>| 1.27.0 | | | April 2024 |**Windows**<ul><li>In preparation for the May 17 public preview of Firewall Logs, the agent completed the addition of a profile filter for Domain, Public, and Private Logs.</li><li>AMA running on an Arc-enabled server defaults to using the Arc proxy settings if available.</li><li>The AMA VM extension proxy settings override the Arc defaults.</li><li>Bug fix in MSI installer: Symptom - if there are spaces in the fluent-bit config path, AMA wasn't recognizing the path properly. AMA now adds quotes to the configuration path in fluent-bit.</li><li>Bug fix for Container Insights: Symptom - custom resource IDs weren't being honored.</li><li>Security issue fix: skip the deletion of files and directories whose path contains a redirection (via junction points, hard links, mount points, OB symlinks, etc.).</li><li>Updating MetricExtension package to 2.2024.328.1744.</li></ul>**Linux**<ul><li>AMA 1.30 now available in Arc.</li><li>New distribution support: Debian 12, RHEL CIS L2.</li><li>Fix for mdsd version 1.30.3 in persistence mode, which converted float/double values that are positive integers ("3.0", "4.0") to type ulong, which broke Azure Stream Analytics.</li></ul>| 1.26.0 | 1.31.1 |
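To confirm which agent version is installed on a given VM, one option is to list its extensions. A hedged sketch; resource names are placeholders, and `typeHandlerVersion` may show only the pinned major.minor version:

```azurecli
# List Azure Monitor agent extensions on a VM along with their versions.
az vm extension list \
  --resource-group myResourceGroup \
  --vm-name myVM \
  --query "[?contains(name, 'AzureMonitor')].{Name:name, Version:typeHandlerVersion}" \
  --output table
```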
azure-monitor Azure Monitor Agent Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md
A SCOM Admin Management Pack exists and can help you remove the workspace config
- Sentinel: Windows Firewall logs aren't generally available (GA) yet. - SQL Assessment Solution: This is now part of SQL best practice assessment. The deployment policies require one Log Analytics Workspace per subscription, which isn't the best practice recommended by the AMA team. - Microsoft Defender for Cloud: Some features for the new agentless solution are in development. Your migration may be impacted if you use File Integrity Monitoring (FIM), Endpoint protection discovery recommendations, OS Misconfigurations (Azure Security Benchmark (ASB) recommendations), and Adaptive Application Controls.-- Container Insights: The Windows version is in public preview. ## Next steps
azure-monitor Logs Dedicated Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-dedicated-clusters.md
After you create your cluster resource, you can edit properties such as *sku*, *
Deleted clusters take two weeks to be completely removed. You can have up to seven clusters per subscription and region, five active, and two deleted in the past two weeks. > [!NOTE]
-> Cluster creation triggers resource allocation and provisioning. This operation can take a few hours to complete.
+> Creating a cluster involves multiple resources, and the operation typically completes in two hours.
> A dedicated cluster is billed once provisioned, regardless of data ingestion, so it's recommended to prepare the deployment to expedite provisioning and the linking of workspaces to the cluster. Verify the following: > - A list of initial workspaces to be linked to the cluster is identified > - You have permissions to the subscription intended for the cluster and to any workspace to be linked
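As a reference point, creating a cluster with the Azure CLI looks roughly like the following sketch; the names, location, and capacity are placeholders:

```azurecli
# Create a dedicated Log Analytics cluster. With --no-wait, provisioning
# continues in the background after the command returns.
az monitor log-analytics cluster create \
  --resource-group myResourceGroup \
  --name myCluster \
  --location eastus \
  --sku-capacity 500 \
  --no-wait
```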
N/A
> [!NOTE] > - Linking a workspace can be performed only after the completion of the Log Analytics cluster provisioning.
-> - Linking a workspace to a cluster involves syncing multiple backend components and cache hydration, which can take up to two hours.
-> - When linking a Log Analytics workspace workspace, the workspace billing plan in changed to *LACluster*, and you should remove sku in workspace template to prevent conflict during workspace deployment.
+> - Linking a workspace to a cluster involves syncing multiple backend components and cache hydration, which typically completes in two hours.
+> - When linking a Log Analytics workspace, the workspace billing plan is changed to *LACluster*, and you should remove the SKU from the workspace template to prevent conflicts during workspace deployment.
> - Other than the billing aspects, which are governed by the cluster plan, all workspace configurations and query aspects remain unchanged during and after the link. You need 'write' permissions to both the workspace and the cluster resource for the workspace link operation.
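Once provisioning completes, the link operation with the Azure CLI looks roughly like this sketch (names and IDs are placeholders; the linked-service name must be `cluster`):

```azurecli
# Link a workspace to a dedicated cluster.
az monitor log-analytics workspace linked-service create \
  --resource-group myResourceGroup \
  --workspace-name myWorkspace \
  --name cluster \
  --write-access-resource-id "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.OperationalInsights/clusters/myCluster"
```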
azure-netapp-files Large Volumes Requirements Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/large-volumes-requirements-considerations.md
The following requirements and considerations apply to large volumes. For perfor
* A regular volume can't be converted to a large volume. * You must create a large volume at a size of 50 TiB or larger. A single volume can't exceed 1 PiB. * You can't resize a large volume to less than 50 TiB.
- A large volume cannot be resized to less than 30% of its lowest provisioned size. This limit is adjustable via [a support request](azure-netapp-files-resource-limits.md#resource-limits).
+ A large volume cannot be resized to more than 30% of its lowest provisioned size. This limit is adjustable via [a support request](azure-netapp-files-resource-limits.md#resource-limits).
* Large volumes are currently not supported with Azure NetApp Files backup. * You can't create a large volume with application volume groups. * Currently, large volumes aren't suited for database (HANA, Oracle, SQL Server, etc.) data and log volumes. For database workloads requiring more than a single volume's throughput limit, consider deploying multiple regular volumes. To optimize multiple volume deployments for databases, use [application volume groups](application-volume-group-concept.md).
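For reference, creating a 50-TiB large volume with the Azure CLI might look like the following sketch. All resource names are placeholders, `--usage-threshold` is in GiB, and the `--is-large-volume` flag is an assumption based on recent CLI versions:

```azurecli
# Create a 50 TiB (51200 GiB) large volume.
# The --is-large-volume flag is assumed to be available in recent CLI versions.
az netappfiles volume create \
  --resource-group myResourceGroup \
  --account-name myAccount \
  --pool-name myPool \
  --name myLargeVolume \
  --location eastus \
  --service-level Premium \
  --usage-threshold 51200 \
  --file-path myLargeVolume \
  --vnet myVnet \
  --subnet mySubnet \
  --protocol-types NFSv4.1 \
  --is-large-volume true
```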
azure-resource-manager Azure Subscription Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-subscription-service-limits.md
Title: Azure subscription limits and quotas description: Provides a list of common Azure subscription and service limits, quotas, and constraints. This article includes information on how to increase limits along with maximum values. Previously updated : 06/13/2024 Last updated : 07/19/2024 # Azure subscription and service limits, quotas, and constraints
This section provides information about limits that apply to Azure API Managemen
* [API Management classic tiers](#limitsapi-management-classic-tiers) * [API Management v2 tiers](#limitsapi-management-v2-tiers)
+* [API Management workspaces](#limitsapi-management-workspaces)
* [Developer portal in API Management v2 tiers](#limitsdeveloper-portal-in-api-management-v2-tiers) ### Limits - API Management classic tiers
This section provides information about limits that apply to Azure API Managemen
[!INCLUDE [api-management-service-limits-v2](../../../includes/api-management-service-limits-v2.md)]
+### Limits - API Management workspaces
+++ ### Limits - Developer portal in API Management v2 tiers [!INCLUDE [api-management-developer-portal-limits-v2](../../../includes/api-management-developer-portal-limits-v2.md)]
communication-services Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/capabilities.md
This article describes which capabilities Azure Communication Services SDKs supp
| | [Customer managed keys](/microsoft-365/compliance/customer-key-overview) | ✔️ | | Mid-call control | Turn your video on/off | ✔️ | | | Mute/unmute mic | ✔️ |
+| | Mute remote participants | ✔️ |
| | Switch between cameras | ✔️ | | | Local hold/unhold | ✔️ | | | Indicator of dominant speakers in the call | ✔️ |
communication-services Call Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/call-transcription.md
Last updated 08/10/2021
-zone_pivot_groups: acs-plat-ios-android-windows
-
-#Customer intent: As a developer, I want to display the call transcription state on the client.
+zone_pivot_groups: acs-plat-web-ios-android-windows
# Display call transcription state on the client
-> [!NOTE]
-> Call transcription state is only available from Teams meetings. Currently there's no support for call transcription state for Azure Communication Services to Azure Communication Services calls.
-
-When using call transcription you may want to let your users know that a call is being transcribe. Here's how.
+You need to collect consent from all participants in the call before you can transcribe them. Microsoft Teams allows users to start transcription in meetings or calls. You receive an event when transcription starts, and you can check the transcription state if transcription started before you joined the call or meeting.
## Prerequisites
When using call transcription you may want to let your users know that a call is
- A user access token to enable the calling client. For more information, see [Create and manage access tokens](../../quickstarts/identity/access-tokens.md). - Optional: Complete the quickstart to [add voice calling to your application](../../quickstarts/voice-video-calling/getting-started-with-calling.md)
+## Support
+The following tables define support for call transcription in Azure Communication Services.
+
+## Identities and call types
+The following table shows support for transcription by call type and identity.
+
+|Identities | Teams meeting | Room | 1:1 call | Group call | 1:1 Teams interop call | Group Teams interop call |
+|---|---|---|---|---|---|---|
+|Communication Services user | ✔️ | | | | ✔️ | ✔️ |
+|Microsoft 365 user | ✔️ | | | | ✔️ | ✔️ |
+
+## Operations
+The following table shows support for individual APIs in the calling SDK by identity type.
+
+|Operations | Communication Services user | Microsoft 365 user |
+|---|---|---|
+|Get event that transcription has started | ✔️ | ✔️ |
+|Get transcription state | ✔️ | ✔️ |
+|Start or stop transcription | | |
+
+## SDKs
+The following table shows support for transcription in individual Azure Communication Services SDKs.
+
+| Platforms | Web | Web UI | iOS | iOS UI | Android | Android UI | Windows |
+|---|---|---|---|---|---|---|---|
+|Is Supported | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
++ ::: zone pivot="platform-android" [!INCLUDE [Call transcription client-side Android](./includes/call-transcription/call-transcription-android.md)] ::: zone-end
container-apps Aspire Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/aspire-dashboard.md
Previously updated : 05/09/2024 Last updated : 07/18/2024 zone_pivot_groups: azure-azd-cli-portal
You can enable the .NET Aspire Dashboard on any existing container app using the
```azurecli dotnet new aspire-starter azd init --location westus2
-azd config set alpha.aspire.dashboard on
+azd config set aspire.dashboard on
azd up ```
container-apps Dapr Authentication Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-authentication-token.md
Previously updated : 05/16/2023 Last updated : 08/02/2024 # Enable token authentication for Dapr requests
container-apps Dapr Component Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-component-connection.md
Previously updated : 12/20/2023 Last updated : 08/02/2024
container-apps Dapr Component Resiliency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-component-resiliency.md
Previously updated : 02/22/2024 Last updated : 08/02/2024 # Customer Intent: As a developer, I'd like to learn how to make my container apps resilient using Azure Container Apps.
resource myPolicyDoc 'Microsoft.App/managedEnvironments/daprComponents/resilienc
### Before you begin
-Log-in to the Azure CLI:
+Log in to the Azure CLI:
```azurecli az login
container-apps Dapr Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-components.md
Previously updated : 04/29/2024 Last updated : 08/02/2024 # Dapr components in Azure Container Apps
container-apps Dapr Functions Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-functions-extension.md
Previously updated : 10/30/2023 Last updated : 08/05/2024 # Customer Intent: I'm a developer who wants to use the Dapr extension for Azure Functions in my Dapr-enabled container app
The [Dapr extension for Azure Functions](../azure-functions/functions-bindings-d
## Set up the environment
-1. In the terminal, log into your Azure subscription.
+1. In the terminal, log in to your Azure subscription.
```azurecli az login
container-apps Dapr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-overview.md
Title: Dapr integration with Azure Container Apps
+ Title: Microservice APIs powered by Dapr
description: Learn more about using Dapr on your Azure Container App service to develop applications. Previously updated : 04/22/2024 Last updated : 08/05/2024
-# Dapr integration with Azure Container Apps
+# Microservice APIs powered by Dapr
-[Distributed Application Runtime (Dapr)][dapr-concepts] provides APIs that run as a sidecar process that helps you write and implement simple, portable, resilient, and secured microservices. Dapr works together with Azure Container Apps as an abstraction layer to provide a low-maintenance, serverless, and scalable platform. [Enabling Dapr on your container app][dapr-enable] creates a secondary process alongside your application code that simplifies application intercommunication with Dapr via HTTP or gRPC.
+Azure Container Apps provides APIs powered by [Distributed Application Runtime (Dapr)][dapr-concepts] that help you write and implement simple, portable, resilient, and secured microservices. Dapr works together with Azure Container Apps as an abstraction layer to provide a low-maintenance and scalable platform. Azure Container Apps offers a selection of fully managed Dapr APIs, components, and features, catered specifically to microservice scenarios. Simply [enable and configure Dapr][dapr-enable] as usual in your container app environment.
-## Dapr in Azure Container Apps
+## How the microservices APIs work with your container app
-Configure Dapr for your container apps environment with a [Dapr-enabled container app][dapr-enable], a [Dapr component configured for your solution][dapr-components], and a Dapr sidecar invoking communication between them. The following diagram demonstrates these core concepts related to Dapr in Azure Container Apps.
+Configure microservices APIs for your container apps environment with a [Dapr-enabled container app][dapr-enable], a [Dapr component configured for your solution][dapr-components], and a Dapr sidecar invoking communication between them. The following diagram demonstrates these core concepts, using the pub/sub API as an example.
:::image type="content" source="media/dapr-overview/dapr-in-aca.png" alt-text="Diagram demonstrating Dapr pub/sub and how it works in Container Apps.":::
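As a minimal sketch of that configuration step, the Dapr sidecar is enabled per container app; the app name, environment, image, and port below are placeholders:

```azurecli
# Create a container app with the Dapr sidecar enabled.
az containerapp create \
  --name order-processor \
  --resource-group myResourceGroup \
  --environment myEnvironment \
  --image myregistry.azurecr.io/order-processor:latest \
  --enable-dapr \
  --dapr-app-id order-processor \
  --dapr-app-port 5001
```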
Azure Container Apps ensures compatibility with Dapr open source tooling, such a
- **Actor reminders**: Require a minReplicas of 1+ to ensure reminders are always active and fire correctly. - **Jobs**: Dapr isn't supported for jobs.
-## Next Steps
+## Next steps
- [Enable Dapr in your container app.][dapr-enable] - [Learn how Dapr components work in Azure Container Apps.][dapr-components]
container-apps Deploy Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/deploy-visual-studio-code.md
description: Deploy containerized .NET applications to Azure Container Apps usin
-+ Last updated 10/29/2023
container-apps Deploy Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/deploy-visual-studio.md
description: Deploy your containerized .NET applications to Azure Container Apps
-+ Last updated 3/04/2022
container-apps Microservices Dapr Bindings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr-bindings.md
Title: "Tutorial: Event-driven work using Dapr Bindings"
-description: Deploy a sample Dapr Bindings application to Azure Container Apps.
+description: Deploy a sample application to Azure Container Apps that leverages the Dapr Bindings API.
Previously updated : 12/20/2023 Last updated : 08/02/2024 zone_pivot_group_filename: container-apps/dapr-zone-pivot-groups.json zone_pivot_groups: dapr-languages-set
zone_pivot_groups: dapr-languages-set
In this tutorial, you create a microservice to demonstrate [Dapr's Bindings API](https://docs.dapr.io/developing-applications/building-blocks/bindings/bindings-overview/) to work with external systems as inputs and outputs. You'll: > [!div class="checklist"]
-> * Run the application locally.
+> * Run the application locally with the Dapr CLI.
> * Deploy the application to Azure Container Apps via the Azure Developer CLI with the provided Bicep. The service listens to input binding events from a system CRON and then outputs the contents of local data to a PostgreSQL output binding. ## Prerequisites
Before deploying the application to Azure Container Apps, start by running the P
### Prepare the project
-1. Clone the [sample Dapr application](https://github.com/Azure-Samples/bindings-dapr-nodejs-cron-postgres) to your local machine.
+1. Clone the [sample application](https://github.com/Azure-Samples/bindings-dapr-nodejs-cron-postgres) to your local machine.
```bash git clone https://github.com/Azure-Samples/bindings-dapr-nodejs-cron-postgres.git
Before deploying the application to Azure Container Apps, start by running the P
cd bindings-dapr-nodejs-cron-postgres ```
-### Run the Dapr application using the Dapr CLI
+### Run the application using the Dapr CLI
1. From the sample's root directory, change directories to `db`.
Before deploying the application to Azure Container Apps, start by running the P
npm install ```
-1. Run the JavaScript service application with Dapr.
+1. Run the JavaScript service application.
```bash dapr run --app-id batch-sdk --app-port 5002 --dapr-http-port 3500 --resources-path ../components -- node index.js ```
- The `dapr run` command runs the Dapr binding application locally. Once the application is running successfully, the terminal window shows the output binding data.
+ The `dapr run` command runs the binding application locally. Once the application is running successfully, the terminal window shows the output binding data.
#### Expected output
Before deploying the application to Azure Container Apps, start by running the P
docker compose stop ```
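For reference, while the services run locally you can also invoke the PostgreSQL output binding directly through the Dapr sidecar's HTTP API. A hedged sketch; the component name `sqldb` and the SQL statement are assumptions, and port 3500 matches the `dapr run` command above:

```bash
# Invoke the output binding via the Dapr sidecar's bindings API.
# The component name "sqldb" is an assumption; match it to ../components.
curl -X POST http://localhost:3500/v1.0/bindings/sqldb \
  -H "Content-Type: application/json" \
  -d '{"operation": "exec", "metadata": {"sql": "INSERT INTO orders (orderid) VALUES (42);"}}'
```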
-## Deploy the Dapr application template using Azure Developer CLI
+## Deploy the application template using Azure Developer CLI
-Now that you've run the application locally, let's deploy the Dapr bindings application to Azure Container Apps using [`azd`](/azure/developer/azure-developer-cli/overview). During deployment, we will swap the local containerized PostgreSQL for an Azure PostgreSQL component.
+Now that you've run the application locally, let's deploy the bindings application to Azure Container Apps using [`azd`](/azure/developer/azure-developer-cli/overview). During deployment, we will swap the local containerized PostgreSQL for an Azure PostgreSQL component.
### Prepare the project
cd bindings-dapr-nodejs-cron-postgres
| Azure Location | The Azure location for your resources. [Make sure you select a location available for Azure PostgreSQL](../postgresql/flexible-server/overview.md#azure-regions). | | Azure Subscription | The Azure subscription for your resources. |
-1. Run `azd up` to provision the infrastructure and deploy the Dapr application to Azure Container Apps in a single command.
+1. Run `azd up` to provision the infrastructure and deploy the application to Azure Container Apps in a single command.
```azdeveloper azd up
Upon successful completion of the `azd up` command:
### Prepare the project
-1. Clone the [sample Dapr application](https://github.com/Azure-Samples/bindings-dapr-python-cron-postgres) to your local machine.
+1. Clone the [sample application](https://github.com/Azure-Samples/bindings-dapr-python-cron-postgres) to your local machine.
```bash git clone https://github.com/Azure-Samples/bindings-dapr-python-cron-postgres.git
Upon successful completion of the `azd up` command:
cd bindings-dapr-python-cron-postgres ```
-### Run the Dapr application using the Dapr CLI
+### Run the application using the Dapr CLI
Before deploying the application to Azure Container Apps, start by running the PostgreSQL container and Python service locally with [Docker Compose](https://docs.docker.com/compose/) and Dapr.
Before deploying the application to Azure Container Apps, start by running the P
pip install -r requirements.txt ```
-1. Run the Python service application with Dapr.
+1. Run the Python service application.
```bash dapr run --app-id batch-sdk --app-port 5001 --dapr-http-port 3500 --resources-path ../components -- python3 app.py ```
- The `dapr run` command runs the Dapr binding application locally. Once the application is running successfully, the terminal window shows the output binding data.
+ The `dapr run` command runs the binding application locally. Once the application is running successfully, the terminal window shows the output binding data.
#### Expected output
Before deploying the application to Azure Container Apps, start by running the P
docker compose stop ```
-## Deploy the Dapr application template using Azure Developer CLI
+## Deploy the application template using Azure Developer CLI
-Now that you've run the application locally, let's deploy the Dapr bindings application to Azure Container Apps using [`azd`](/azure/developer/azure-developer-cli/overview). During deployment, we will swap the local containerized PostgreSQL for an Azure PostgreSQL component.
+Now that you've run the application locally, let's deploy the bindings application to Azure Container Apps using [`azd`](/azure/developer/azure-developer-cli/overview). During deployment, we will swap the local containerized PostgreSQL for an Azure PostgreSQL component.
### Prepare the project
cd bindings-dapr-python-cron-postgres
| Azure Location | The Azure location for your resources. [Make sure you select a location available for Azure PostgreSQL](../postgresql/flexible-server/overview.md#azure-regions). | | Azure Subscription | The Azure subscription for your resources. |
-1. Run `azd up` to provision the infrastructure and deploy the Dapr application to Azure Container Apps in a single command.
+1. Run `azd up` to provision the infrastructure and deploy the application to Azure Container Apps in a single command.
```azdeveloper azd up
Upon successful completion of the `azd up` command:
### Prepare the project
-1. Clone the [sample Dapr application](https://github.com/Azure-Samples/bindings-dapr-csharp-cron-postgres) to your local machine.
+1. Clone the [sample application](https://github.com/Azure-Samples/bindings-dapr-csharp-cron-postgres) to your local machine.
```bash git clone https://github.com/Azure-Samples/bindings-dapr-csharp-cron-postgres.git
Upon successful completion of the `azd up` command:
cd bindings-dapr-csharp-cron-postgres ```
-### Run the Dapr application using the Dapr CLI
+### Run the application using the Dapr CLI
Before deploying the application to Azure Container Apps, start by running the PostgreSQL container and .NET service locally with [Docker Compose](https://docs.docker.com/compose/) and Dapr.
Before deploying the application to Azure Container Apps, start by running the P
dotnet build ```
-1. Run the .NET service application with Dapr.
+1. Run the .NET service application.
```bash dapr run --app-id batch-sdk --app-port 7002 --resources-path ../components -- dotnet run ```
- The `dapr run` command runs the Dapr binding application locally. Once the application is running successfully, the terminal window shows the output binding data.
+ The `dapr run` command runs the binding application locally. Once the application is running successfully, the terminal window shows the output binding data.
#### Expected output
Before deploying the application to Azure Container Apps, start by running the P
docker compose stop ```
-## Deploy the Dapr application template using Azure Developer CLI
+## Deploy the application template using Azure Developer CLI
-Now that you've run the application locally, let's deploy the Dapr bindings application to Azure Container Apps using [`azd`](/azure/developer/azure-developer-cli/overview). During deployment, we will swap the local containerized PostgreSQL for an Azure PostgreSQL component.
+Now that you've run the application locally, let's deploy the bindings application to Azure Container Apps using [`azd`](/azure/developer/azure-developer-cli/overview). During deployment, we will swap the local containerized PostgreSQL for an Azure PostgreSQL component.
### Prepare the project
cd bindings-dapr-csharp-cron-postgres
| Azure Location | The Azure location for your resources. [Make sure you select a location available for Azure PostgreSQL](../postgresql/flexible-server/overview.md#azure-regions). | | Azure Subscription | The Azure subscription for your resources. |
-1. Run `azd up` to provision the infrastructure and deploy the Dapr application to Azure Container Apps in a single command.
+1. Run `azd up` to provision the infrastructure and deploy the application to Azure Container Apps in a single command.
```azdeveloper azd up
azd down
## Next steps -- Learn more about [deploying Dapr applications to Azure Container Apps](./microservices-dapr.md).
+- Learn more about [deploying microservices using Dapr to Azure Container Apps](./microservices-dapr.md).
- [Enable token authentication for Dapr requests.](./dapr-authentication-token.md) - Learn more about [Azure Developer CLI](/azure/developer/azure-developer-cli/overview) and [making your applications compatible with `azd`](/azure/developer/azure-developer-cli/make-azd-compatible).-- [Scale your Dapr applications using KEDA scalers](./dapr-keda-scaling.md)
+- [Scale your applications using KEDA scalers](./dapr-keda-scaling.md)
container-apps Microservices Dapr Pubsub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr-pubsub.md
Title: "Tutorial: Microservices communication using Dapr Publish and Subscribe"
-description: Enable two sample Dapr applications to send and receive messages and leverage Azure Container Apps.
+description: Enable two sample applications to send and receive messages and leverage the Dapr pub/sub API.
Previously updated : 12/20/2023 Last updated : 08/05/2024 zone_pivot_group_filename: container-apps/dapr-zone-pivot-groups.json zone_pivot_groups: dapr-languages-set
-# Tutorial: Microservices communication using Dapr Publish and Subscribe
+# Tutorial: Microservices communication using Dapr Publish and Subscribe
+
+In this tutorial, you create publisher and subscriber microservices that leverage [the Dapr Pub/sub API](./dapr-overview.md#supported-dapr-apis-components-and-tooling) to communicate using messages for event-driven architectures. You'll:
-In this tutorial, you'll:
> [!div class="checklist"]
-> * Create a publisher microservice and a subscriber microservice that leverage the [Dapr pub/sub API](https://docs.dapr.io/developing-applications/building-blocks/pubsub/pubsub-overview/) to communicate using messages for event-driven architectures.
+> * Create a publisher microservice and a subscriber microservice that leverage the [Dapr pub/sub API](https://docs.dapr.io/developing-applications/building-blocks/pubsub/pubsub-overview/) to communicate using messages for event-driven architectures.
> * Deploy the application to Azure Container Apps via the Azure Developer CLI with provided Bicep. The sample pub/sub project includes:
-1. A message generator (publisher) `checkout` service that generates messages of a specific topic.
-1. An (subscriber) `order-processor` service that listens for messages from the `checkout` service of a specific topic.
+1. A message generator `checkout` service (publisher) that generates messages of a specific topic.
+1. An `order-processor` service (subscriber) that listens for messages from the `checkout` service of a specific topic.
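Under the hood, each published message goes through the `checkout` sidecar's pub/sub HTTP API. A minimal sketch of that call; the component name `pubsub`, topic `orders`, and sidecar port 3500 are assumptions:

```bash
# Publish a message through the Dapr sidecar's pub/sub API.
# Component name "pubsub" and topic "orders" are assumptions.
curl -X POST http://localhost:3500/v1.0/publish/pubsub/orders \
  -H "Content-Type: application/json" \
  -d '{"orderId": 100}'
```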
## Prerequisites
Before deploying the application to Azure Container Apps, run the `order-process
### Prepare the project
-1. Clone the [sample Dapr application](https://github.com/Azure-Samples/pubsub-dapr-nodejs-servicebus) to your local machine.
+1. Clone the [sample application](https://github.com/Azure-Samples/pubsub-dapr-nodejs-servicebus) to your local machine.
```bash git clone https://github.com/Azure-Samples/pubsub-dapr-nodejs-servicebus.git
Before deploying the application to Azure Container Apps, run the `order-process
cd pubsub-dapr-nodejs-servicebus ```
-### Run the Dapr applications using the Dapr CLI
+### Run the applications using the Dapr CLI
-Start by running the `order-processor` subscriber service with Dapr.
+Start by running the `order-processor` subscriber service.
1. From the sample's root directory, change directories to `order-processor`.
Start by running the `order-processor` subscriber service with Dapr.
npm install ```
-1. Run the `order-processor` service with Dapr.
+1. Run the `order-processor` service.
```bash dapr run --app-port 5001 --app-id order-processing --app-protocol http --dapr-http-port 3501 --resources-path ../components -- npm run start
Start by running the `order-processor` subscriber service with Dapr.
npm install ```
-1. Run the `checkout` service with Dapr.
+1. Run the `checkout` service.
```bash dapr run --app-id checkout --app-protocol http --resources-path ../components -- npm run start
Start by running the `order-processor` subscriber service with Dapr.
dapr stop --app-id order-processor ```
-## Deploy the Dapr application template using Azure Developer CLI
+## Deploy the application template using Azure Developer CLI
-Deploy the Dapr application to Azure Container Apps using [`azd`](/azure/developer/azure-developer-cli/overview).
+Deploy the application to Azure Container Apps using [`azd`](/azure/developer/azure-developer-cli/overview).
### Prepare the project
cd pubsub-dapr-nodejs-servicebus
| Azure Location | The Azure location for your resources. | | Azure Subscription | The Azure subscription for your resources. |
-1. Run `azd up` to provision the infrastructure and deploy the Dapr application to Azure Container Apps in a single command.
+1. Run `azd up` to provision the infrastructure and deploy the application to Azure Container Apps in a single command.
```azdeveloper azd up
Before deploying the application to Azure Container Apps, run the `order-process
### Prepare the project
-1. Clone the [sample Dapr application](https://github.com/Azure-Samples/pubsub-dapr-python-servicebus) to your local machine.
+1. Clone the [sample application](https://github.com/Azure-Samples/pubsub-dapr-python-servicebus) to your local machine.
```bash git clone https://github.com/Azure-Samples/pubsub-dapr-python-servicebus.git
Before deploying the application to Azure Container Apps, run the `order-process
cd pubsub-dapr-python-servicebus ```
-### Run the Dapr applications using the Dapr CLI
+### Run the applications using the Dapr CLI
-Start by running the `order-processor` subscriber service with Dapr.
+Start by running the `order-processor` subscriber service.
1. From the sample's root directory, change directories to `order-processor`.
Start by running the `order-processor` subscriber service with Dapr.
pip3 install -r requirements.txt ```
-1. Run the `order-processor` service with Dapr.
+1. Run the `order-processor` service.
```bash dapr run --app-id order-processor --resources-path ../components/ --app-port 5001 -- python3 app.py
Start by running the `order-processor` subscriber service with Dapr.
pip3 install -r requirements.txt ```
-1. Run the `checkout` service with Dapr.
+1. Run the `checkout` service.
```bash dapr run --app-id checkout --resources-path ../components/ -- python3 app.py
Start by running the `order-processor` subscriber service with Dapr.
dapr stop --app-id order-processor ```
-## Deploy the Dapr application template using Azure Developer CLI
+## Deploy the application template using Azure Developer CLI
-Deploy the Dapr application to Azure Container Apps using [`azd`](/azure/developer/azure-developer-cli/overview).
+Deploy the application to Azure Container Apps using [`azd`](/azure/developer/azure-developer-cli/overview).
### Prepare the project
cd pubsub-dapr-python-servicebus
| Azure Location | The Azure location for your resources. | | Azure Subscription | The Azure subscription for your resources. |
-1. Run `azd up` to provision the infrastructure and deploy the Dapr application to Azure Container Apps in a single command.
+1. Run `azd up` to provision the infrastructure and deploy the application to Azure Container Apps in a single command.
```azdeveloper azd up
Before deploying the application to Azure Container Apps, run the `order-process
### Prepare the project
-1. Clone the [sample Dapr application](https://github.com/Azure-Samples/pubsub-dapr-csharp-servicebus) to your local machine.
+1. Clone the [sample application](https://github.com/Azure-Samples/pubsub-dapr-csharp-servicebus) to your local machine.
```bash git clone https://github.com/Azure-Samples/pubsub-dapr-csharp-servicebus.git
Before deploying the application to Azure Container Apps, run the `order-process
cd pubsub-dapr-csharp-servicebus ```
-### Run the Dapr applications using the Dapr CLI
+### Run the applications using the Dapr CLI
-Start by running the `order-processor` subscriber service with Dapr.
+Start by running the `order-processor` subscriber service.
1. From the sample's root directory, change directories to `order-processor`.
Start by running the `order-processor` subscriber service with Dapr.
dotnet build ```
-1. Run the `order-processor` service with Dapr.
+1. Run the `order-processor` service.
```bash dapr run --app-id order-processor --resources-path ../components/ --app-port 7001 -- dotnet run --project .
Start by running the `order-processor` subscriber service with Dapr.
dotnet build ```
-1. Run the `checkout` service with Dapr.
+1. Run the `checkout` service.
```bash dapr run --app-id checkout --resources-path ../components/ -- dotnet run --project .
Start by running the `order-processor` subscriber service with Dapr.
dapr stop --app-id order-processor ```
-## Deploy the Dapr application template using Azure Developer CLI
+## Deploy the application template using Azure Developer CLI
-Deploy the Dapr application to Azure Container Apps using [`azd`](/azure/developer/azure-developer-cli/overview).
+Deploy the application to Azure Container Apps using [`azd`](/azure/developer/azure-developer-cli/overview).
### Prepare the project
cd pubsub-dapr-csharp-servicebus
| Azure Location | The Azure location for your resources. | | Azure Subscription | The Azure subscription for your resources. |
-1. Run `azd up` to provision the infrastructure and deploy the Dapr application to Azure Container Apps in a single command.
+1. Run `azd up` to provision the infrastructure and deploy the application to Azure Container Apps in a single command.
```azdeveloper azd up
azd down
## Next steps -- Learn more about [deploying Dapr applications to Azure Container Apps](./microservices-dapr.md).
+- Learn more about [deploying applications to Azure Container Apps](./microservices-dapr.md).
- [Enable token authentication for Dapr requests.](./dapr-authentication-token.md) - Learn more about [Azure Developer CLI](/azure/developer/azure-developer-cli/overview) and [making your applications compatible with `azd`](/azure/developer/azure-developer-cli/make-azd-compatible).-- [Scale your Dapr applications using KEDA scalers](./dapr-keda-scaling.md)
+- [Scale your applications using KEDA scalers](./dapr-keda-scaling.md)
container-apps Microservices Dapr Service Invoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr-service-invoke.md
Previously updated : 12/20/2023 Last updated : 08/02/2024 zone_pivot_group_filename: container-apps/dapr-zone-pivot-groups.json zone_pivot_groups: dapr-languages-set
-# Tutorial: Microservices communication using Dapr Service Invocation
+# Tutorial: Microservices communication using Dapr Service Invocation
+
+In this tutorial, you create and run two microservices that communicate securely using auto-mTLS and reliably using built-in retries via [the Dapr Service Invocation API](./dapr-overview.md#supported-dapr-apis-components-and-tooling). You'll:
-In this tutorial, you'll:
> [!div class="checklist"]
-> * Create and run locally two microservices that communicate securely using auto-mTLS and reliably using built-in retries via [Dapr's Service Invocation API](https://docs.dapr.io/developing-applications/building-blocks/service-invocation/service-invocation-overview/).
+> * Run the application locally.
> * Deploy the application to Azure Container Apps via the Azure Developer CLI with the provided Bicep. The sample service invocation project includes:
-1. A `checkout` service that uses Dapr's HTTP proxying capability on a loop to invoke a request on the `order-processor` service.
-1. A `order-processor` service that receives the request from the `checkout` service.
+1. A `checkout` service that uses HTTP proxying on a loop to invoke a request on the `order-processor` service.
+1. An `order-processor` service that receives the request from the `checkout` service.
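Conceptually, each request from `checkout` travels through its Dapr sidecar using the service invocation HTTP API. A minimal sketch of that call; the method name `orders` and sidecar port 3500 are assumptions:

```bash
# Invoke the order-processor app through the checkout sidecar.
# Method name "orders" is an assumption.
curl -X POST http://localhost:3500/v1.0/invoke/order-processor/method/orders \
  -H "Content-Type: application/json" \
  -d '{"orderId": 1}'
```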
## Prerequisites
Before deploying the application to Azure Container Apps, start by running the `
### Prepare the project
-1. Clone the [sample Dapr application](https://github.com/Azure-Samples/svc-invoke-dapr-nodejs) to your local machine.
+1. Clone the [sample applications](https://github.com/Azure-Samples/svc-invoke-dapr-nodejs) to your local machine.
```bash git clone https://github.com/Azure-Samples/svc-invoke-dapr-nodejs.git
Before deploying the application to Azure Container Apps, start by running the `
cd svc-invoke-dapr-nodejs ```
-### Run the Dapr applications using the Dapr CLI
+### Run the applications using the Dapr CLI
Start by running the `order-processor` service.
Start by running the `order-processor` service.
npm install ```
-1. Run the `order-processor` service with Dapr.
+1. Run the `order-processor` service.
```bash dapr run --app-port 5001 --app-id order-processor --app-protocol http --dapr-http-port 3501 -- npm start
Start by running the `order-processor` service.
npm install ```
-1. Run the `checkout` service with Dapr.
+1. Run the `checkout` service.
```bash dapr run --app-id checkout --app-protocol http --dapr-http-port 3500 -- npm start
Start by running the `order-processor` service.
1. Press <kbd>Cmd/Ctrl</kbd> + <kbd>C</kbd> in both terminals to exit out of the service-to-service invocation.
-## Deploy the Dapr application template using Azure Developer CLI
+## Deploy the application template using Azure Developer CLI
-Deploy the Dapr application to Azure Container Apps using [`azd`](/azure/developer/azure-developer-cli/overview).
+Deploy the application to Azure Container Apps using [`azd`](/azure/developer/azure-developer-cli/overview).
### Prepare the project
In a new terminal window, navigate into the sample's root directory.
| Azure Location | The Azure location for your resources. | | Azure Subscription | The Azure subscription for your resources. |
-1. Run `azd up` to provision the infrastructure and deploy the Dapr application to Azure Container Apps in a single command.
+1. Run `azd up` to provision the infrastructure and deploy the application to Azure Container Apps in a single command.
```azdeveloper azd up
Before deploying the application to Azure Container Apps, start by running the `
### Prepare the project
-1. Clone the [sample Dapr application](https://github.com/Azure-Samples/svc-invoke-dapr-python) to your local machine.
+1. Clone the [sample applications](https://github.com/Azure-Samples/svc-invoke-dapr-python) to your local machine.
```bash git clone https://github.com/Azure-Samples/svc-invoke-dapr-python.git
Before deploying the application to Azure Container Apps, start by running the `
cd svc-invoke-dapr-python ```
-### Run the Dapr applications using the Dapr CLI
+### Run the applications using the Dapr CLI
Start by running the `order-processor` service.
Start by running the `order-processor` service.
pip3 install -r requirements.txt ```
-1. Run the `order-processor` service with Dapr.
+1. Run the `order-processor` service.
```bash dapr run --app-port 8001 --app-id order-processor --app-protocol http --dapr-http-port 3501 -- python3 app.py
Start by running the `order-processor` service.
pip3 install -r requirements.txt ```
-1. Run the `checkout` service with Dapr.
+1. Run the `checkout` service.
```bash dapr run --app-id checkout --app-protocol http --dapr-http-port 3500 -- python3 app.py
Start by running the `order-processor` service.
1. Press <kbd>Cmd/Ctrl</kbd> + <kbd>C</kbd> in both terminals to exit out of the service-to-service invocation.
-## Deploy the Dapr application template using Azure Developer CLI
+## Deploy the application template using Azure Developer CLI
-Deploy the Dapr application to Azure Container Apps using [`azd`](/azure/developer/azure-developer-cli/overview).
+Deploy the application to Azure Container Apps using [`azd`](/azure/developer/azure-developer-cli/overview).
### Prepare the project
Deploy the Dapr application to Azure Container Apps using [`azd`](/azure/develop
| Azure Location | The Azure location for your resources. | | Azure Subscription | The Azure subscription for your resources. |
-1. Run `azd up` to provision the infrastructure and deploy the Dapr application to Azure Container Apps in a single command.
+1. Run `azd up` to provision the infrastructure and deploy the application to Azure Container Apps in a single command.
```azdeveloper azd up
Before deploying the application to Azure Container Apps, start by running the `
### Prepare the project
-1. Clone the [sample Dapr application](https://github.com/Azure-Samples/svc-invoke-dapr-csharp) to your local machine.
+1. Clone the [sample applications](https://github.com/Azure-Samples/svc-invoke-dapr-csharp) to your local machine.
```bash git clone https://github.com/Azure-Samples/svc-invoke-dapr-csharp.git
Before deploying the application to Azure Container Apps, start by running the `
cd svc-invoke-dapr-csharp ```
-### Run the Dapr applications using the Dapr CLI
+### Run the applications using the Dapr CLI
-Start by running the `order-processor` callee service with Dapr.
+Start by running the `order-processor` callee service.
1. From the sample's root directory, change directories to `order-processor`.
Start by running the `order-processor` callee service with Dapr.
dotnet build ```
-1. Run the `order-processor` service with Dapr.
+1. Run the `order-processor` service.
```bash dapr run --app-port 7001 --app-id order-processor --app-protocol http --dapr-http-port 3501 -- dotnet run
Start by running the `order-processor` callee service with Dapr.
dotnet build ```
-1. Run the `checkout` service with Dapr.
+1. Run the `checkout` service.
```bash dapr run --app-id checkout --app-protocol http --dapr-http-port 3500 -- dotnet run
Start by running the `order-processor` callee service with Dapr.
1. Press <kbd>Cmd/Ctrl</kbd> + <kbd>C</kbd> in both terminals to exit out of the service-to-service invocation.
-## Deploy the Dapr application template using Azure Developer CLI
+## Deploy the application template using Azure Developer CLI
-Deploy the Dapr application to Azure Container Apps using [`azd`](/azure/developer/azure-developer-cli/overview).
+Deploy the application to Azure Container Apps using [`azd`](/azure/developer/azure-developer-cli/overview).
### Prepare the project
In a new terminal window, navigate into the [sample's](https://github.com/Azure-
| Azure Location | The Azure location for your resources. | | Azure Subscription | The Azure subscription for your resources. |
-1. Run `azd up` to provision the infrastructure and deploy the Dapr application to Azure Container Apps in a single command.
+1. Run `azd up` to provision the infrastructure and deploy the application to Azure Container Apps in a single command.
```azdeveloper azd up
container-apps Microservices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices.md
Previously updated : 06/23/2022 Last updated : 08/02/2024
- Independent [scaling](scale-app.md), [versioning](application-lifecycle-management.md), and [upgrades](application-lifecycle-management.md) - [Service discovery](connect-apps.md)-- Native [Dapr integration](./dapr-overview.md)
+- [Dapr integration](./dapr-overview.md)
:::image type="content" source="media/microservices/azure-container-services-microservices.png" alt-text="Container apps are deployed as microservices.":::
container-apps Service Discovery Resiliency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/service-discovery-resiliency.md
Previously updated : 11/06/2023 Last updated : 08/02/2024 # Customer Intent: As a developer, I'd like to learn how to make my container apps resilient using Azure Container Apps.
resource myPolicyDoc 'Microsoft.App/containerApps/resiliencyPolicies@2023-11-02-
### Before you begin
-Log-in to the Azure CLI:
+Log in to the Azure CLI:
```azurecli az login
cost-management-billing Direct Ea Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/direct-ea-administration.md
Anyone with access to view the request can view its details. In the request deta
:::image type="content" source="./media/direct-ea-administration/request-details.png" alt-text="Screenshot showing request details to view Accept ownership URL." lightbox="./media/direct-ea-administration/request-details.png" :::
+> [!NOTE]
+> You can now view the **Service tenant ID** for subscriptions billed to your account on the **Azure Subscriptions** page under **Cost Management + Billing**.
## Cancel a subscription Only account owners can cancel their own subscriptions.
The Azure EA customer is opted out of the extended term, and the Azure EA enroll
**Transferred**<br> Enrollments where all associated accounts and services were transferred to a new enrollment appear with a transferred status.
- > [!NOTE]
- > Enrollments don't automatically transfer if a new enrollment number is generated at renewal. You must include your prior enrollment number in your renewal paperwork to facilitate an automatic transfer.
-
+> [!NOTE]
+> Enrollments don't automatically transfer if a new enrollment number is generated at renewal. You must include your prior enrollment number in your renewal paperwork to facilitate an automatic transfer.
## Related content - If you need to create an Azure support request for your EA enrollment, see [How to create an Azure support request for an Enterprise Agreement issue](../troubleshoot-billing/how-to-create-azure-support-request-ea.md).
ddos-protection Test Through Simulations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/test-through-simulations.md
RedWolf's [DDoS Testing](https://www.redwolfsecurity.com/services/) service suit
- **Attack Vectors**: Unique cloud attacks designed by RedWolf. For more information about RedWolf attack vectors, see [Technical Details](https://www.redwolfsecurity.com/redwolf-technical-details/). - **Guided Service**: Leverage RedWolf's team to run tests. For more information about RedWolf's guided service, see [Guided Service](https://www.redwolfsecurity.com/managed-testing-explained/).
- - **Self Service**: Leverage RedWol to run tests yourself. For more information about RedWolf's self-service, see [Self Service](https://www.redwolfsecurity.com/self-serve-testing/).
+ - **Self Service**: Leverage RedWolf to run tests yourself. For more information about RedWolf's self-service, see [Self Service](https://www.redwolfsecurity.com/self-serve-testing/).
## MazeBolt
defender-for-cloud Concept Regulatory Compliance Standards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-regulatory-compliance-standards.md
The following standards are available in Defender for Cloud:
| Australian Government ISM Protected | AWS Foundational Security Best Practices | Brazilian General Personal Data Protection Law (LGPD)| | Canada Federal PBMM | AWS Well-Architected Framework | California Consumer Privacy Act (CCPA)| | CIS Azure Foundations | Brazilian General Personal Data Protection Law (LGPD) | CIS Controls|
-| CIS Azure Kubernetes Service (AKS)| California Consumer Privacy Act (CCPA) | CIS GCP Foundations|
-| CMMC | CIS AWS Foundations | CIS Google Cloud Platform Foundation Benchmark|
-| FedRAMP 'H' & 'M' | CRI Profile | CIS Google Kubernetes Engine (GKE) Benchmark|
-| HIPAA/HITRUST | CSA Cloud Controls Matrix (CCM) | CRI Profile|
-| ISO/IEC 27001 | GDPR | CSA Cloud Controls Matrix (CCM)|
-| New Zealand ISM Restricted | ISO/IEC 27001 | Cybersecurity Maturity Model Certification (CMMC)|
-| NIST SP 800-171 | ISO/IEC 27002 | FFIEC Cybersecurity Assessment Tool (CAT)|
-| NIST SP 800-53 | NIST Cybersecurity Framework (CSF) | GDPR|
-| PCI DSS | NIST SP 800-172 | ISO/IEC 27001|
-| RMIT Malaysia | PCI DSS | ISO/IEC 27002|
-| SOC 2 | | ISO/IEC 27017|
+| CIS Azure Kubernetes Service (AKS) Benchmark | California Consumer Privacy Act (CCPA) | CIS GCP Foundations|
+| CMMC |CIS Amazon Elastic Kubernetes Service (EKS) Benchmark| CIS Google Cloud Platform Foundation Benchmark|
+| FedRAMP 'H' & 'M' | CIS AWS Foundations | CIS Google Kubernetes Engine (GKE) Benchmark|
+| HIPAA/HITRUST | CRI Profile | CRI Profile|
+| ISO/IEC 27001 | CSA Cloud Controls Matrix (CCM) | CSA Cloud Controls Matrix (CCM)|
+| New Zealand ISM Restricted | GDPR | Cybersecurity Maturity Model Certification (CMMC)|
+| NIST SP 800-171 | ISO/IEC 27001 | FFIEC Cybersecurity Assessment Tool (CAT)|
+| NIST SP 800-53 | ISO/IEC 27002 | GDPR|
+| PCI DSS | NIST Cybersecurity Framework (CSF) | ISO/IEC 27001|
+| RMIT Malaysia | NIST SP 800-172 | ISO/IEC 27002|
+| SOC 2 | PCI DSS | ISO/IEC 27017|
| SWIFT CSP CSCF | | NIST Cybersecurity Framework (CSF)| | UK OFFICIAL and UK NHS | | NIST SP 800-53 | | | | NIST SP 800-171|
defender-for-iot Dell Edge 3200 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/dell-edge-3200.md
The following image shows a view of the Dell Edge Gateway 3200 back panel:
|Management|iDRAC Group Manager, Disabled | |Rack support| Wall mount/ DIN rail support |
-## Dell Edge Gateway 3200 - Bill of Materials
+## Dell Edge Gateway 3200 - Bill of materials
|type|Description| |-|-|
defender-for-iot Dell Poweredge R660 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/dell-poweredge-r660.md
+
+ Title: Dell PowerEdge R660 for operational technology (OT) monitoring - Microsoft Defender for IoT
+description: Learn about the Dell PowerEdge R660 appliance's configuration when used for OT monitoring with Microsoft Defender for IoT in enterprise deployments.
Last updated : 07/29/2024+++
+# Dell PowerEdge R660
+
+This article describes the Dell PowerEdge R660 appliance, supported for operational technology (OT) sensors in an enterprise deployment.
+The Dell PowerEdge R660 is also available for the on-premises management console.
+
+|Appliance characteristic | Description|
+|||
+|**Hardware profile** | R660 |
+|**Performance** | Max bandwidth: 3 Gbps<br>Max devices: 12,000 |
+|**Physical Specifications** | Mounting: 1U with rail kit<br>Ports: 6x RJ45 1 GbE|
+|**Status** | Supported, available as a preconfigured appliance|
+
+The following image shows a view of the Dell PowerEdge R660 front panel:
++
+The following image shows a view of the Dell PowerEdge R660 back panel:
++
+## Specifications
+
+|Component| Technical specifications|
+|:-|:-|
+|Chassis| 1U rack server|
+|Dimensions| Height: 1.68 in / 42.8 mm <br>Width: 18.97 in / 482.0 mm<br>Depth: 23.04 in / 585.3 mm (without bezel) 23.57 in / 598.9 mm (with bezel)|
+|Processor| Intel Xeon E-2434 3.4 GHz <br>8M Cache<br> 4C/8T, Turbo, HT (55 W) DDR5-4800|
+|Memory| 128 GB |
+|Storage| 7.2 TB Hard Drive |
+|Network controller| - PowerEdge R660 Motherboard with Broadcom 5720 Dual Port 1 Gb On-Board LOM, <br>- PCIe Blank Filler, Low Profile. <br>- Intel Ethernet i350 Quad Port 1 GbE BASE-T Adapter, PCIe Low Profile, V2|
+|Management|iDRAC Group Manager, Disabled|
+|Rack support| ReadyRails Sliding Rails With Cable Management Arm|
+
+## Dell PowerEdge R660 - Bill of materials
+
+### Components
+
+|Quantity|PN| Module| Description|
+|-||-||
+|1| 210-BFUZ | Base | PowerEdge R660xs |
+|1| 461-AAIG | Trusted platform module | Trusted platform module 2.0 V3 |
+|1| 470-AFQI | Chassis configuration | 2.5" Chassis with up to 8 Hard Drives (SAS/SATA), 2 CPU |
+|1| 338-CKVW | Processor | Intel Xeon Silver 4410T 2.7 GHz 10C/20T, 16 GT/s, 27M cache, Turbo, HT (150 W) DDR5-4000 |
+|1| 338-CKVW | Additional processor | Intel Xeon Silver 4410T 2.7 GHz 10C/20T, 16 GT/s, 27M cache, Turbo, HT (150 W) DDR5-4000 |
+|1| 379-BDCO | Additional processor | Additional processor selected |
+|1| 338-CHQT | Processor thermal configuration | Heatsink for 2 CPU configuration (CPU less than or equal to 150 W)|
+|1| 370-AAIP | Memory configuration type | Performance Optimized |
+|1| 370-AHCL | Memory DIMM type and speed | 4800-MT/s RDIMMs |
+|4| 370-AGZP | Memory capacity | 32 GB RDIMM, 4,800 MT/s dual rank |
+|1| 780-BCDS | RAID configuration | unconfigured RAID |
+|1| 405-AAZB | RAID controller | PERC H755 SAS Front |
+|1| 750-ACFR | RAID controller | Front PERC Mechanical Parts, front load |
+|6| 161-BCBX | Hard drives | 2.4 TB Hard Drive SAS ISE 12 Gbps 10k 512e 2.5in Hot Plug |
+|1| 384-BBBH | BIOS and Advanced System Configuration Settings | Power Saving BIOS Settings |
+|1| 387-BBEY | Advanced System Configurations | No Energy Star |
+|1| 384-BDJC | Fans | Standard Fan X7 |
+|1| 528-CTIC | Embedded Systems Management | iDRAC9, Enterprise 16G |
+|1| 450-AKLF | Power supply | Dual, Redundant(1+1), Hot-Plug Power Supply, 1100 W MM(100-240Vac) Titanium |
+|2| 450-AADY | Power cords | C13 to C14, PDU Style, 10 AMP, 6.5 Feet (2 m), Power Cord |
+|1| 330-BCCE | PCIe Riser | Riser Config 6, Low profile, 1x 16 LP slots (Gen 5) + 1x8 LP Slot (Gen 5), 2 CPU |
+|1| 384-BDKV | Motherboard | PowerEdge R660xs Motherboard with Broadcom 5720 Dual Port 1 Gb On-Board LOM |
+|1| 540-BCOB | Network daughter card | Broadcom 5720 Quad Port 1 GbE BASE-T Adapter, OCP NIC 3.0 |
+|1| 350-BCEL | Quick sync | Quick Sync 2 (At-the-box mgmt) |
+|1| 379-BCSF | Password | iDRAC, Factory Generated Password |
+|1| 379-BCQX | IDRAC service module | iDRAC Service Module (ISM), NOT Installed |
+|1| 379-BCQV | Group manager | iDRAC group manager, Enabled |
+|1| 325-BEVH | Bezel | PowerEdge 1U Standard Bezel |
+|1| 350-BEUF | Bezel | Dell Luggage Tag, 0/6/8/10 |
+|1| 770-BCJI | Rack rails | A11 drop-in/stab-in Combo Rails Without Cable Management Arm |
+|1| 340-DLRR | Shipping | PowerEdge R660XS Shipping EMEA1 (English/French/German/Spanish/Russian/Hebrew) |
+|1| 340-DFKP | Shipping material | PowerEdge R660xs, 8x2.5, Short Drive Shipping Material |
+|1| 389-FBMD | Regulatory |PowerEdge R660xs HS5610 Label, CE and CCC Marking, for below 1,300 W PSU |
+|1| 683-11870 | Dell
+
+### Software
+
+|Quantity|PN| Module| Description|
+|-||-||
+|1| 800-BBDM | Advanced system configuration | UEFI BIOS Boot Mode with GPT Partition |
+|1| 528-COYT | Embedded Systems Management | Secured Component Verification |
+|1| 611-BBBF | Operating system | No operating system |
+|1| 605-BBFN | OS media kits | No media required |
+|1| 631-AACK | System documentation | No Systems Documentation, No OpenManage DVD Kit |
+
+### Service
+
+|Quantity|PN| Module| Description|
+|-||-||
+|1| 293-10049 | Shipping Box Labels - Standard | Order Configuration Shipbox Label (Ship Date, Model, Processor Speed, HDD Size, RAM) |
+|1| 865-BBLL | Dell
+|1| 865-BBLM | Dell
+|1| 709-BBIX | Dell
+
+## Install Defender for IoT software on the Dell R660
+
+This procedure describes how to install Defender for IoT software on the Dell R660.
+
+The installation process takes about 20 minutes. During the installation, the system restarts several times.
+
+To install Defender for IoT software:
+
+1. Connect the screen and keyboard to the appliance, and then connect to the CLI.
+
+1. Connect an external CD or disk-on-key that contains the software you downloaded from the Azure portal.
+
+1. Start the appliance.
+
+1. Continue with the generic procedure for installing Defender for IoT software. For more information, see [Defender for IoT software installation](../how-to-install-software.md).
+
+## Next steps
+
+Continue learning about the system requirements for physical or virtual appliances. For more information, see [Which appliances do I need?](../ot-appliance-sizing.md).
+
+Then, use any of the following procedures to continue:
+
+- [Download software for an OT sensor](../ot-deploy/install-software-ot-sensor.md#download-software-files-from-the-azure-portal)
+- [Download software files for an on-premises management console](../legacy-central-management/install-software-on-premises-management-console.md#download-software-files-from-the-azure-portal)
deployment-environments How To Configure Extensibility Bicep Container Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-extensibility-bicep-container-image.md
# Configure container image to execute deployments with ARM and Bicep
-In this article, you learn how to build custom Azure Resource Manager (ARM) and Bicep container images to deploy your environment definitions in Azure Deployment Environments (ADE).
+In this article, you learn how to build custom Azure Resource Manager (ARM) and Bicep container images to deploy your [environment definitions](configure-environment-definition.md) in Azure Deployment Environments (ADE).
An environment definition comprises at least two files: a template file, like *azuredeploy.json* or *main.bicep*, and a manifest file named *environment.yaml*. ADE uses containers to deploy environment definitions, and natively supports the ARM and Bicep IaC frameworks.
-The ADE extensibility model enables you to create custom container images to use with your environment definitions. By using the extensibility model, you can create your own custom container images, and store them in a container registry like DockerHub. You can then reference these images in your environment definitions to deploy your environments.
+The ADE extensibility model enables you to create custom container images to use with your environment definitions. By using the extensibility model, you can create your own custom container images, and store them in a container registry like Azure Container Registry (ACR) or Docker Hub. You can then reference these images in your environment definitions to deploy your environments.
The ADE team provides a selection of images to get you started, including a core image, and an Azure Resource Manager (ARM)/Bicep image. You can access these sample images in the [Runner-Images](https://aka.ms/deployment-environments/runner-images) folder.
echo "{\"outputs\": $deploymentOutput}" > $ADE_OUTPUTS
## Make the custom image accessible to ADE
-You must build your Docker image and push it to your container registry to make it available for use in ADE. You can build your image using the Docker CLI, or by using a script provided by ADE.
+You must build your Docker image and push it to your container registry to make it available for use in ADE.
+
+You can build your image using the Docker CLI, or by using a script provided by ADE.
Select the appropriate tab to learn more about each approach.
docker build . -t {YOUR_REGISTRY}.azurecr.io/customImage:1.0.0
### Push the Docker image to a registry
-In order to use custom images, you need to set up a publicly accessible image registry with anonymous image pull enabled. This way, Azure Deployment Environments can access your custom image to execute in our container.
+In order to use custom images, you need to store them in a container registry. Azure Container Registry (ACR) is highly recommended for this. Due to its tight integration with ADE, the image can be published without allowing public anonymous pull access.
+
+It's also possible to store the image in a different container registry such as Docker Hub, but in that case it needs to be publicly accessible.
-Azure Container Registry is an Azure offering that stores container images and similar artifacts.
+> [!CAUTION]
+> Enabling anonymous (unauthenticated) pull access makes all registry content publicly available for read (pull) actions.
+
+To use a custom image stored in ACR, you need to ensure that ADE has appropriate permissions to access your image. Anonymous pull access is disabled by default in ACR.
To create a registry, which can be done through the Azure CLI, the Azure portal, PowerShell commands, and more, follow one of the [quickstarts](/azure/container-registry/container-registry-get-started-azure-cli).
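For example, a minimal Azure CLI sketch (the resource group and registry names here are placeholders):

```azurecli
# Create a resource group and a container registry in it.
az group create --name my-ade-rg --location eastus
az acr create --resource-group my-ade-rg --name myaderegistry --sku Standard
```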
+#### Use a public registry with anonymous pull
+ To set up your registry to have anonymous image pull enabled, run the following commands in the Azure CLI: ```azurecli
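# A hedged sketch, not necessarily the article's exact commands (the registry
# name is a placeholder); anonymous pull requires the Standard or Premium tier.
az acr update --name myregistry --anonymous-pull-enabled true
```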
When you're ready to push your image to your registry, run the following command
docker push {YOUR_REGISTRY}.azurecr.io/{YOUR_IMAGE_LOCATION}:{YOUR_TAG} ```
+#### Use ACR with secured access
+
+By default, access to pull or push content from an Azure Container Registry is only available to authenticated users. You can further secure access to ACR by limiting access from certain networks and assigning specific roles.
+
+##### Limit network access
+
+To secure network access to your ACR, you can limit access to your own networks, or disable public network access entirely. If you limit network access, you must enable the firewall exception *Allow trusted Microsoft services to access this container registry*.
+
+To disable access from public networks (a CLI equivalent follows these steps):
+
+1. [Create an ACR instance](/azure/container-registry/container-registry-get-started-azure-cli) or use an existing one.
+1. In the Azure portal, go to the ACR that you want to configure.
+1. On the left menu, under **Settings**, select **Networking**.
+1. On the Networking page, on the **Public access** tab, under **Public network access**, select **Disabled**.
+
+ :::image type="content" source="media/how-to-configure-extensibility-bicep-container-image/container-registry-network-settings.png" alt-text="Screenshot of the Azure portal, showing the ACR network settings, with Public access and Disabled highlighted.":::
+
+1. Under **Firewall exception**, check that **Allow trusted Microsoft services to access this container registry** is selected, and then select **Save**.
+
+ :::image type="content" source="media/how-to-configure-extensibility-bicep-container-image/container-registry-network-disable-public.png" alt-text="Screenshot of the ACR network settings, with Allow trusted Microsoft services to access this container registry and Save highlighted.":::
+
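+The same configuration can be applied with the Azure CLI; a hedged sketch with a placeholder registry name:
+
+```azurecli
+# Disabling public network access requires the Premium service tier.
+az acr update --name myregistry --public-network-enabled false
+
+# Keep the exception that lets trusted Microsoft services (including ADE) reach the registry.
+az acr update --name myregistry --allow-trusted-services true
+```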
+##### Assign the AcrPull role
+
+Creating environments by using container images uses the ADE infrastructure, including projects and environment types. Each project has one or more project environment types, which need read access to the container image that defines the environment to be deployed. To access the images within your ACR securely, assign the AcrPull role to each project environment type.
+
+To assign the AcrPull role to the Project Environment Type:
+
+1. In the Azure portal, go to the ACR that you want to configure.
+1. On the left menu, select **Access Control (IAM)**.
+1. Select **Add** > **Add role assignment**.
+1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.yml).
+
+ | Setting | Value |
+ | | |
+ | **Role** | Select **AcrPull**. |
+ | **Assign access to** | Select **User, group, or service principal**. |
+ | **Members** | Enter the name of the project environment type that needs to access the image in the container. |
+
+ The project environment type displays like the following example:
+
+ :::image type="content" source="media/how-to-configure-extensibility-bicep-container-image/container-registry-access-control.png" alt-text="Screenshot of the Select members pane, showing a list of project environment types with part of the name highlighted.":::
+
+In this configuration, ADE uses the managed identity of the project environment type (PET), whether system-assigned or user-assigned.
+
+> [!TIP]
+> This role assignment has to be made for every project environment type. It can be automated through the Azure CLI.
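+For example, a hedged sketch of that automation (the registry name and the identity's principal ID are placeholders):
+
+```azurecli
+# Look up the registry's resource ID, then grant the project environment
+# type's managed identity pull-only access to it.
+acrId=$(az acr show --name myregistry --query id --output tsv)
+
+az role assignment create \
+  --assignee <principal-id-of-the-PET-managed-identity> \
+  --role AcrPull \
+  --scope $acrId
+```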
+When you're ready to push your image to your registry, run the following command:
+
+```docker
+docker push {YOUR_REGISTRY}.azurecr.io/{YOUR_IMAGE_LOCATION}:{YOUR_TAG}
+```
++ ## [Build a container image with a script](#tab/build-a-container-image-with-a-script/) [!INCLUDE [custom-image-script](includes/custom-image-script.md)]
event-hubs Azure Event Hubs Kafka Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/azure-event-hubs-kafka-overview.md
Conceptually, Kafka and Event Hubs are very similar. They're both partitioned lo
### Compression
+Kafka compression for Event Hubs is currently supported only in the Premium and Dedicated tiers.
+ The client-side [compression](https://cwiki.apache.org/confluence/display/KAFKA/Compression) feature in Apache Kafka clients conserves compute resources and bandwidth by compressing a batch of multiple messages into a single message on the producer side and decompressing the batch on the consumer side. The Apache Kafka broker treats the batch as a special message. Kafka producer application developers can enable message compression by setting the compression.type property. Azure Event Hubs currently supports `gzip` compression.
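As an illustration, a minimal console-producer sketch against the Event Hubs Kafka endpoint (the namespace, event hub name, and `client.properties` file are placeholders, not from this article; `client.properties` is assumed to hold the SASL_SSL connection settings for the namespace):

```bash
# Produce with gzip compression enabled via the compression.type property.
bin/kafka-console-producer.sh \
  --bootstrap-server <your-namespace>.servicebus.windows.net:9093 \
  --topic <your-event-hub> \
  --producer.config client.properties \
  --producer-property compression.type=gzip
```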
hdinsight-aks Hdinsight Aks Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/release-notes/hdinsight-aks-release-notes-archive.md
Title: Archived release notes for Azure HDInsight on AKS
description: Archived release notes for Azure HDInsight on AKS. Get development tips and details for Trino, Flink, and Spark. Previously updated : 03/21/2024 Last updated : 08/05/2024 # Azure HDInsight on AKS archived release notes Azure HDInsight on AKS is one of the most popular services among enterprise customers for open-source analytics on Azure. If you would like to subscribe to release notes, watch releases on this [GitHub repository](https://github.com/Azure/HDInsight-on-aks/releases).
+### Release date: March 20, 2024
+
+**This release applies to the following**
+
+- Cluster Pool Version: 1.1
+- Cluster Version: 1.1.1
+- AKS version: 1.27
++
+### New Features
+
+**Apache Flink Application Mode Cluster**
+
+Application mode clusters are designed to support dedicated resources for large and long-running jobs. When you have resource-intensive or extensive data processing tasks, you can use the [Application Mode Cluster](https://flink.apache.org/2020/07/14/application-deployment-in-flink-current-state-and-the-new-application-mode/#application-mode). This mode allows you to allocate dedicated resources for specific Apache Flink applications, ensuring that they have the necessary computing power and memory to handle large workloads effectively.
+
+For more information, see [Apache Flink Application Mode cluster on HDInsight on AKS](../flink/application-mode-cluster-on-hdinsight-on-aks.md).
+
+**Private Clusters for HDInsight on AKS**
+
+With private clusters, and outbound cluster settings you can now control ingress and egress traffic from HDInsight on AKS cluster pools and clusters.
+
+- Use Azure Firewall or Network Security Groups (NSGs) to control the egress traffic, when you opt to use outbound cluster pool with load balancer.
+- Use Outbound cluster pool with User defined routing to control egress traffic at the subnet level.
+- Use Private AKS cluster feature - To ensure AKS control plane, or API server has internal IP addresses. The network traffic between AKS Control plane / API server and HDInsight on AKS node pools (clusters) remains on the private network only.
+- Avoid creating public IPs for the cluster. Use the private ingress feature on your clusters.
+
+For more information, see [Control network traffic from HDInsight on AKS Cluster pools and cluster](../control-egress-traffic-from-hdinsight-on-aks-clusters.md).
+
+**In place Upgrade**
+
+Upgrade your clusters and cluster pools with the latest software updates. This means that you can enjoy the latest cluster package hotfixes, security updates, and AKS patches, without recreating clusters. For more information, see [Upgrade your HDInsight on AKS clusters and cluster pools](../in-place-upgrade.md).
+
+> [!IMPORTANT]
+> To benefit from all these **latest features**, you must create a new cluster pool with pool version 1.1 and cluster version 1.1.1.
+
+### Known issues
+
+- **Workload identity limitation:**
+ - There's a known [limitation](/azure/aks/workload-identity-overview#limitations) when transitioning to workload identity. This limitation is due to the permission-sensitive nature of FIC operations. Users can't delete a cluster by deleting the resource group. Cluster deletion requests must be triggered by the application, user, or principal with FIC delete permissions. If the FIC deletion fails, the high-level cluster deletion also fails.
+ - **User Assigned Managed Identities (UAMI)** support – There's a limit of 20 FICs per UAMI. You can only create 20 federated identity credentials on an identity. In an HDInsight on AKS cluster, FICs (federated identity credentials) and SAs (service accounts) have a one-to-one mapping, and only 20 SAs can be created against an MSI. If you want to create more clusters, you're required to provide different MSIs to overcome the limitation.
+ - Creation of federated identity credentials is currently not supported on user-assigned managed identities created in [these regions](/entra/workload-id/workload-identity-federation-considerations#unsupported-regions-user-assigned-managed-identities).
+
+
+### Operating System version
+
+- Mariner OS 2.0
+
+**Workload versions**
+
+|Workload|Version|
+| -- | -- |
+|Trino | 426 |
+|Flink | 1.17.0 |
+|Apache Spark | 3.3.1 |
+
+**Supported Java and Scala versions**
+
+|Workload |Java|Scala|
+| -- | -- | -- |
+|Trino |Open JDK 17.0.7  |- |
+|Flink |Open JDK 11.0.21 |2.12.7 |
+|Spark |Open JDK 1.8.0_345  |2.12.15 |
+
+The preview is available in the following [regions](../overview.md#region-availability-public-preview).
+
+If you have any more questions, contact [Azure Support](https://ms.portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview) or refer to the [Support options](../hdinsight-aks-support-help.md) page. If you have product-specific feedback, write to us at [aka.ms/askhdinsight](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR6HHTBN7UDpEhLm8BJmDhGJURDhLWEhBVE5QN0FQRUpHWDg4ODlZSDA4RCQlQCN0PWcu).
++ ### Release date: February 05, 2024 **This release applies to the following**
hdinsight-aks Hdinsight Aks Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/release-notes/hdinsight-aks-release-notes.md
Title: Release notes for Azure HDInsight on AKS
description: Latest release notes for Azure HDInsight on AKS. Get development tips and details for Trino, Flink, Spark, and more. Previously updated : 03/20/2024 Last updated : 08/05/2024 # Azure HDInsight on AKS release notes
You can refer to [What's new](../whats-new.md) page for all the details of the f
## Release Information
-### Release date: March 20, 2024
+### Release date: Aug 05, 2024
**This release applies to the following** -- Cluster Pool Version: 1.1-- Cluster Version: 1.1.1
+- Cluster Pool Version: 1.2
+- Cluster Version: 1.2.1
- AKS version: 1.27 - ### New Features
-**Apache Flink Application Mode Cluster**
-
-Application mode clusters are designed to support dedicated resources for large and long-running jobs. When you have resource-intensive or extensive data processing tasks, you can use the [Application Mode Cluster](https://flink.apache.org/2020/07/14/application-deployment-in-flink-current-state-and-the-new-application-mode/#application-mode). This mode allows you to allocate dedicated resources for specific Apache Flink applications, ensuring that they have the necessary computing power and memory to handle large workloads effectively.
-
-For more information, see [Apache Flink Application Mode cluster on HDInsight on AKS](../flink/application-mode-cluster-on-hdinsight-on-aks.md).
+**MSI based SQL authentication**
+Users can now authenticate to an external Azure SQL Database metastore with MSI instead of user ID/password authentication. This feature helps further secure the cluster's connection to the metastore.
-**Private Clusters for HDInsight on AKS**
+**Configurable VM SKUs for Head node, SSH node**
+This functionality allows users to choose specific SKUs for head nodes, worker nodes, and SSH nodes, offering the flexibility to select according to the use case and the potential to lower total cost of ownership (TCO).
-With private clusters, and outbound cluster settings you can now control ingress and egress traffic from HDInsight on AKS cluster pools and clusters.
+**Multiple MSI in cluster**
+Users can configure multiple MSIs for cluster admin operations and for job-related resource access. This feature allows users to demarcate and control access to the cluster and to the data in the storage account.
+For example, one MSI for access to data in the storage account and a dedicated MSI for cluster operations.
-- Use Azure Firewall or Network Security Groups (NSGs) to control the egress traffic, when you opt to use outbound cluster pool with load balancer.-- Use Outbound cluster pool with User defined routing to control egress traffic at the subnet level.-- Use Private AKS cluster feature - To ensure AKS control plane, or API server has internal IP addresses. The network traffic between AKS Control plane / API server and HDInsight on AKS node pools (clusters) remains on the private network only.-- Avoid creating public IPs for the cluster. Use private ingress feature on your clusters.
+### Updated
-For more information, see [Control network traffic from HDInsight on AKS Cluster pools and cluster](../control-egress-traffic-from-hdinsight-on-aks-clusters.md).
+**Script action**
+Script actions can now be added with sudo user permission. Users can install multiple dependencies, including custom JARs, to customize clusters as required.
-**In place Upgrade**
+**Library Management**
+A Maven repository shortcut feature was added to Library Management in this release. Users can now install Maven dependencies directly from open-source repositories.
-Upgrade your clusters and cluster pools with the latest software updates. This means that you can enjoy the latest cluster package hotfixes, security updates, and AKS patches, without recreating clusters. For more information, see [Upgrade your HDInsight on AKS clusters and cluster pools](../in-place-upgrade.md).
+**Spark 3.4**
+The Spark 3.4 update brings a range of new features, including:
+* API enhancements
+* Structured streaming improvements
+* Improved usability and developer experience
> [!IMPORTANT]
-> To take benefit of all these **latest features**, you are required to create a new cluster pool with 1.1 and cluster version 1.1.1.
+> To benefit from all these **latest features**, you must create a new cluster pool with pool version 1.2 and cluster version 1.2.1.
### Known issues
Upgrade your clusters and cluster pools with the latest software updates. This m
|Workload|Version| | -- | -- |
-|Trino | 426 |
+|Trino | 440 |
|Flink | 1.17.0 |
-|Apache Spark | 3.3.1 |
+|Apache Spark | 3.4 |
**Supported Java and Scala versions** |Workload |Java|Scala| | -- | -- | -- |
-|Trino |Open JDK 17.0.7  |- |
+|Trino |Open JDK 21.0.2  |- |
|Flink |Open JDK 11.0.21 |2.12.7 | |Spark |Open JDK 1.8.0_345  |2.12.15 |
hdinsight Hdinsight Hadoop Oms Log Analytics Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-oms-log-analytics-tutorial.md
For the log table mappings from the classic Azure Monitor integration to the new
#### [Classic Azure Monitor experience](#tab/previous)
+> [!IMPORTANT]
+> On 31 August 2024, Azure is retiring the Classic Azure Monitor experience on HDInsight.
+ ## Prerequisites * A Log Analytics workspace. You can think of this workspace as a unique Azure Monitor logs environment with its own data repository, data sources, and solutions. For the instructions, see [Create a Log Analytics workspace](../azure-monitor/vm/monitor-virtual-machine.md).
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/release-notes.md
Azure API for FHIR provides a fully managed deployment of the Microsoft FHIR Server for Azure. The server is an implementation of the [FHIR](https://hl7.org/fhir) standard. This document provides details about the features and enhancements made to Azure API for FHIR.
+## **July 2024**
+
+### FHIR service
+
+**Bug Fixes**
+
+**Fixed: Exporting Data as SMART User**
+Exporting data as a SMART user no longer requires write scopes. Previously, it was necessary to grant "write" privileges to a SMART user for exporting data, which implied higher privilege levels. To initiate an export job as a SMART user, ensure the user is a member of the FHIR export role in RBAC and requests the "read" SMART clinical scope.
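+As a hedged illustration (the service URL and token are placeholders, and the headers follow the standard FHIR bulk export pattern rather than anything specific to this release), such a user could kick off an export like this:
+
+```http
+GET https://<your-fhir-service>.azurehealthcareapis.com/$export
+Accept: application/fhir+json
+Prefer: respond-async
+Authorization: Bearer <token acquired with the "read" SMART clinical scope>
+```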
+
+**Fixed: Updating Status Code from HTTP 500 to HTTP 400**
+During a patch operation, if the payload requested an update for a resource type other than Parameter, an internal server error (HTTP 500) was initially thrown. This has been updated to throw an HTTP 400 error instead.
+ ## **May 2024** ### FHIR service
healthcare-apis Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/import-data.md
Content-Type:application/fhir+json
| -- | -- | -- | -- | | `inputFormat`| String that represents the name of the data source format. Only FHIR NDJSON files are supported. | 1..1 | `application/fhir+ndjson` | | `mode`| Import mode value. | 1..1 | For an initial-mode import, use the `InitialLoad` mode value. For incremental-mode import, use the `IncrementalLoad` mode value. If you don't provide a mode value, the `IncrementalLoad` mode value is used by default. |
+| `allowNegativeVersions`| Allows the FHIR server to assign negative versions to resource records that have an explicit `lastUpdated` value and no version specified, when the input doesn't fit in the contiguous space of positive versions that exist in the store. | 0..1 | To enable this feature, pass `true`. The default is `false`. |
| `input`| Details of the input files. | 1..* | A JSON array with the three parts described in the following table. | + | Input part name | Description | Cardinality | Accepted values | | -- | -- | -- | -- | | `type`| Resource type of the input file. | 0..1 | A valid [FHIR resource type](https://www.hl7.org/fhir/resourcelist.html) that matches the input file. This field is optional.|
Content-Type:application/fhir+json
}, { "name": "mode",
- "valueString": "<Use "InitialLoad" for initial mode import / Use "IncrementalLoad" for incremental mode import>",
+ "valueString": "<Use "InitialLoad" for initial mode import / Use "IncrementalLoad" for incremental mode import>"
+ },
+ {
+ "name": "allowNegativeVersions",
+ "valueBoolean": true
}, { "name": "input",
healthcare-apis Release Notes 2024 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes-2024.md
It's possible for dates supplied within JSON data to be returned in a different
The coercion of strings to .NET DateTime objects can be disabled using the boolean parameter `jsonDeserializationTreatDatesAsStrings`. When set to `true`, the supplied data is treated as a string and won't be modified before being supplied to the Liquid engine.
+#### Import Operation enhancement
+The FHIR service now allows ingestion of data without specifying a version at the resource level. The order of resources is maintained using the `lastUpdated` value. This enhancement introduces the `allowNegativeVersions` flag. Setting the flag to `true` allows the FHIR service to assign negative versions for resource records with an explicit `lastUpdated` value and no version specified.
+
+#### Bug Fixes
+- **Fixed inclusion of soft-deleted resources when using the `_security:not` search parameter**
+When using the `_security:not` search parameter in search operations, IDs for soft-deleted resources were included in the search results. We fixed the issue so that soft-deleted resources are now excluded from search results.
+- **Exporting Data as SMART User**
+Exporting data as a SMART user no longer requires write scopes. Previously, it was necessary to grant "write" privileges to a SMART user for exporting data, which implied higher privilege levels. To initiate an export job as a SMART user, ensure the user is a member of the FHIR export role in RBAC and requests the "read" SMART clinical scope.
+- **Updating Status Code from HTTP 500 to HTTP 400**
+During a patch operation, if the payload requested an update for a resource type other than parameter, an internal server error (HTTP 500) was initially thrown. This has been updated to throw an HTTP 400 error instead.
+
+#### Performance enhancement
+Query optimization was added for searching FHIR resources with a date range. This optimization generates one combined CTE, which makes querying more efficient.
+ ## May 2024 ### Azure Health Data Services
load-balancer Load Balancer Custom Probe Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-custom-probe-overview.md
If you don't allow the [source IP](#probe-source-ip-address) of the probe in you
## Limitations
-* HTTPS probes don't support mutual authentication with a client certificate.
+* HTTPS probes don't support mutual authentication with a client certificate.
+
+* HTTP probes don't support using hostnames to probe backends.
* Enabling TCP timestamps can cause throttling or other performance issues, which can then cause health probes to time out. * A Basic SKU load balancer health probe isn't supported with a virtual machine scale set.
+* HTTP probes don't support probing on the following ports due to security concerns: 19, 21, 25, 70, 110, 119, 143, 220, 993.
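For context, a minimal Azure CLI sketch of creating an HTTP health probe (the resource names are placeholders, not from this article):

```azurecli
az network lb probe create \
  --resource-group my-rg \
  --lb-name my-lb \
  --name my-http-probe \
  --protocol http \
  --port 80 \
  --path /healthz \
  --interval 15
```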
## Next steps
logic-apps Add Run Csharp Scripts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/add-run-csharp-scripts.md
Title: Add and run C# scripts in Standard workflows description: Write and run C# scripts inline from Standard workflows to perform custom integration tasks using Inline Code operations in Azure Logic Apps.-+ ms.suite: integration
logic-apps Create Maps Data Transformation Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-maps-data-transformation-visual-studio-code.md
Title: Create maps for data transformation description: Create maps to transform data between schemas in Azure Logic Apps using Visual Studio Code. -+ ms.suite: integration
logic-apps Create Workflow With Trigger Or Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-workflow-with-trigger-or-action.md
Title: Create a workflow with a trigger or action description: Start building your workflow by adding a trigger or an action in Azure Logic Apps. -+ ms.suite: integration
logic-apps Create Integration Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/enterprise-integration/create-integration-account.md
Title: Create and manage integration accounts description: Create and manage integration accounts for building B2B enterprise integration workflows in Azure Logic Apps with the Enterprise Integration Pack. -+ ms.suite: integration
logic-apps Monitor Logic Apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/monitor-logic-apps-overview.md
Title: Monitor logic app workflows description: Start here to learn about monitoring workflows in Azure Logic Apps.-+ Last updated 07/11/2024
logic-apps Monitor Logic Apps Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/monitor-logic-apps-reference.md
description: This article contains important reference material you need when yo
Last updated 03/19/2024 -+ # Azure Logic Apps monitoring data reference
logic-apps Plan Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/plan-manage-costs.md
Title: Plan to manage costs for Azure Logic Apps description: Learn how to plan for and manage costs for Azure Logic Apps by using cost analysis in the Azure portal.-+
logic-apps Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Logic Apps
description: Lists Azure Policy Regulatory Compliance controls available for Azure Logic Apps. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Last updated 02/06/2024 -+ # Azure Policy Regulatory Compliance controls for Azure Logic Apps
logic-apps Support Non Unicode Character Encoding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/support-non-unicode-character-encoding.md
Title: Convert non-Unicode encoded text for compatibility description: Handle non-Unicode characters in Azure Logic Apps by converting text payloads to UTF-8 with base64 encoding and Azure Functions.-+ Last updated 01/04/2024
machine-learning Azure Machine Learning Ci Image Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/azure-machine-learning-ci-image-release-notes.md
description: Release notes for the Azure Machine Learning compute instance images -+
machine-learning Azure Machine Learning Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/azure-machine-learning-glossary.md
Title: Azure Machine Learning glossary description: Glossary of terms for the Azure Machine Learning platform. -+
machine-learning Azure Machine Learning Release Notes Cli V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/azure-machine-learning-release-notes-cli-v2.md
Title: CLI (v2) release notes
description: Learn about the latest updates to Azure Machine Learning CLI (v2) -+
machine-learning Classification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference-v2/classification.md
Title: "AutoML Classification"
description: Learn how to use the AutoML Classification component in Azure Machine Learning to create a classifier using ML Table data. -+
machine-learning Component Reference V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference-v2/component-reference-v2.md
Title: "Algorithm & component reference (v2)"
description: Learn about the Azure Machine Learning designer components that you can use to create your own machine learning projects. (v2) -+
machine-learning Forecasting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference-v2/forecasting.md
Title: "AutoML Forecasting Component in Microsoft Azure Machine Learning Design
description: Learn how to use the AutoML Forecasting component in Azure Machine Learning to create a classifier using ML Table data. -+
machine-learning Image Classification Multilabel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference-v2/image-classification-multilabel.md
Title: "AutoML Image Classification Multi-label"
description: Learn how to use the AutoML Image Classification Multi-label component in Azure Machine Learning to create a classifier using ML Table data. -+
machine-learning Image Classification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference-v2/image-classification.md
Title: "AutoML Image Classification"
description: Learn how to use the AutoML Image Classification component in Azure Machine Learning to create a classifier using ML Table data. -+
machine-learning Image Instance Segmentation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference-v2/image-instance-segmentation.md
Title: "AutoML Image Instance Segmentation Component in Microsoft Azure Machine
description: Learn how to use the AutoML Image Instance Segmentation component in Azure Machine Learning to create a classifier using ML Table data. -+
machine-learning Image Object Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference-v2/image-object-detection.md
Title: "AutoML Image Object Detection"
description: Learn how to use the AutoML Image Object Detection component in Azure Machine Learning to create a classifier using ML Table data. -+
machine-learning Regression https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference-v2/regression.md
Title: "AutoML Regression"
description: Learn how to use the AutoML Regression component in Azure Machine Learning to create a classifier using ML Table data. -+
machine-learning Text Classification Multilabel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference-v2/text-classification-multilabel.md
Title: "AutoML Text Multi-label Classification"
description: Learn how to use the AutoML Text Multi-label Classification component in Azure Machine Learning to create a classifier using ML Table data. -+
machine-learning Text Classification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference-v2/text-classification.md
Title: "AutoML Text Classification"
description: Learn how to use the AutoML Text Classification component in Azure Machine Learning to create a classifier using ML Table data. -+
machine-learning Text Ner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference-v2/text-ner.md
Title: "AutoML Text NER (Named Entry Recognition)"
description: Learn how to use the AutoML Text NER component in Azure Machine Learning to create a classifier using ML Table data. -+
machine-learning Add Columns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/add-columns.md
Title: "Add Columns: Component Reference"
description: Learn how to use the Add Columns component in the drag-and-drop Azure Machine Learning designer to concatenate two datasets. -+
machine-learning Add Rows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/add-rows.md
Title: "Add Rows: Component Reference"
description: Learn how to use the Add Rows component in Azure Machine Learning designer to concatenate two datasets. -+
machine-learning Apply Image Transformation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/apply-image-transformation.md
Title: "Apply Image Transformation"
description: Learn how to use the Apply Image Transformation component to apply an image transformation to a image directory. -+
machine-learning Apply Math Operation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/apply-math-operation.md
Title: "Apply Math Operation"
description: Learn how to use the Apply Math Operation component in Azure Machine Learning to apply a mathematical operation to column values in a pipeline. -+
machine-learning Apply Sql Transformation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/apply-sql-transformation.md
Title: "Apply SQL Transformation"
description: Learn how to use the Apply SQL Transformation component in Azure Machine Learning to run a SQLite query on input datasets to transform the data. -+
machine-learning Apply Transformation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/apply-transformation.md
Title: "Apply Transformation: Component Reference"
description: Learn how to use the Apply Transformation component in Azure Machine Learning to modify an input dataset based on a previously computed transformation. -+
machine-learning Assign Data To Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/assign-data-to-clusters.md
Title: "Assign Data to Cluster: Component Reference"
description: Learn how to use the Assign Data to Cluster component in Azure Machine Learning to score clustering model. -+
machine-learning Boosted Decision Tree Regression https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/boosted-decision-tree-regression.md
Title: "Boosted Decision Tree Regression: Component Reference"
description: Learn how to use the Boosted Decision Tree Regression component in Azure Machine Learning to create an ensemble of regression trees using boosting. -+
machine-learning Clean Missing Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/clean-missing-data.md
Title: "Clean Missing Data: Component Reference"
description: Learn how to use the Clean Missing Data component in Azure Machine Learning to remove, replace, or infer missing values. -+
machine-learning Clip Values https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/clip-values.md
Title: "Clip Values"
description: Learn how to use the Clip Values component in Azure Machine Learning to detect outliers and clip or replace their values. -+
machine-learning Component Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/component-reference.md
Title: "Algorithm & component reference"
description: Learn about the Azure Machine Learning designer components that you can use to create your own machine learning projects. -+
machine-learning Convert To Csv https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/convert-to-csv.md
Title: "Convert to CSV: Component Reference"
description: Learn how to use the Convert to CSV component in Azure Machine Learning designer to convert a dataset into a CSV file that can be reused later. -+
machine-learning Convert To Dataset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/convert-to-dataset.md
Title: "Convert to Dataset: Component reference"
description: Learn how to use the Convert to Dataset component in Azure Machine Learning designer to convert data input to the internal dataset format. -+
machine-learning Convert To Image Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/convert-to-image-directory.md
Title: "Convert to Image Directory"
description: Learn how to use the Convert to Image Directory component to Convert dataset to image directory format. -+
machine-learning Convert To Indicator Values https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/convert-to-indicator-values.md
Title: "Convert to Indicator Values"
description: Use the Convert to Indicator Values component in Azure Machine Learning designer to convert categorical columns into a series of binary indicator columns. -+
machine-learning Convert Word To Vector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/convert-word-to-vector.md
Title: "Convert Word to Vector: Component reference"
description: Learn how to use three provided Word2Vec models to extract a vocabulary and its corresponding word embeddings from a corpus of text. -+
machine-learning Create Python Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/create-python-model.md
Title: "Create Python Model: Component reference"
description: Learn how to use the Create Python Model component in Azure Machine Learning to create a custom modeling or data processing component. -+
machine-learning Cross Validate Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/cross-validate-model.md
Title: "Cross Validate Model: Component reference"
description: Use the Cross-Validate Model component in Azure Machine Learning designer to cross-validate parameter estimates for classification or regression models. -+
machine-learning Decision Forest Regression https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/decision-forest-regression.md
Title: "Decision Forest Regression: Component Reference"
description: Learn how to use the Decision Forest Regression component in Azure Machine Learning to create a regression model based on an ensemble of decision trees. -+
machine-learning Densenet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/densenet.md
Title: "DenseNet"
description: Learn how to use the DenseNet component in Azure Machine Learning designer to create an image classification model using the DenseNet algorithm. -+
machine-learning Designer Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/designer-error-codes.md
Title: Troubleshoot designer component errors
description: Learn how you can read and troubleshoot automated component error codes in Azure Machine Learning designer. -+
machine-learning Edit Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/edit-metadata.md
Title: "Edit Metadata: Component reference"
description: Learn how to use the Edit Metadata component in the Azure Machine Learning to change metadata that's associated with columns in a dataset. -+
machine-learning Enter Data Manually https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/enter-data-manually.md
Title: "Enter Data Manually: Component reference"
description: Learn how to use the Enter Data Manually component in Azure Machine Learning to create a small dataset by typing values. The dataset can have multiple columns. -+
machine-learning Evaluate Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/evaluate-model.md
Title: "Evaluate Model: Component Reference"
description: Learn how to use the Evaluate Model component in Azure Machine Learning to measure the accuracy of a trained model. -+
machine-learning Evaluate Recommender https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/evaluate-recommender.md
Title: "Evaluate Recommender: Component reference"
description: Learn how to use the Evaluate Recommender component in Azure Machine Learning to evaluate the accuracy of recommender model predictions. -+
machine-learning Execute Python Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/execute-python-script.md
Title: "Execute Python Script: Component reference"
description: Learn how to use the Execute Python Script component in Azure Machine Learning designer to run Python code. -+
machine-learning Execute R Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/execute-r-script.md
Title: "Execute R Script: Component reference"
description: Learn how to use the Execute R Script component in Azure Machine Learning designer to run custom R code. -+
machine-learning Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/export-data.md
Title: "Export Data: Component Reference"
description: Use the Export Data component in Azure Machine Learning designer to save results and intermediate data outside of Azure Machine Learning. -+
machine-learning Extract N Gram Features From Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/extract-n-gram-features-from-text.md
Title: "Extract N-Gram Features from Text component reference"
description: Learn how to use the Extract N-Gram component in the Azure Machine Learning designer to featurize text data. -+
machine-learning Fast Forest Quantile Regression https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/fast-forest-quantile-regression.md
Title: "Fast Forest Quantile Regression: Module reference"
description: Learn how to use the Fast Forest Quantile Regression component to create a regression model that can predict values for a specified number of quantiles. -+
machine-learning Feature Hashing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/feature-hashing.md
Title: "Feature Hashing component reference"
description: Learn how to use the Feature Hashing component in the Azure Machine Learning designer to featurize text data. -+
machine-learning Filter Based Feature Selection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/filter-based-feature-selection.md
Title: "Filter Based Feature Selection: Component reference"
description: Learn how to use the Filter Based Feature Selection component in Azure Machine Learning to identify the features in a dataset with the greatest predictive power. -+
machine-learning Graph Search Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/graph-search-syntax.md
Title: "Graph search query syntax"
description: Learn how to use the search query syntax in Azure Machine Learning designer to search for nodes in a pipeline graph. -+
machine-learning Group Data Into Bins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/group-data-into-bins.md
Title: "Group Data into Bins: Component reference"
description: Learn how to use the Group Data into Bins component to group numbers or change the distribution of continuous data. -+
machine-learning Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/import-data.md
Title: "Import Data: Component Reference"
description: Learn how to use the Import Data component in Azure Machine Learning to load data into a machine learning pipeline from existing cloud data services. -+
machine-learning Init Image Transformation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/init-image-transformation.md
Title: "Init Image Transformationply Image Transformation"
description: Learn how to use the Init Image Transformation component in Azure Machine Learning designer to initialize image transformation. -+
machine-learning Join Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/join-data.md
Title: "Join Data: Component Reference"
description: Learn how to use the Join Data component in Azure Machine Learning designer to merge two datasets together. -+
machine-learning K Means Clustering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/k-means-clustering.md
Title: "K-Means Clustering: Component Reference"
description: Learn how to use the K-Means Clustering component in Azure Machine Learning to train clustering models. -+
machine-learning Latent Dirichlet Allocation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/latent-dirichlet-allocation.md
Title: "Latent Dirichlet Allocation: Component reference"
description: Learn how to use the Latent Dirichlet Allocation component to group otherwise unclassified text into categories. -+
machine-learning Linear Regression https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/linear-regression.md
Title: "Linear Regression: Component Reference"
description: Learn how to use the Linear Regression component in Azure Machine Learning to create a linear regression model for use in a pipeline. -+
machine-learning Multiclass Boosted Decision Tree https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/multiclass-boosted-decision-tree.md
Title: "Multiclass Boosted Decision Tree: Component Reference"
description: Learn how to use the Multiclass Boosted Decision Tree component in Azure Machine Learning to create a classifier using labeled data. -+
machine-learning Multiclass Decision Forest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/multiclass-decision-forest.md
Title: "Multiclass Decision Forest: Component Reference"
description: Learn how to use the Multiclass Decision Forest component in Azure Machine Learning to create a machine learning model based on the *decision forest* algorithm. -+
machine-learning Multiclass Logistic Regression https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/multiclass-logistic-regression.md
Title: "Multiclass Logistic Regression: Component Reference"
description: Learn how to use the Multiclass Logistic Regression component in Azure Machine Learning designer to predict multiple values. -+
machine-learning Multiclass Neural Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/multiclass-neural-network.md
Title: "Multiclass Neural Network: Component Reference"
description: Learn how to use the Multiclass Neural Network component in Azure Machine Learning designer to predict a target that has multi-class values. -+
machine-learning Neural Network Regression https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/neural-network-regression.md
Title: "Neural Network Regression: Component Reference"
description: Learn how to use the Neural Network Regression component in Azure Machine Learning to create a regression model using a customizable neural network algorithm. -+
machine-learning Normalize Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/normalize-data.md
Title: "Normalize Data: Component Reference"
description: Learn how to use the Normalize Data component in Azure Machine Learning to transform a dataset through *normalization*. -+
machine-learning One Vs All Multiclass https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/one-vs-all-multiclass.md
Title: "One-vs-All Multiclass"
description: Learn how to use the One-vs-All Multiclass component in Azure Machine Learning designer to create an ensemble of binary classification models. -+
machine-learning One Vs One Multiclass https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/one-vs-one-multiclass.md
Title: "One-vs-One Multiclass"
description: Learn how to use the One-vs-One Multiclass component in Azure Machine Learning to create a multiclass classification model from an ensemble of binary classification models. -+
machine-learning Partition And Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/partition-and-sample.md
Title: "Partition and Sample: Component reference"
description: Learn how to use the Partition and Sample component in Azure Machine Learning to perform sampling on a dataset or to create partitions from your dataset. -+
machine-learning Pca Based Anomaly Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/pca-based-anomaly-detection.md
Title: "PCA-Based Anomaly Detection: Component reference"
description: Learn how to use the PCA-Based Anomaly Detection component to create an anomaly detection model based on principal component analysis (PCA). -+
machine-learning Permutation Feature Importance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/permutation-feature-importance.md
Title: "Permutation Feature Importance: Component reference"
description: Learn how to use the Permutation Feature Importance component in the designer to compute the permutation feature importance scores of feature variables. -+
machine-learning Poisson Regression https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/poisson-regression.md
Title: "Poisson Regression: Component reference"
description: Learn how to use the Poisson Regression component in Azure Machine Learning designer to create a Poisson regression model. -+
machine-learning Preprocess Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/preprocess-text.md
Title: "Preprocess Text: Component Reference"
description: Learn how to use the Preprocess Text component in Azure Machine Learning designer to clean and simplify text. -+
machine-learning Remove Duplicate Rows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/remove-duplicate-rows.md
Title: "Remove Duplicate Rows: Component Reference"
description: Learn how to use the Remove Duplicate Rows component in Azure Machine Learning to remove potential duplicates from a dataset. -+
machine-learning Resnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/resnet.md
Title: "ResNet"
description: Learn how to create an image classification model in Azure Machine Learning designer using the ResNet algorithm. -+
machine-learning Score Image Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/score-image-model.md
Title: Use the Score Image Model component
description: Learn how to use the Score Image Model component in Azure Machine Learning to generate predictions using a trained image model. -+
machine-learning Score Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/score-model.md
Title: "Score Model: Component Reference"
description: Learn how to use the Score Model component in Azure Machine Learning to generate predictions using a trained classification or regression model. -+
machine-learning Score Svd Recommender https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/score-svd-recommender.md
Title: "Score SVD Recommender: Component reference"
description: Learn how to use the Score SVD Recommender component in Azure Machine Learning to score recommendation predictions for a dataset. -+
machine-learning Score Vowpal Wabbit Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/score-vowpal-wabbit-model.md
Title: "Score Vowpal Wabbit Model" description: Learn how to use the Score Vowpal Wabbit Model component to generate scores for a set of input data, using an existing trained Vowpal Wabbit model. -+
machine-learning Score Wide And Deep Recommender https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/score-wide-and-deep-recommender.md
Title: Use the Score Wide & Deep Recommender component
description: Learn how to use the Score Wide & Deep Recommender component in Azure Machine Learning to score recommendation predictions for a dataset. -+
machine-learning Select Columns In Dataset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/select-columns-in-dataset.md
Title: "Select Columns in Dataset: Component Reference"
description: Learn how to use the Select Columns in Dataset component in Azure Machine Learning to choose a subset of columns to use in downstream operations. -+
machine-learning Select Columns Transform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/select-columns-transform.md
Title: "Select Columns Transform: Component reference"
description: Learn how to use the Select Columns Transform component in Azure Machine Learning designer to perform a select transformation. -+
machine-learning Smote https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/smote.md
Title: "SMOTE"
description: Learn how to use the SMOTE component in Azure Machine Learning to increase the number of low-incidence examples in a dataset by using oversampling. -+
machine-learning Split Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/split-data.md
Title: "Split Data: Component reference"
description: Learn how to use the Split Data component in Azure Machine Learning to divide a dataset into two distinct sets. -+
machine-learning Split Image Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/split-image-directory.md
Title: "Split Image Directory"
description: Learn how to use the Split Image Directory component in the designer to divide the images of an image directory into two distinct sets. -+
machine-learning Summarize Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/summarize-data.md
Title: "Summarize Data"
description: Learn how to use the Summarize Data component in Azure Machine Learning to generate a basic descriptive statistics report for the columns in a dataset. -+
machine-learning Train Anomaly Detection Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/train-anomaly-detection-model.md
Title: "Train Anomaly Detection Model: Component reference"
description: Learn how to use the Train Anomaly Detection Model component to create a trained anomaly detection model. -+
machine-learning Train Clustering Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/train-clustering-model.md
Title: "Train Clustering Model: Component Reference"
description: Learn how to use the Train Clustering Model component in Azure Machine Learning to train clustering models. -+
machine-learning Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/train-model.md
Title: "Train Model: Component Reference"
description: Learn how to use the **Train Model** component in Azure Machine Learning to train a classification or regression model. -+
machine-learning Train Pytorch Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/train-pytorch-model.md
Title: "Train PyTorch Model"
description: Use the Train PyTorch Models component in Azure Machine Learning designer to train models from scratch, or fine-tune existing models. -+
machine-learning Train Svd Recommender https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/train-svd-recommender.md
Title: "Train SVD Recommender: Component Reference"
description: Learn how to use the Train SVD Recommender component in Azure Machine Learning to train a Bayesian recommender by using the SVD algorithm. -+
machine-learning Train Vowpal Wabbit Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/train-vowpal-wabbit-model.md
Title: "Train Vowpal Wabbit Model" description: Learn how to use the Train Vowpal Wabbit Model component to create a machine learning model by using an instance of Vowpal Wabbit.-+
machine-learning Train Wide And Deep Recommender https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/train-wide-and-deep-recommender.md
Title: Use the Train Wide & Deep Recommender component
description: Learn how to use the Train Wide & Deep Recommender component in Azure Machine Learning designer to train a recommendation model. -+
machine-learning Tune Model Hyperparameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/tune-model-hyperparameters.md
Title: "Tune Model Hyperparameters"
description: Use the Tune Model Hyperparameters component in the designer to perform a parameter sweep to tune hyperparameters. -+
machine-learning Two Class Averaged Perceptron https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/two-class-averaged-perceptron.md
Title: "Two-Class Averaged Perceptron: Component Reference"
description: Learn how to use the Two-Class Averaged Perceptron component in the designer to create a binary classifier. -+
machine-learning Two Class Boosted Decision Tree https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/two-class-boosted-decision-tree.md
Title: "Two-Class Boosted Decision Tree: Component Reference"
description: Learn how to use the Two-Class Boosted Decision Tree component in the designer to create a binary classifier. -+
machine-learning Two Class Decision Forest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/two-class-decision-forest.md
Title: "Two-Class Decision Forest: Component Reference"
description: Learn how to use the Two-Class Decision Forest component in Azure Machine Learning to create a machine learning model based on the decision forests algorithm. -+
machine-learning Two Class Logistic Regression https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/two-class-logistic-regression.md
description: Learn how to use the Two-Class Logistic Regression component in Azure Machine Learning to create a binary classifier. -+
machine-learning Two Class Neural Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/two-class-neural-network.md
Title: "Two-Class Neural Network: Component Reference"
description: Learn how to use the Two-Class Neural Network component in Azure Machine Learning to create a binary classifier. -+
machine-learning Two Class Support Vector Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/two-class-support-vector-machine.md
Title: "Two-Class Support Vector Machine: Component Reference"
description: Learn how to use the Two-Class Support Vector Machine component in Azure Machine Learning to create a binary classifier. -+
machine-learning Web Service Input Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/web-service-input-output.md
Title: "Web Service Input/Output: Component reference"
description: Learn how to use the web service components in Azure Machine Learning designer to manage inputs and outputs. -+
machine-learning Concept Azure Machine Learning V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-azure-machine-learning-v2.md
Title: 'How Azure Machine Learning works (v2)'
description: This article gives you a high-level understanding of the resources and assets that make up Azure Machine Learning (v2). -+
machine-learning Concept Component https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-component.md
Title: "What is a component"
description: Use Azure Machine Learning components to build machine learning pipelines. -+
machine-learning Concept Designer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-designer.md
Title: What is Designer (v2)?
description: Learn about the drag-and-drop Designer UI in Machine Learning studio, and how it uses Designer v2 custom components to build and edit machine learning pipelines. -+
machine-learning Concept Endpoint Serverless Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-endpoint-serverless-availability.md
Title: Region availability for models in Serverless API endpoints
description: Learn about the regions where each model is available for deployment in serverless API endpoints. -+ Last updated 05/09/2024
machine-learning Concept Endpoints Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-endpoints-batch.md
Title: What are batch endpoints?
description: Learn how Azure Machine Learning uses batch endpoints to simplify machine learning deployments. -+
machine-learning Concept Endpoints Online Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-endpoints-online-auth.md
Title: Authentication for managed online endpoints
description: Learn how authentication works for Azure Machine Learning managed online endpoints. -+
machine-learning Concept Endpoints Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-endpoints-online.md
Title: What are online endpoints?
description: Learn about online endpoints for real-time inference in Azure Machine Learning. -+
machine-learning Concept Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-endpoints.md
Title: Endpoints for inference
description: Learn how Azure Machine Learning endpoints simplify deployments. -+
machine-learning Concept Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-environments.md
Title: About Azure Machine Learning environments
description: Learn about machine learning environments, which enable reproducible, auditable, & portable machine learning dependency definitions for various compute targets. -+
machine-learning Concept Expressions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-expressions.md
Title: 'SDK and CLI v2 expressions'
description: SDK and CLI v2 use expressions when a value may not be known when authoring a job or component. -+
machine-learning Concept Hub Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-hub-workspace.md
Title: 'What are hub workspaces?'
description: Hubs provide a central way to govern security, connectivity, and compute resources for a team with multiple workspaces. Project workspaces that are created using a hub obtain the same security settings and shared resource access. -+
machine-learning Concept Onnx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-onnx.md
Title: 'ONNX models: Optimize inference'
description: Learn how using the Open Neural Network Exchange (ONNX) can help optimize the inference of your machine learning model. -+
machine-learning Concept Prebuilt Docker Images Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-prebuilt-docker-images-inference.md
Title: Prebuilt Docker images
description: 'Prebuilt Docker images for inference (scoring) in Azure Machine Learning' -+
machine-learning Concept Retrieval Augmented Generation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-retrieval-augmented-generation.md
-+ Last updated 06/10/2024
machine-learning Concept Secret Injection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-secret-injection.md
Title: What is secret injection in online endpoints (preview)?
description: Learn about secret injection as it applies to online endpoints in Azure Machine Learning. -+
machine-learning Concept Secure Online Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-secure-online-endpoint.md
Title: Network isolation with managed online endpoints
description: Learn how private endpoints provide network isolation for Azure Machine Learning managed online endpoints. -+
machine-learning Concept Soft Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-soft-delete.md
Title: 'Workspace soft deletion'
description: Soft delete allows you to recover workspace data after accidental deletion. -+
machine-learning Concept V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-v2.md
Title: 'Azure Machine Learning CLI & SDK v2'
description: This article explains the difference between the v1 and v2 versions of Azure Machine Learning. -+
machine-learning Concept Vector Stores https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-vector-stores.md
-+ - ignite-2023
machine-learning Concept Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-workspace.md
Title: 'What is a workspace?'
description: The workspace is the top-level resource for Azure Machine Learning. It keeps a history of all training runs, with logs, metrics, output, and a snapshot of your scripts. -+
machine-learning Designer Accessibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/designer-accessibility.md
Title: Use accessibility features in the designer
description: Learn about the keyboard shortcuts and screen reader accessibility features available in the designer. -+
machine-learning How To Access Data Batch Endpoints Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-data-batch-endpoints-jobs.md
Title: "Create jobs and input data for batch endpoints"
description: Learn how to access data from different sources in batch endpoints jobs. -+
machine-learning How To Access Resources From Endpoints Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-resources-from-endpoints-managed-identities.md
Title: Access Azure resources from an online endpoint
description: Securely access Azure resources for your machine learning model deployment from an online endpoint with a system-assigned or user-assigned managed identity. -+
machine-learning How To Attach Kubernetes To Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-attach-kubernetes-to-workspace.md
-+ Last updated 01/18/2024
machine-learning How To Authenticate Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-authenticate-batch-endpoint.md
Title: "Authorization on batch endpoints"
description: Learn how authentication works on Batch Endpoints. -+
machine-learning How To Authenticate Online Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-authenticate-online-endpoint.md
Title: Authenticate clients for online endpoints
description: Learn to authenticate clients for an Azure Machine Learning online endpoint. -+
machine-learning How To Autoscale Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-autoscale-endpoints.md
Title: Autoscale online endpoints
description: Learn to scale up online endpoints. Get more CPU, memory, disk space, and extra features. -+
machine-learning How To Azure Container For Pytorch Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-azure-container-for-pytorch-environment.md
-+
machine-learning How To Batch Scoring Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-batch-scoring-script.md
Title: 'Author scoring scripts for batch deployments'
description: In this article, learn how to author scoring scripts to perform batch inference in batch deployments. -+
machine-learning How To Configure Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-environment.md
description: Set up Azure Machine Learning Python development environments in Ju
-+ Last updated 04/08/2024
machine-learning How To Connect Models Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-connect-models-serverless.md
Title: Consume deployed serverless API endpoints from a different workspace
description: Learn how to consume a serverless API endpoint from a different workspace than the one where it was deployed. -+ Last updated 05/09/2024
machine-learning How To Create Component Pipelines Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-component-pipelines-cli.md
Title: Create and run component-based ML pipelines (CLI)
description: Create and run machine learning pipelines using the Azure Machine Learning CLI. -+
machine-learning How To Create Component Pipelines Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-component-pipelines-ui.md
Title: Create and run component-based ML pipelines (UI)
description: Create and run machine learning pipelines using the Azure Machine Learning studio UI. -+
machine-learning How To Create Vector Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-vector-index.md
-+ Last updated 01/22/2024
machine-learning How To Debug Managed Online Endpoints Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-debug-managed-online-endpoints-visual-studio-code.md
Title: Debug online endpoints locally in Visual Studio Code
description: Learn how to use Visual Studio Code to test and debug online endpoints locally before deploying them to Azure. -+
machine-learning How To Debug Pipeline Failure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-debug-pipeline-failure.md
-+ Last updated 05/23/2024
machine-learning How To Debug Pipeline Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-debug-pipeline-performance.md
-+ Last updated 05/24/2024
machine-learning How To Debug Pipeline Reuse Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-debug-pipeline-reuse-issues.md
Title: Debug pipeline reuse issues in Azure Machine Learning
description: Learn how reuse works in pipelines and how to debug reuse issues. -+
machine-learning How To Deploy Automl Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-automl-endpoint.md
Title: Deploy an AutoML model with an online endpoint
description: Learn to deploy your AutoML model as a web service that's automatically managed by Azure. -+
machine-learning How To Deploy Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-custom-container.md
Title: Deploy a model in a custom container to an online endpoint
description: Learn how to use a custom container with an open-source server to deploy a model in Azure Machine Learning. -+
machine-learning How To Deploy Kubernetes Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-kubernetes-extension.md
-+ Last updated 01/19/2024
machine-learning How To Deploy Mlflow Model Spark Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-model-spark-jobs.md
Title: Deploy and run MLflow models in Spark jobs
description: Learn to deploy your MLflow model in Spark jobs to perform inference. -+
machine-learning How To Deploy Mlflow Models Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models-online-endpoints.md
Title: Deploy MLflow models to real-time endpoints
description: Learn to deploy your MLflow model as a web service that's managed by Azure. -+
machine-learning How To Deploy Mlflow Models Online Progressive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models-online-progressive.md
Title: Progressive rollout of MLflow models to Online Endpoints
description: Learn to deploy your MLflow model progressively using MLflow SDK. -+
machine-learning How To Deploy Model Custom Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-model-custom-output.md
Title: "Customize outputs in batch deployments"
description: Learn how to create deployments that generate custom outputs and files. -+
machine-learning How To Deploy Models Cohere Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-models-cohere-command.md
Title: How to deploy Cohere Command models with Azure Machine Learning studio
description: Learn how to deploy Cohere Command models with Azure Machine Learning studio. -+ Last updated 04/02/2024
machine-learning How To Deploy Models Cohere Embed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-models-cohere-embed.md
Title: How to deploy Cohere Embed models with Azure Machine Learning studio
description: Learn how to deploy Cohere Embed models with Azure Machine Learning studio. -+ Last updated 04/02/2024
machine-learning How To Deploy Models Cohere Rerank https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-models-cohere-rerank.md
Title: How to deploy Cohere Rerank models as serverless APIs
description: Learn to deploy and use Cohere Rerank models with Azure Machine Learning studio. -+ Last updated 07/24/2024
machine-learning How To Deploy Models Mistral https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-models-mistral.md
Title: How to deploy Mistral family of models with Azure Machine Learning studio
description: Learn how to deploy Mistral family of models with Azure Machine Learning studio. -+ Last updated 04/29/2024
machine-learning How To Deploy Models Phi 3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-models-phi-3.md
Title: How to deploy Phi-3 family of small language models with Azure Machine Le
description: Learn how to deploy Phi-3 family of small language models with Azure Machine Learning. -+ Last updated 07/01/2024
machine-learning How To Deploy Models Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-models-serverless.md
Title: Deploy models as serverless APIs
description: Learn to deploy models as serverless APIs, using Azure Machine Learning. -+ Last updated 07/19/2024
machine-learning How To Deploy Models Timegen 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-models-timegen-1.md
Title: How to deploy TimeGEN-1 model with Azure Machine Learning
description: Learn how to deploy TimeGEN-1 with Azure Machine Learning studio. -+ Last updated 5/21/2024
machine-learning How To Deploy Online Endpoint With Secret Injection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-online-endpoint-with-secret-injection.md
Title: Access secrets from online deployment using secret injection (preview)
description: Learn to use secret injection with online endpoint and deployment to access secrets like API keys. -+
machine-learning How To Deploy Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-online-endpoints.md
Title: Deploy machine learning models to online endpoints for inference
description: Learn to deploy your machine learning model as an online endpoint in Azure. -+
machine-learning How To Deploy Pipeline Component As Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-pipeline-component-as-batch-endpoint.md
-+ - ignite-2023
machine-learning How To Deploy With Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-with-rest.md
Title: Deploy models by using online endpoints with REST APIs
description: Learn how to deploy models by using online endpoints with REST APIs, including creation of assets, training jobs, and hyperparameter tuning sweep jobs. -+
machine-learning How To Deploy With Triton https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-with-triton.md
Title: High-performance model serving with Triton
description: 'Learn to deploy your model with NVIDIA Triton Inference Server in Azure Machine Learning.' -+ Last updated 11/09/2023
machine-learning How To Enable Preview Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-enable-preview-features.md
Title: Manage preview features
description: Learn about, and enable, preview features available with Azure Machine Learning. -+
machine-learning How To Image Processing Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-image-processing-batch.md
Title: "Image processing with batch model deployments"
description: Learn how to deploy a model in batch endpoints that process images. -+
machine-learning How To Inference Server Http https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-inference-server-http.md
-+
machine-learning How To Kubernetes Inference Routing Azureml Fe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-kubernetes-inference-routing-azureml-fe.md
-+ Last updated 02/05/2024
machine-learning How To Launch Vs Code Remote https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-launch-vs-code-remote.md
Title: 'Launch Visual Studio Code integrated with Azure Machine Learning (previe
description: Connect to an Azure Machine Learning compute instance in Visual Studio Code to run interactive Jupyter Notebook and remote development workloads. -+
machine-learning How To Manage Environments In Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-environments-in-studio.md
Title: Manage environments in the studio
description: Learn how to create and manage environments in the Azure Machine Learning studio. -+
machine-learning How To Manage Environments V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-environments-v2.md
Title: 'Manage Azure Machine Learning environments with the CLI & SDK (v2)'
description: Learn how to manage Azure Machine Learning environments using the Python SDK and the Azure CLI extension for Machine Learning. -+
machine-learning How To Manage Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-files.md
-+ Last updated 03/25/2024
machine-learning How To Manage Hub Workspace Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-hub-workspace-portal.md
Title: Manage hub workspaces in portal
description: Learn how to manage Azure Machine Learning hub workspaces in the Azure portal. -+
machine-learning How To Manage Inputs Outputs Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-inputs-outputs-pipeline.md
Title: Manage inputs and outputs of a pipeline
description: How to manage inputs and outputs of components and pipelines in Azure Machine Learning. -+
machine-learning How To Manage Kubernetes Instance Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-kubernetes-instance-types.md
-+ Last updated 01/09/2024
machine-learning How To Manage Models Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-models-mlflow.md
-+ Last updated 06/08/2022
machine-learning How To Manage Optimize Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-optimize-cost.md
-+ Last updated 08/01/2023
machine-learning How To Manage Resources Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-resources-vscode.md
-+ Last updated 01/15/2024
machine-learning How To Manage Workspace Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace-cli.md
Title: Create workspaces with Azure CLI
description: Learn how to use the Azure CLI machine learning extension to create and manage Azure Machine Learning workspaces. -+
machine-learning How To Manage Workspace Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace-powershell.md
Title: Create workspaces with Azure PowerShell
description: Learn how to use the Azure PowerShell module to create and manage a new Azure Machine Learning workspace. -+
machine-learning How To Manage Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace.md
Title: Manage workspaces in portal or Python SDK (v2)
description: Learn how to manage Azure Machine Learning workspaces in the Azure portal or with the SDK for Python (v2). -+
machine-learning How To Migrate From V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-migrate-from-v1.md
Title: 'Upgrade from v1 to v2'
description: Upgrade from v1 to v2 of Azure Machine Learning REST APIs, CLI extension, and Python SDK. -+
machine-learning How To Mlflow Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-mlflow-batch.md
Title: Deploy MLflow models in batch deployments
description: Learn how to deploy MLflow models in batch deployments. -+
machine-learning How To Nlp Processing Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-nlp-processing-batch.md
Title: "Deploy and run language models in batch endpoints"
description: Learn how to use batch deployments to process text with large language models. -+
machine-learning How To Package Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-package-models.md
Before following the steps in this article, make sure you have the following pre
* Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure Machine Learning. To perform the steps in this article, your user account must be assigned the owner or contributor role for the Azure Machine Learning workspace, or a custom role. For more information, see [Manage access to an Azure Machine Learning workspace](how-to-assign-roles.md).
+* A model to package. This example uses an MLflow model registered in the workspace (see the sketch after this list).
+
+ > [!CAUTION]
+ > Model packaging is not supported for models in the Azure AI model catalog, including large language models. Models in the Azure AI model catalog are optimized for inference on Azure AI deployment targets and are not suitable for packaging.
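
To make that prerequisite concrete, here is a minimal sketch of packaging a registered MLflow model with the Azure Machine Learning Python SDK (v2). It assumes an `azure-ai-ml` version with model packaging support (`ModelPackage`, `AzureMLOnlineInferencingServer`); the workspace identifiers, environment name, and model name/version are placeholders, not values from this article.

```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient
from azure.ai.ml.entities import AzureMLOnlineInferencingServer, ModelPackage

# Connect to the workspace. All identifiers below are placeholders.
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<WORKSPACE_NAME>",
)

# Describe the package: bundle the model with the default
# Azure Machine Learning inferencing server into a new environment.
package_config = ModelPackage(
    target_environment="<PACKAGED_ENVIRONMENT_NAME>",
    inferencing_server=AzureMLOnlineInferencingServer(),
)

# Package a registered MLflow model; the name and version are illustrative.
ml_client.models.package("<MODEL_NAME>", "<MODEL_VERSION>", package_config)
```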
## About this example
machine-learning How To R Deploy R Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-r-deploy-r-model.md
Title: Deploy a registered R model to an online (real time) endpoint
description: 'Learn how to deploy your R model to an online (real-time) managed endpoint' -+ Last updated 01/12/2023
machine-learning How To R Interactive Development https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-r-interactive-development.md
Title: Use R interactively on Azure Machine Learning
description: 'Learn how to work with R interactively on Azure Machine Learning' -+ Last updated 06/01/2023
machine-learning How To R Modify Script For Production https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-r-modify-script-for-production.md
Title: Adapt your R script to run in production
description: 'Learn how to modify your existing R scripts to run in production on Azure Machine Learning' -+ Last updated 01/11/2023
machine-learning How To R Overview R Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-r-overview-r-capabilities.md
Title: Bring R workloads into Azure Machine Learning
description: 'Learn how to bring your R workloads into Azure Machine Learning' -+ Last updated 01/12/2023
machine-learning How To R Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-r-train-model.md
Title: Train R models
description: 'Learn how to train your machine learning model with R for use in Azure Machine Learning.' -+ Last updated 03/22/2024
machine-learning How To Run Jupyter Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-run-jupyter-notebooks.md
-+
machine-learning How To Search Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-search-assets.md
Title: Search for assets
description: Find your Azure Machine Learning assets with search. -+
machine-learning How To Secure Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-batch-endpoint.md
Title: "Network isolation in batch endpoints"
description: Learn how to deploy Batch Endpoints in private networks with isolation. -+
machine-learning How To Secure Kubernetes Inferencing Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-kubernetes-inferencing-environment.md
-+ Last updated 08/31/2022
machine-learning How To Secure Rag Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-rag-workflows.md
-+ Last updated 09/12/2023
machine-learning How To Setup Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-vs-code.md
-+ Last updated 01/16/2024
machine-learning How To Track Experiments Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-track-experiments-mlflow.md
-+ Last updated 03/20/2024
machine-learning How To Train Mlflow Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-mlflow-projects.md
-+ Last updated 11/04/2022
machine-learning How To Troubleshoot Batch Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-batch-endpoints.md
Title: Troubleshoot batch endpoints
description: Learn how to troubleshoot and diagnose errors with batch endpoints jobs, including examining logs for scoring jobs and solution steps for common issues. -+
machine-learning How To Troubleshoot Kubernetes Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-kubernetes-compute.md
-+ Last updated 02/11/2024
machine-learning How To Troubleshoot Kubernetes Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-kubernetes-extension.md
-+ Last updated 03/10/2024
machine-learning How To Troubleshoot Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-online-endpoints.md
Title: Troubleshooting online endpoints deployment
description: Learn how to troubleshoot some common deployment and scoring errors with online endpoints. -+
machine-learning How To Troubleshoot Protobuf Descriptor Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-protobuf-descriptor-error.md
Title: "Troubleshoot `descriptors cannot not be created directly`"
description: Troubleshooting steps when you get the "descriptors cannot not be created directly" message. -+
machine-learning How To Troubleshoot Validation For Schema Failed Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-validation-for-schema-failed-error.md
Title: Troubleshoot Validation For Schema Failed Error
description: Troubleshooting steps when you get the "Validation for schema failed" error message in the Azure Machine Learning v2 CLI. -+
machine-learning How To Use Batch Azure Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-azure-data-factory.md
Title: "Run batch endpoints from Azure Data Factory"
description: Learn how to use Azure Data Factory to invoke Batch Endpoints. -+
machine-learning How To Use Batch Fabric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-fabric.md
Title: "Consume models deployed in Azure Machine Learning from Fabric, using bat
description: Learn to consume an Azure Machine Learning batch model deployment while working in Microsoft Fabric. -+
machine-learning How To Use Batch Model Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-model-deployments.md
Title: 'Deploy models for scoring in batch endpoints'
description: In this article, learn how to create a batch endpoint to continuously batch score large data. -+
machine-learning How To Use Batch Model Openai Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-model-openai-embeddings.md
Title: 'Run OpenAI models in batch endpoints'
description: In this article, learn how to use batch endpoints with OpenAI models. -+
machine-learning How To Use Batch Pipeline Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-pipeline-deployments.md
Title: "Deploy pipelines with batch endpoints"
description: Learn how to deploy a pipeline component to a batch endpoint and invoke it. -+
machine-learning How To Use Batch Pipeline From Job https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-pipeline-from-job.md
Title: How to deploy existing pipeline jobs to a batch endpoint
description: Learn how to create a pipeline component deployment for Batch Endpoints. -+
machine-learning How To Use Batch Scoring Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-scoring-pipeline.md
Title: "Operationalize a scoring pipeline on batch endpoints"
description: Learn how to operationalize a pipeline that performs batch scoring with preprocessing. -+
machine-learning How To Use Batch Training Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-training-pipeline.md
Title: "Operationalize a training pipeline on batch endpoints"
description: Learn how to deploy a training pipeline under a batch endpoint. -+
machine-learning How To Use Event Grid Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-event-grid-batch.md
Title: "Run batch endpoints from Event Grid events in storage"
description: Learn how to automatically trigger batch endpoints when new files are generated in storage. -+
machine-learning How To Use Low Priority Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-low-priority-batch.md
Title: "Using low priority VMs in batch deployments"
description: Learn how to use low priority VMs to save costs when running batch jobs. -+
machine-learning How To Use Mlflow Azure Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-azure-databricks.md
description: Set up MLflow with Azure Machine Learning to log metrics and artif
-+ Last updated 07/01/2022
machine-learning How To Use Mlflow Azure Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-azure-synapse.md
description: Set up MLflow with Azure Machine Learning to log metrics and artif
-+ Last updated 07/06/2022
machine-learning How To Use Pipelines Prompt Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-pipelines-prompt-flow.md
-+ Last updated 06/20/2024
machine-learning How To Use Retrieval Augmented Generation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-retrieval-augmented-generation.md
-+ Last updated 06/26/2024
machine-learning How To Use Serverless Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-serverless-compute.md
Title: Model training on serverless compute
description: You no longer need to create your own compute cluster to train your model in a scalable way. You can now use a compute cluster that Azure Machine Learning has made available for you. -+ - build-2023
machine-learning How To View Online Endpoints Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-view-online-endpoints-costs.md
Title: View costs for managed online endpoints
description: 'Learn how to view costs for a managed online endpoint in Azure Machine Learning.' -+
machine-learning How To Work In Vs Code Remote https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-work-in-vs-code-remote.md
Title: 'Work in VS Code remotely connected to a compute instance (preview)'
description: Details for working with Jupyter notebooks and services from a VS Code remote connection to an Azure Machine Learning compute instance. -+
machine-learning Application Sharing Policy Not Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/known-issues/application-sharing-policy-not-supported.md
description: Configuring the applicationSharingPolicy property for a compute ins
-+ Last updated 08/14/2023
machine-learning Azure Machine Learning Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/known-issues/azure-machine-learning-known-issues.md
description: Identify issues that are affecting Azure Machine Learning features.
-+ Last updated 08/04/2023
machine-learning Compute A10 Sku Not Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/known-issues/compute-a10-sku-not-supported.md
description: While trying to create a compute instance with A10 SKU, users encou
-+ Last updated 08/14/2023
machine-learning Compute Idleshutdown Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/known-issues/compute-idleshutdown-bicep.md
description: When creating an Azure Machine Learning compute instance through Bi
-+ Last updated 08/04/2023
machine-learning Compute Slowness Terminal Mounted Path https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/known-issues/compute-slowness-terminal-mounted-path.md
description: While using the compute instance terminal inside a mounted path of
-+ Last updated 08/04/2023
machine-learning Inferencing Invalid Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/known-issues/inferencing-invalid-certificate.md
description: During machine learning deployments with an AKS cluster, you may re
-+ Last updated 08/04/2023
machine-learning Inferencing Updating Kubernetes Compute Appears To Succeed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/known-issues/inferencing-updating-kubernetes-compute-appears-to-succeed.md
description: Updating a Kubernetes attached compute instance using the az ml att
-+ Last updated 08/04/2023
machine-learning Jupyter R Kernel Not Starting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/known-issues/jupyter-r-kernel-not-starting.md
description: When trying to launch an R kernel in JupyterLab or a notebook in a
-+ Last updated 08/14/2023
machine-learning Workspace Move Compute Instance Same Name https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/known-issues/workspace-move-compute-instance-same-name.md
description: After moving a workspace to a different subscription or resource gr
-+ Last updated 08/14/2023
machine-learning Migrate To V2 Assets Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-assets-model.md
Title: Upgrade model management to SDK v2
description: Upgrade model management from v1 to v2 of Azure Machine Learning SDK -+
machine-learning Migrate To V2 Command Job https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-command-job.md
Title: 'Upgrade script run to SDK v2'
description: Upgrade how to run a script from SDK v1 to SDK v2 -+
machine-learning Migrate To V2 Deploy Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-deploy-endpoints.md
Title: Upgrade deployment endpoints to SDK v2
description: Upgrade deployment endpoints from v1 to v2 of Azure Machine Learning SDK -+
machine-learning Migrate To V2 Deploy Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-deploy-pipelines.md
Title: Upgrade pipeline endpoints to SDK v2
description: Upgrade pipeline endpoints from v1 to v2 of Azure Machine Learning SDK. -+
machine-learning Migrate To V2 Execution Automl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-execution-automl.md
Title: Upgrade AutoML to SDK v2
description: Upgrade AutoML from v1 to v2 of Azure Machine Learning SDK -+
machine-learning Migrate To V2 Execution Hyperdrive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-execution-hyperdrive.md
Title: Upgrade hyperparameter tuning to SDK v2
description: Upgrade hyperparameter tuning from v1 to v2 of Azure Machine Learning SDK -+
machine-learning Migrate To V2 Execution Parallel Run Step https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-execution-parallel-run-step.md
Title: Upgrade parallel run step to SDK v2
description: Upgrade parallel run step from v1 to v2 of Azure Machine Learning SDK -+
machine-learning Migrate To V2 Execution Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-execution-pipeline.md
Title: Upgrade pipelines to SDK v2
description: Upgrade pipelines from v1 to v2 of Azure Machine Learning SDK -+
machine-learning Migrate To V2 Local Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-local-runs.md
Title: Upgrade local runs to SDK v2
description: Upgrade local runs from v1 to v2 of Azure Machine Learning SDK -+
machine-learning Migrate To V2 Managed Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-managed-online-endpoints.md
Title: Upgrade steps for Azure Container Instances web services to managed onlin
description: Upgrade steps for Azure Container Instances web services to managed online endpoints in Azure Machine Learning -+
machine-learning Migrate To V2 Resource Datastore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-resource-datastore.md
Title: Upgrade datastore management to SDK v2
description: Upgrade datastore management from v1 to v2 of Azure Machine Learning SDK -+
machine-learning Migrate To V2 Resource Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-resource-workspace.md
Title: Upgrade workspace management to SDK v2
description: Upgrade workspace management from v1 to v2 of Azure Machine Learning SDK -+
machine-learning Overview What Is Azure Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/overview-what-is-azure-machine-learning.md
Title: What is Azure Machine Learning?
description: 'Azure Machine Learning is a cloud service for accelerating and managing the machine learning project lifecycle: Train and deploy models, and manage MLOps.' -+
machine-learning Concept Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/concept-connections.md
Title: Connections in Azure Machine Learning prompt flow
description: Learn about how in Azure Machine Learning prompt flow, you can utilize connections to effectively manage credentials or secrets for APIs and data sources. -+ - ignite-2023
machine-learning Concept Flows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/concept-flows.md
Title: What are flows in Azure Machine Learning prompt flow
description: Learn about how a flow in prompt flow serves as an executable workflow that streamlines the development of your LLM-based AI application. It provides a comprehensive framework for managing data flow and processing within your application. -+ - ignite-2023
machine-learning Concept Session https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/concept-session.md
Title: Compute session in Azure Machine Learning prompt flow
description: Learn about how in Azure Machine Learning prompt flow, the execution of flows is facilitated by using compute session. -+ - ignite-2023
machine-learning Quickstart Create Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/quickstart-create-resources.md
Title: "Tutorial: Create workspace resources"
description: Create an Azure Machine Learning workspace and cloud resources that can be used to train machine learning models. -+
machine-learning Reference Automated Ml Forecasting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-automated-ml-forecasting.md
Title: 'CLI (v2) Automated ML Forecasting command job YAML schema'
description: Reference documentation for the CLI (v2) Forecasting command job YAML schema. -+
machine-learning Reference Automl Images Cli Classification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-automl-images-cli-classification.md
Title: 'CLI (v2) Automated ML Image Classification job YAML schema'
description: Reference documentation for the CLI (v2) Automated ML Image Classification job YAML schema. -+
machine-learning Reference Automl Images Cli Instance Segmentation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-automl-images-cli-instance-segmentation.md
Title: 'CLI (v2) Automated ML Image Instance Segmentation job YAML schema'
description: Reference documentation for the CLI (v2) Automated ML Image Instance Segmentation job YAML schema. -+
machine-learning Reference Automl Images Cli Multilabel Classification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-automl-images-cli-multilabel-classification.md
Title: 'CLI (v2) Automated ML Image Multi-Label Classification job YAML schema'
description: Reference documentation for the CLI (v2) Automated ML Image Multi-Label Classification job YAML schema. -+
machine-learning Reference Automl Images Cli Object Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-automl-images-cli-object-detection.md
Title: 'CLI (v2) Automated ML Image Object Detection job YAML schema'
description: Reference documentation for the CLI (v2) Automated ML Image Object Detection job YAML schema. -+
machine-learning Reference Automl Nlp Cli Ner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-automl-nlp-cli-ner.md
Title: 'CLI (v2) Automated ML NLP text NER job YAML schema'
description: Reference documentation for the CLI (v2) automated ML NLP text NER job YAML schema. -+
machine-learning Reference Automl Nlp Cli Text Classification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-automl-nlp-cli-text-classification.md
Title: 'CLI (v2) Automated ML text classification job YAML schema'
description: Reference documentation for the CLI (v2) automated ML text classification job YAML schema. -+
machine-learning Reference Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-kubernetes.md
-+
machine-learning Reference Machine Learning Cloud Parity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-machine-learning-cloud-parity.md
Title: Feature availability across cloud regions
description: This article lists feature availability differences between public cloud and the Azure Government, Azure Germany, and Azure operated by 21Vianet regions. -+
machine-learning Reference Managed Online Endpoints Vm Sku List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-managed-online-endpoints-vm-sku-list.md
Title: Managed online endpoints VM SKU list
description: Lists the VM SKUs that can be used for managed online endpoints in Azure Machine Learning. -+
machine-learning Reference Migrate Sdk V1 Mlflow Tracking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-migrate-sdk-v1-mlflow-tracking.md
Title: Migrate logging from SDK v1 to MLflow
description: Comparison of SDK v1 logging APIs and MLflow tracking -+
machine-learning Reference Model Inference Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-model-inference-api.md
Title: Azure AI Model Inference API
description: Learn about how to use the Azure AI Model Inference API -+ Last updated 05/03/2024
machine-learning Reference Model Inference Chat Completions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-model-inference-chat-completions.md
Title: Azure AI Model Inference Chat Completions
description: Reference for Azure AI Model Inference Chat Completions API -+ Last updated 05/03/2024
machine-learning Reference Model Inference Completions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-model-inference-completions.md
Title: Azure AI Model Inference Completions
description: Reference for Azure AI Model Inference Completions API -+ Last updated 05/03/2024
machine-learning Reference Model Inference Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-model-inference-embeddings.md
Title: Azure AI Model Inference Embeddings API
description: Reference for Azure AI Model Inference Embeddings API -+ Last updated 05/03/2024
machine-learning Reference Model Inference Images Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-model-inference-images-embeddings.md
Title: Azure AI Model Inference Image Embeddings
description: Reference for Azure AI Model Inference Image Embeddings API -+ Last updated 05/03/2024
machine-learning Reference Model Inference Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-model-inference-info.md
Title: Azure AI Model Inference Get Info
description: Reference for Azure AI Model Inference Get Info API -+ Last updated 05/03/2024
machine-learning Reference Yaml Component Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-component-command.md
Title: 'CLI (v2) command component YAML schema'
description: Reference documentation for the CLI (v2) command component YAML schema. -+
machine-learning Reference Yaml Component Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-component-pipeline.md
Title: 'CLI (v2) pipeline component YAML schema'
description: Reference documentation for the CLI (v2) pipeline component YAML schema. -+
machine-learning Reference Yaml Component Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-component-spark.md
Title: 'CLI (v2) Spark component YAML schema'
description: Reference documentation for the CLI (v2) Spark component YAML schema. -+
machine-learning Reference Yaml Compute Aml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-compute-aml.md
Title: 'CLI (v2) compute cluster (AmlCompute) YAML schema'
description: Reference documentation for the CLI (v2) compute cluster (AmlCompute) YAML schema. -+
machine-learning Reference Yaml Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-compute-instance.md
Title: 'CLI (v2) compute instance YAML schema'
description: Reference documentation for the CLI (v2) compute instance YAML schema. -+
machine-learning Reference Yaml Compute Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-compute-kubernetes.md
Title: 'CLI (v2) Attached Kubernetes cluster (KubernetesCompute) YAML schema'
description: Reference documentation for the CLI (v2) Attached Azure Arc-enabled Kubernetes cluster (KubernetesCompute) YAML schema. -+
machine-learning Reference Yaml Compute Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-compute-vm.md
Title: 'CLI (v2) attached Virtual Machine YAML schema'
description: Reference documentation for the CLI (v2) attached Virtual Machine schema. -+
machine-learning Reference Yaml Connection Ai Content Safety https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-connection-ai-content-safety.md
Title: 'CLI (v2) AI Content Safety connection YAML schema'
description: Reference documentation for the CLI (v2) AI Content Safety connections YAML schema. -+ - build-2024
machine-learning Reference Yaml Connection Ai Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-connection-ai-search.md
Title: 'CLI (v2) AI Search connection YAML schema'
description: Reference documentation for the CLI (v2) AI Search connections YAML schema. -+ - build-2024
machine-learning Reference Yaml Connection Ai Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-connection-ai-services.md
Title: 'CLI (v2) AI Services connection YAML schema'
description: Reference documentation for the CLI (v2) Azure AI Services connections YAML schema. -+ - build-2024
machine-learning Reference Yaml Connection Api Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-connection-api-key.md
Title: 'CLI (v2) API key connection YAML schema'
description: Reference documentation for the CLI (v2) API key connections YAML schema. -+ - build-2024
machine-learning Reference Yaml Connection Azure Openai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-connection-azure-openai.md
Title: 'CLI (v2) Azure OpenAI connection YAML schema'
description: Reference documentation for the CLI (v2) Azure OpenAI connections YAML schema. -+ - build-2024
machine-learning Reference Yaml Connection Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-connection-blob.md
Title: 'CLI (v2) blob store connection YAML schema'
description: Reference documentation for the CLI (v2) blob store connections YAML schema. -+ - build-2024
machine-learning Reference Yaml Connection Container Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-connection-container-registry.md
Title: 'CLI (v2) Azure Container Registry connection YAML schema'
description: Reference documentation for the CLI (v2) Azure Container Registry connections YAML schema. -+ - build-2024
machine-learning Reference Yaml Connection Custom Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-connection-custom-key.md
Title: 'CLI (v2) custom key connection YAML schema'
description: Reference documentation for the CLI (v2) custom key connections YAML schema. -+ - build-2024
machine-learning Reference Yaml Connection Data Lake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-connection-data-lake.md
Title: 'CLI (v2) Data Lake Store Gen 2 connection YAML schema'
description: Reference documentation for the CLI (v2) Azure Data Lake Store Gen 2 connections YAML schema. -+ - build-2024
machine-learning Reference Yaml Connection Git https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-connection-git.md
Title: 'CLI (v2) Git connection YAML schema'
description: Reference documentation for the CLI (v2) Git connections YAML schema. -+ - build-2024
machine-learning Reference Yaml Connection Onelake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-connection-onelake.md
Title: 'CLI (v2) OneLake connection YAML schema'
description: Reference documentation for the CLI (v2) OneLake connections YAML schema. -+ - build-2024
machine-learning Reference Yaml Connection Openai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-connection-openai.md
Title: 'CLI (v2) OpenAI connection YAML schema'
description: Reference documentation for the CLI (v2) OpenAI connections YAML schema. -+ - build-2024
machine-learning Reference Yaml Connection Python Feed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-connection-python-feed.md
Title: 'CLI (v2) Python feed connection YAML schema'
description: Reference documentation for the CLI (v2) Python feed connections YAML schema. -+
machine-learning Reference Yaml Connection Serp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-connection-serp.md
Title: 'CLI (v2) Serp connection YAML schema'
description: Reference documentation for the CLI (v2) Serp connections YAML schema. -+ - build-2024
machine-learning Reference Yaml Connection Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-connection-serverless.md
Title: 'CLI (v2) serverless connection YAML schema'
description: Reference documentation for the CLI (v2) serverless connections YAML schema. -+ - build-2024
machine-learning Reference Yaml Connection Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-connection-speech.md
Title: 'CLI (v2) AI Speech Services connection YAML schema'
description: Reference documentation for the CLI (v2) AI Speech Services connections YAML schema. -+ - build-2024
machine-learning Reference Yaml Core Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-core-syntax.md
Title: 'CLI (v2) core YAML syntax'
description: Overview CLI (v2) core YAML syntax. -+
machine-learning Reference Yaml Deployment Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-deployment-batch.md
Title: 'CLI (v2) batch deployment YAML schema'
description: Reference documentation for the CLI (v2) batch deployment YAML schema. -+
machine-learning Reference Yaml Endpoint Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-endpoint-batch.md
Title: 'CLI (v2) batch endpoint YAML schema'
description: Reference documentation for the CLI (v2) batch endpoint YAML schema. -+
machine-learning Reference Yaml Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-environment.md
Title: 'CLI (v2) environment YAML schema'
description: Reference documentation for the CLI (v2) environment YAML schema. -+
machine-learning Reference Yaml Job Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-job-command.md
Title: 'CLI (v2) command job YAML schema'
description: Reference documentation for the CLI (v2) command job YAML schema. -+
machine-learning Reference Yaml Job Parallel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-job-parallel.md
Title: 'CLI (v2) parallel job YAML schema'
description: Reference documentation for the CLI (v2) parallel job YAML schema. -+
machine-learning Reference Yaml Job Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-job-pipeline.md
Title: 'CLI (v2) pipeline job YAML schema'
description: Reference documentation for the CLI (v2) pipeline job YAML schema. -+
machine-learning Reference Yaml Job Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-job-spark.md
Title: 'CLI (v2) Spark job YAML schema'
description: Reference documentation for the CLI (v2) Spark job YAML schema. -+
machine-learning Reference Yaml Job Sweep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-job-sweep.md
Title: 'CLI (v2) sweep job YAML schema'
description: Reference documentation for the CLI (v2) sweep job YAML schema. -+
machine-learning Reference Yaml Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-model.md
Title: 'CLI (v2) model YAML schema'
description: Reference documentation for the CLI (v2) model YAML schema. -+
machine-learning Reference Yaml Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-overview.md
Title: 'CLI (v2) YAML schema overview'
description: Overview and index of CLI (v2) YAML schemas. -+
machine-learning Reference Yaml Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-registry.md
Title: 'CLI (v2) registry YAML schema'
description: Reference documentation for the CLI (v2) registry YAML schema. -+
machine-learning Reference Yaml Schedule Data Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-schedule-data-import.md
Title: 'CLI (v2) schedule YAML schema for data import (preview)'
description: Reference documentation for the CLI (v2) data import schedule YAML schema. -+
machine-learning Reference Yaml Schedule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-schedule.md
Title: 'CLI (v2) schedule YAML schema'
description: Reference documentation for the CLI (v2) job schedule YAML schema. -+
machine-learning Reference Yaml Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-workspace.md
Title: 'CLI (v2) workspace YAML schema'
description: Reference documentation for the CLI (v2) workspace YAML schema. -+
machine-learning Resource Azure Container For Pytorch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/resource-azure-container-for-pytorch.md
-+
machine-learning Resource Curated Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/resource-curated-environments.md
-+ Last updated 09/10/2023
machine-learning Resource Limits Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/resource-limits-capacity.md
-+ Last updated 09/13/2023 ms.metadata: product-dependency
machine-learning Samples Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/samples-notebooks.md
Title: Example Jupyter Notebooks (v2)
description: Learn how to find and use the Jupyter Notebooks designed to help you explore the SDK (v2) and serve as models for your own machine learning projects. -+
machine-learning Tutorial Azure Ml In A Day https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-azure-ml-in-a-day.md
Title: "Quickstart: Get started with Azure Machine Learning"
description: Use Azure Machine Learning to train and deploy a model in a cloud-based Python Jupyter Notebook. -+
machine-learning Tutorial Cloud Workstation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-cloud-workstation.md
Title: "Tutorial: Model development on a cloud workstation"
description: Learn how to get started prototyping and developing machine learning models on an Azure Machine Learning cloud workstation. -+
machine-learning Tutorial Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-deploy-model.md
Title: "Tutorial: Deploy a model"
description: This tutorial covers how to deploy a model to production using Azure Machine Learning Python SDK v2. -+
machine-learning Tutorial Develop Feature Set With Custom Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-develop-feature-set-with-custom-source.md
Title: "Tutorial 5: Develop a feature set with a custom source"
description: This is part 5 of the managed feature store tutorial series -+
machine-learning Tutorial Enable Recurrent Materialization Run Batch Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-enable-recurrent-materialization-run-batch-inference.md
Title: "Tutorial 3: Enable recurrent materialization and run batch inference"
description: This is part of a tutorial series on managed feature store. -+
machine-learning Tutorial Experiment Train Models Using Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-experiment-train-models-using-features.md
Title: "Tutorial 2: Experiment and train models by using features"
description: This is part of a tutorial series about managed feature store. -+
machine-learning Tutorial Explore Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-explore-data.md
Title: "Tutorial: upload, access, and explore your data"
description: Upload data to cloud storage, create an Azure Machine Learning data asset, create new versions for data assets, and use the data for interactive development -+
machine-learning Tutorial Feature Store Domain Specific Language https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-feature-store-domain-specific-language.md
Title: "Tutorial 7: Develop a feature set using Domain Specific Language (previe
description: This is part 7 of the managed feature store tutorial series. -+
machine-learning Tutorial Get Started With Feature Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-get-started-with-feature-store.md
Title: "Tutorial 1: Develop and register a feature set with managed feature stor
description: This is the first part of a tutorial series on managed feature store. -+
machine-learning Tutorial Network Isolation For Feature Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-network-isolation-for-feature-store.md
Title: "Tutorial 6: Network isolation for feature store"
description: This is part 6 of the managed feature store tutorial series -+
machine-learning Tutorial Online Materialization Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-online-materialization-inference.md
Title: "Tutorial 4: Enable online materialization and run online inference"
description: This is a part of a tutorial series on managed feature store. -+
machine-learning Tutorial Pipeline Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-pipeline-python-sdk.md
Title: "Tutorial: ML pipelines with Python SDK v2"
description: Use Azure Machine Learning to create your production-ready ML project in a cloud-based Python Jupyter Notebook using Azure Machine Learning Python SDK v2. -+
machine-learning Tutorial Train Deploy Image Classification Model Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-train-deploy-image-classification-model-vscode.md
Title: "Tutorial: Train image classification model: VS Code (preview)"
description: Learn how to train an image classification model using TensorFlow and the Azure Machine Learning Visual Studio Code Extension -+
machine-learning Tutorial Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-train-model.md
Title: "Tutorial: Train a model"
description: Dive into the process of training a model -+ - build-2023
machine-learning Azure Machine Learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/azure-machine-learning-release-notes.md
Title: Python SDK release notes
description: Learn about the latest updates to Azure Machine Learning Python SDK. -+
machine-learning Concept Azure Machine Learning Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-azure-machine-learning-architecture.md
Title: 'Architecture & key concepts (v1)'
description: This article gives you a high-level understanding of the architecture, terms, and concepts that make up Azure Machine Learning. -+
machine-learning Concept Designer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-designer.md
Title: What is Designer (v1)?
description: Learn about how the drag-and-drop Designer (v1) UI in Azure Machine Learning studio enables model training and deployment tasks. -+
machine-learning Concept Train Machine Learning Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-train-machine-learning-model.md
Title: 'Build & train models (v1)'
description: Learn how to train models with Azure Machine Learning (v1). Explore the different training methods and choose the right one for your project. -+
machine-learning How To Configure Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-configure-environment.md
description: Set up Azure Machine Learning (v1) Python development environments
-+ Last updated 09/30/2022
machine-learning How To Consume Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-consume-web-service.md
az ml service show -n <service-name>
From Azure Machine Learning studio, select __Endpoints__, __Real-time endpoints__, and then the endpoint name. In the details for the endpoint, the __REST endpoint__ field contains the scoring URI, and the __Swagger URI__ field contains the Swagger URI.
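For example, a minimal SDK v1 sketch for retrieving these URIs programmatically (assuming a deployed service named `<service-name>` and a workspace `config.json` in the working directory):

```python
from azureml.core import Workspace
from azureml.core.webservice import Webservice

# Load the workspace from a downloaded config.json
ws = Workspace.from_config()

# Look up the deployed web service and print its URIs
service = Webservice(workspace=ws, name="<service-name>")
print("Scoring URI:", service.scoring_uri)
print("Swagger URI:", service.swagger_uri)
```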
+> [!NOTE]
+> Although you can retrieve the scoring URI, Swagger URI, and other information from Azure Machine Learning studio (UI), the Test tab in Azure Machine Learning studio isn't supported for Azure Container Instances or Azure Kubernetes Service based web services. Instead, use a code-based approach to consume the web service, as described later in this article. To fully utilize the Test tab to test your deployments, consider [migrating to v2 managed online endpoints](../migrate-to-v2-deploy-endpoints.md). For more information, see [endpoints for inferencing](../concept-endpoints.md).
+ The following table shows what these URIs look like:
machine-learning How To Create Attach Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-create-attach-kubernetes.md
Title: Create and attach Azure Kubernetes Service
description: 'Learn how to create a new Azure Kubernetes Service cluster through Azure Machine Learning, or how to attach an existing AKS cluster to your workspace.' -+
machine-learning How To Deploy Local Container Notebook Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-local-container-notebook-vm.md
Title: Deploy models to compute instances
description: 'Learn how to deploy your Azure Machine Learning models as a web service using compute instances.' -+
machine-learning How To Deploy Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-local.md
Title: How to run and deploy locally
description: 'This article describes how to use your local computer as a target for training, debugging, or deploying models created in Azure Machine Learning.' -+
machine-learning How To Deploy Mlflow Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-mlflow-models.md
-+ Last updated 07/29/2024
machine-learning How To Deploy Model Cognitive Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-model-cognitive-search.md
Title: Deploy a model for use with Azure AI Search
description: Learn how to use Azure Machine Learning to deploy a model for use with Azure AI Search. The model is used as a custom skill to enrich the search experience. -+
machine-learning How To Deploy Model Designer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-model-designer.md
Title: Use the studio to deploy models trained in the designer
description: Use Azure Machine Learning studio to deploy machine learning models without writing a single line of code. -+
machine-learning How To Designer Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-designer-python.md
Title: Execute Python Script in the designer
description: Learn how to use the Execute Python Script component in Azure Machine Learning designer to run custom operations written in Python. -+
machine-learning How To Log View Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-log-view-metrics.md
-+ Last updated 10/26/2022
machine-learning How To Manage Workspace Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-manage-workspace-cli.md
Title: Create workspaces with Azure CLI extension v1
description: Learn how to use the Azure CLI extension v1 for machine learning to create a new Azure Machine Learning workspace. -+
machine-learning How To Manage Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-manage-workspace.md
Title: Manage workspaces in portal or Python SDK (v1)
description: Learn how to manage Azure Machine Learning workspaces in the Azure portal or with the SDK for Python (v1). -+
machine-learning How To Save Write Experiment Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-save-write-experiment-files.md
-+ Last updated 05/31/2024
machine-learning How To Track Designer Experiments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-track-designer-experiments.md
-+ Last updated 10/21/2021
machine-learning How To Train Distributed Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-distributed-gpu.md
description: Learn the best practices for performing distributed training with A
-+ Last updated 10/21/2021
machine-learning How To Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-model.md
Title: Train models with the Azure Machine Learning Python SDK (v1) (preview)
+ Title: Train models with the Azure Machine Learning Python SDK (v1)
description: Add compute resources (compute targets) to your workspace to use for machine learning training and inference with SDK v1. -+ Last updated 10/21/2021
Azure Machine Learning also supports attaching an Azure Virtual Machine. The VM
>
> Azure Machine Learning also requires the virtual machine to have a __public IP address__.
-1. **Attach**: To attach an existing virtual machine as a compute target, you must provide the resource ID, user name, and password for the virtual machine. The resource ID of the VM can be constructed using the subscription ID, resource group name, and VM name using the following string format: `/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.Compute/virtualMachines/<vm_name>`
-
- ```python
- from azureml.core.compute import RemoteCompute, ComputeTarget
-
- # Create the compute config
- compute_target_name = "attach-dsvm"
-
- attach_config = RemoteCompute.attach_configuration(resource_id='<resource_id>',
- ssh_port=22,
- username='<username>',
- password="<password>")
-
- # Attach the compute
- compute = ComputeTarget.attach(ws, compute_target_name, attach_config)
-
- compute.wait_for_completion(show_output=True)
- ```
-
- Or you can attach the DSVM to your workspace [using Azure Machine Learning studio](../how-to-create-attach-compute-studio.md#other-compute-targets).
+1. **Attach**: Attach the DSVM to your workspace [using Azure Machine Learning studio](../how-to-create-attach-compute-studio.md#other-compute-targets).
> [!WARNING]
> Do not create multiple, simultaneous attachments to the same DSVM from your workspace. Each new attachment breaks the previous attachment(s).
Azure HDInsight is a popular platform for big-data analytics. The platform provi
After the cluster is created, connect to it with the hostname \<clustername>-ssh.azurehdinsight.net, where \<clustername> is the name that you provided for the cluster.
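As an illustration, a minimal sketch (hedged: it assumes SSH password authentication and the third-party `paramiko` package) for connecting to the cluster head node from Python:

```python
import paramiko

# Connect to the HDInsight head node over SSH (host key checking relaxed for brevity)
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(
    "<clustername>-ssh.azurehdinsight.net",
    username="<ssh-username>",
    password="<ssh-password>",
)

# Run a trivial command to verify the connection
_, stdout, _ = client.exec_command("hostname")
print(stdout.read().decode())
client.close()
```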
-1. **Attach**: To attach an HDInsight cluster as a compute target, you must provide the resource ID, user name, and password for the HDInsight cluster. The resource ID of the HDInsight cluster can be constructed using the subscription ID, resource group name, and HDInsight cluster name using the following string format: `/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.HDInsight/clusters/<cluster_name>`
-
- ```python
- from azureml.core.compute import ComputeTarget, HDInsightCompute
- from azureml.exceptions import ComputeTargetException
-
- try:
- # if you want to connect using SSH key instead of username/password you can provide parameters private_key_file and private_key_passphrase
-
- attach_config = HDInsightCompute.attach_configuration(resource_id='<resource_id>',
- ssh_port=22,
- username='<ssh-username>',
- password='<ssh-pwd>')
- hdi_compute = ComputeTarget.attach(workspace=ws,
- name='myhdi',
- attach_configuration=attach_config)
-
- except ComputeTargetException as e:
- print("Caught = {}".format(e.message))
-
- hdi_compute.wait_for_completion(show_output=True)
- ```
-
- Or you can attach the HDInsight cluster to your workspace [using Azure Machine Learning studio](../how-to-create-attach-compute-studio.md#other-compute-targets).
+1. **Attach**: Attach the HDInsight cluster to your workspace [using Azure Machine Learning studio](../how-to-create-attach-compute-studio.md#other-compute-targets).
> [!WARNING]
> Do not create multiple, simultaneous attachments to the same HDInsight cluster from your workspace. Each new attachment breaks the previous attachment(s).
machine-learning How To Train With Custom Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-with-custom-image.md
Title: Train a model by using a custom Docker image
description: Learn how to use your own Docker images, or curated ones from Microsoft, to train models in Azure Machine Learning. -+
machine-learning How To Troubleshoot Serialization Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-troubleshoot-serialization-error.md
Title: Troubleshoot SerializationError
description: Troubleshooting steps when you get the "cannot import name 'SerializationError'" message. -+
machine-learning How To Tune Hyperparameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-tune-hyperparameters.md
description: Automate hyperparameter tuning for deep learning and machine learni
-+ Last updated 05/30/2024
machine-learning How To Use Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-environments.md
description: Create and manage environments for model training and deployment wi
-+ Last updated 04/19/2022
machine-learning How To Use Private Python Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-private-python-packages.md
-+ Last updated 07/09/2024
machine-learning Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/introduction.md
Title: SDK & CLI (v1)
description: Learn about Azure Machine Learning SDK & CLI (v1). -+
machine-learning Reference Azure Machine Learning Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/reference-azure-machine-learning-cli.md
Title: 'Install and set up the CLI (v1)'
description: Learn how to use the Azure CLI extension (v1) for ML to create & manage resources such as your workspace, datastores, datasets, pipelines, models, and deployments. -+
machine-learning Reference Pipeline Yaml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/reference-pipeline-yaml.md
Title: Machine Learning pipeline YAML (v1)
description: Learn how to define a machine learning pipeline using a YAML file. YAML pipeline definitions are used with the machine learning extension for the Azure CLI (v1). -+
machine-learning Samples Designer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/samples-designer.md
Title: Example pipelines & datasets for the designer
description: Learn how to use samples in Azure Machine Learning designer to jump-start your machine learning pipelines. -+
machine-learning Samples Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/samples-notebooks.md
Title: Example Jupyter Notebooks (v1)
description: Learn how to find and use the Jupyter Notebooks designed to help you explore the SDK (v1) and serve as models for your own machine learning projects. -+
machine-learning Tutorial 1St Experiment Hello World https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-1st-experiment-hello-world.md
Title: 'Tutorial: Get started with a Python script (v1)'
description: Get started with your first Python script in Azure Machine Learning, with SDK v1. This is part 1 of a two-part getting-started series. -+
machine-learning Tutorial Designer Automobile Price Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-designer-automobile-price-deploy.md
-+ Last updated 03/04/2024
machine-learning Tutorial Designer Automobile Price Train Score https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-designer-automobile-price-train-score.md
-+ Last updated 05/10/2022
machine-learning Tutorial Train Deploy Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-train-deploy-notebook.md
Title: "Tutorial: Train and deploy an example in Jupyter Notebook"
description: Use Azure Machine Learning to train and deploy an image classification model with scikit-learn in a cloud-based Python Jupyter Notebook. -+
modeling-simulation-workbench Modeling Simulation Workbench Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/modeling-simulation-workbench-overview.md
Last updated 03/15/2024
# What is Azure Modeling and Simulation Workbench?
-The Azure Modeling and Simulation Workbench is a secure, on-demand service that provides a fully managed engineering design and simulation environment for safe and efficient user collaboration. The service incorporates many infrastructure services required to build a successful environment for engineering development, such as: workload specific VMs, scheduler, orchestration, license server, remote connectivity, high performance storage, network configurations, security, and access controls.
+Azure Modeling and Simulation Workbench is a secure, on-demand service that provides a fully managed engineering design and simulation environment for safe and efficient user collaboration. The service incorporates many of the infrastructure services required to build a successful environment for engineering development, such as workload-specific VMs, a scheduler, orchestration, a license server, remote connectivity, high-performance storage, network configurations, security, and access controls.
- A chamber environment enables primary development teams to onboard their collaborators (customers, partners, ISVs, service/IP providers) for joint analysis/debug activity within the same chamber.
- Multi-layered security and access controls allow users to monitor, scale, and optimize the compute and storage capacity as needed.
mysql Connect Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/connect-azure-cli.md
Last updated 06/18/2024-+
mysql Connect Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/connect-csharp.md
Last updated 06/18/2024-+
mysql Connect Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/connect-nodejs.md
Last updated 06/18/2024-+
mysql Connect Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/connect-php.md
Last updated 06/18/2024-+
mysql Connect Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/connect-python.md
Last updated 06/18/2024-+
mysql How To Deploy On Azure Free Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-deploy-on-azure-free-account.md
Last updated 06/18/2024-+
mysql How To Troubleshoot Common Connection Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-troubleshoot-common-connection-issues.md
Last updated 06/18/2024-+
mysql Quickstart Create Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/quickstart-create-arm-template.md
Last updated 06/18/2024-+
mysql Quickstart Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/quickstart-create-bicep.md
Last updated 06/18/2024-+
mysql Quickstart Create Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/quickstart-create-server-cli.md
Last updated 06/18/2024-+
mysql Quickstart Create Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/quickstart-create-terraform.md
Last updated 06/18/2024-+
mysql Sample Scripts Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/sample-scripts-azure-cli.md
Last updated 06/18/2024-+
mysql Sample Cli Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-audit-logs.md
Last updated 06/18/2024-+
mysql Sample Cli Change Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-change-server-parameters.md
Last updated 06/18/2024-+
mysql Sample Cli Create Connect Private Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-create-connect-private-access.md
Last updated 06/18/2024-+
mysql Sample Cli Create Connect Public Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-create-connect-public-access.md
Last updated 06/18/2024-+
mysql Sample Cli Monitor And Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-monitor-and-scale.md
Last updated 06/18/2024-+
mysql Sample Cli Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-read-replicas.md
Last updated 06/18/2024-+
mysql Sample Cli Restart Stop Start https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-restart-stop-start.md
Last updated 06/18/2024-+
mysql Sample Cli Restore Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-restore-server.md
Last updated 06/18/2024-+
mysql Sample Cli Same Zone Ha https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-same-zone-ha.md
Last updated 06/18/2024-+
mysql Sample Cli Slow Query Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-slow-query-logs.md
Last updated 06/18/2024-+
mysql Sample Cli Zone Redundant Ha https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-zone-redundant-ha.md
Last updated 06/18/2024-+
mysql Tutorial Deploy Springboot On Aks Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-deploy-springboot-on-aks-vnet.md
Last updated 06/18/2024-+
mysql Tutorial Php Database App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-php-database-app.md
Last updated 06/18/2024-+
mysql Tutorial Simple Php Mysql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-simple-php-mysql-app.md
Last updated 06/18/2024-+
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/whats-new.md
This article summarizes new releases and features in the Azure Database for MySQ
> [!NOTE]
> This article references the term slave, which Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+## August 2024
+
+- **Major version upgrade support for Burstable compute tier**
+
+  Azure Database for MySQL now offers major version upgrades for Burstable SKU compute tiers. The service automatically upgrades the compute tier to the General Purpose SKU before performing the upgrade, ensuring sufficient resources, and you can revert to the Burstable SKU after the upgrade. Additional costs may apply. [Learn more](how-to-upgrade.md#perform-a-planned-major-version-upgrade-from-mysql-57-to-mysql-80-using-the-azure-portal-for-burstable-sku-servers)
+
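As a hedged sketch (assuming the Azure CLI is installed and the placeholder resource names are replaced), the upgrade can be initiated from Python by shelling out to `az mysql flexible-server upgrade`:

```python
import subprocess

# Start a planned major version upgrade from MySQL 5.7 to 8.0.
# Resource group and server name are placeholders.
subprocess.run(
    [
        "az", "mysql", "flexible-server", "upgrade",
        "--resource-group", "<resource-group>",
        "--name", "<server-name>",
        "--version", "8",
    ],
    check=True,
)
```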
## July 2024

- **Move from private access (virtual network integrated) network to public access or private link**
operator-nexus Concepts Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/concepts-compute.md
# Azure Operator Nexus compute
-Azure Operator Nexus is built on basic constructs like compute servers, storage appliances, and network fabric devices. These compute servers, also called bare-metal machines (BMMs), represent the physical machines on the rack. They run the CBL-Mariner operating system and provide closed integration support for high-performance workloads.
+Azure Operator Nexus is built on basic constructs like compute servers, storage appliances, and network fabric devices. These compute servers, also called bare-metal machines (BMMs), represent the physical machines on the rack. They run the Azure Linux (formerly CBL-Mariner) operating system and provide closed integration support for high-performance workloads.
These BMMs are deployed as part of the Azure Operator Nexus automation suite. They exist as nodes in a Kubernetes cluster to serve various virtualized and containerized workloads in the ecosystem.
Each BMM in an Azure Operator Nexus instance is represented as an Azure resource
Nonuniform memory access (NUMA) alignment is a technique to optimize performance and resource utilization in multiple-socket servers. It involves aligning memory and compute resources to reduce latency and improve data access within a server system.
-Through the strategic placement of software components and workloads in a NUMA-aware way, Operators can enhance the performance of network functions, such as virtualized routers and firewalls. This placement leads to improved service delivery and responsiveness in their telco cloud environments.
+Through the strategic placement of software components and workloads in a NUMA-aware way, Operators can enhance the performance of network functions, such as virtualized routers and firewalls. This placement leads to improved service delivery and responsiveness in their cloud environments.
By default, all the workloads deployed in an Azure Operator Nexus instance are NUMA aligned.
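As an aside (not specific to Operator Nexus), the NUMA topology of a Linux host can be inspected directly from sysfs; a minimal Python sketch:

```python
from pathlib import Path

# List each NUMA node and the CPUs assigned to it (Linux sysfs layout)
for node in sorted(Path("/sys/devices/system/node").glob("node[0-9]*")):
    cpus = (node / "cpulist").read_text().strip()
    print(f"{node.name}: CPUs {cpus}")
```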
Azure Operator Nexus reserves a small set of CPUs for the host operating system
### Huge page support
-Huge page usage in telco workloads refers to the utilization of large memory pages, typically 2 MB or 1 GB in size, instead of the standard 4-KB pages. This approach helps reduce memory overhead and improves the overall system performance. It reduces the translation look-aside buffer (TLB) miss rate and improves memory access efficiency.
+Huge page usage in workloads refers to the utilization of large memory pages, typically 2 MB or 1 GB in size, instead of the standard 4-KB pages. This approach helps reduce memory overhead and improves the overall system performance. It reduces the translation look-aside buffer (TLB) miss rate and improves memory access efficiency.
-Telco workloads that involve large data sets or intensive memory operations, such as network packet processing, can benefit from huge page usage because it enhances memory performance and reduces memory-related bottlenecks. As a result, users see improved throughput and reduced latency.
+Workloads that involve large data sets or intensive memory operations, such as network packet processing, can benefit from huge page usage because it enhances memory performance and reduces memory-related bottlenecks. As a result, users see improved throughput and reduced latency.
All virtual machines created on Azure Operator Nexus can make use of either 2-MB or 1-GB huge pages, depending on the type of virtual machine.
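For reference, a short sketch (Linux-only assumption) that reports the huge page counters the kernel exposes in `/proc/meminfo`:

```python
# Print the kernel's huge page counters and the default huge page size
with open("/proc/meminfo") as meminfo:
    for line in meminfo:
        if line.startswith(("HugePages_", "Hugepagesize")):
            print(line.rstrip())
```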
operator-nexus Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/overview.md
Here are important points about the architecture:
Here are some key features of Azure Operator Nexus.
-### CBL-Mariner
+### Azure Linux
-Azure Operator Nexus runs Microsoft's own Linux distribution called [CBL-Mariner](https://github.com/microsoft/CBL-Mariner) on the bare-metal hosts in the operator's facilities. The same Linux distribution supports Azure cloud infrastructure and edge services. It includes a small set of core packages by default.
+Azure Operator Nexus runs Microsoft's own Linux distribution called [Azure Linux (formerly CBL-Mariner)](https://github.com/microsoft/azurelinux) on the bare-metal hosts in the operator's facilities. The same Linux distribution supports Azure cloud infrastructure and edge services. It includes a small set of core packages by default.
-CBL-Mariner is a lightweight operating system. It consumes limited system resources and is engineered to be efficient. For example, it has a fast startup time with a small footprint and locked-down packages to reduce the threat landscape.
+Azure Linux is a lightweight operating system. It consumes limited system resources and is engineered to be efficient. For example, it has a fast startup time with a small footprint and locked-down packages to reduce the threat landscape.
When Microsoft identifies a security vulnerability, it makes the latest security patches and fixes available with the goal of fast turnaround time. Running the infrastructure on Linux aligns with NF needs, telecommunication industry trends, and relevant open-source communities.
One important component of the service is the [cluster manager](./howto-cluster-
Azure Operator Nexus includes [network fabric automation](./howto-configure-network-fabric-controller.md), which enables operators to build, operate, and manage carrier-grade network fabrics.
-The reliable and distributed cloud services model supports the operators' telco network functions. Operators can interact with Azure Operator Nexus to provision the network fabric via zero-touch provisioning (ZTP). They can also perform complex network implementations via a workflow-driven API model.
+The reliable and distributed cloud services model supports the operators' network functions. Operators can interact with Azure Operator Nexus to provision the network fabric via zero-touch provisioning (ZTP). They can also perform complex network implementations via a workflow-driven API model.
### Network packet broker
operator-nexus Reference Near Edge Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/reference-near-edge-compute.md
Azure Operator Nexus offers a group of on-premises cloud solutions. One of the on-premises offerings allows telco operators to run the network functions in a near-edge environment.
-In a near-edge environment (also known as an instance), the compute servers (also known as bare-metal machines) represent the physical machines on the rack. They run the CBL-Mariner operating system and provide support for running high-performance workloads.
+In a near-edge environment (also known as an instance), the compute servers (also known as bare-metal machines) represent the physical machines on the rack. They run the Azure Linux operating system and provide support for running high-performance workloads.
<!-- ## Available SKUs
operator-nexus Reference Nexus Kubernetes Cluster Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/reference-nexus-kubernetes-cluster-supported-versions.md
Note the following important changes to make before you upgrade to any of the av
| Kubernetes Version | Version Bundle | Components | OS components | Breaking Changes | Notes |
|--|--|--|--|--|--|
-| 1.25.6 | 1 | Calico v3.24.0<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.5.1 | Azure Linux 2.0 | No breaking changes | |
-| 1.25.6 | 2 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | |
-| 1.25.6 | 3 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | |
-| 1.25.6 | 4 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | Beginning with this version bundle, cluster nodes are Azure Arc-enabled |
-| 1.25.6 | 5 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48<br>Csi-nfs v4.6.0 | Azure Linux 2.0 | No breaking changes | |
-| 1.25.6 | 6 | Calico v3.27.2<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.10.0-60<br>Csi-nfs v4.6.0 | Azure Linux 2.0 | No breaking changes | |
-| 1.25.6 | 7 |Calico v3.27.3<br>metrics-server v0.7.1<br>Multus v4.0.0<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.13<br>sriov-dp v3.11.0-68<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0 | Azure Linux 2.0 | No breaking changes | Beginning with this version bundle, volume orchestration connectivity is TLS encrypted |
-| 1.25.11 | 1 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | |
-| 1.25.11 | 2 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | Beginning with this version bundle, cluster nodes are Azure Arc-enabled |
-| 1.25.11 | 3 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48<br>Csi-nfs v4.6.0 | Azure Linux 2.0 | No breaking changes | |
-| 1.25.11 | 4 | Calico v3.27.2<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.10.0-60<br>Csi-nfs v4.6.0 | Azure Linux 2.0 | No breaking changes | |
-| 1.25.11 | 5 | Calico v3.27.3<br>metrics-server v0.7.1<br>Multus v4.0.0<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.13<br>sriov-dp v3.11.0-68<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0 | Azure Linux 2.0 | No breaking changes | Beginning with this version bundle, volume orchestration connectivity is TLS encrypted |
-| 1.26.3 | 1 | Calico v3.24.0<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.5.1 | Azure Linux 2.0 | No breaking changes | |
-| 1.26.3 | 2 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | |
-| 1.26.3 | 3 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | |
-| 1.26.3 | 4 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | Beginning with this version bundle, cluster nodes are Azure Arc-enabled |
-| 1.26.3 | 5 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48<br>Csi-nfs v4.6.0 | Azure Linux 2.0 | No breaking changes | |
-| 1.26.3 | 6 | Calico v3.27.2<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.10.0-60<br>Csi-nfs v4.6.0 | Azure Linux 2.0 | No breaking changes | |
-| 1.26.3 | 7 | Calico v3.27.3<br>metrics-server v0.7.1<br>Multus v4.0.0<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.13<br>sriov-dp v3.11.0-68<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0 | Azure Linux 2.0 | No breaking changes | Beginning with this version bundle, volume orchestration connectivity is TLS encrypted |
-| 1.26.6 | 1 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | |
-| 1.26.6 | 2 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | Beginning with this version bundle, cluster nodes are Azure Arc-enabled |
-| 1.26.6 | 3 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48<br>Csi-nfs v4.6.0 | Azure Linux 2.0 | No breaking changes | |
-| 1.26.6 | 4 | Calico v3.27.2<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.10.0-60<br>Csi-nfs v4.6.0 | Azure Linux 2.0 | No breaking changes | |
-| 1.26.6 | 5 | Calico v3.27.3<br>metrics-server v0.7.1<br>Multus v4.0.0<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.13<br>sriov-dp v3.11.0-68<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0 | Azure Linux 2.0 | No breaking changes | Beginning with this version bundle, volume orchestration connectivity is TLS encrypted |
-| 1.26.12 | 1 | Calico v3.27.3<br>metrics-server v0.7.1<br>Multus v4.0.0<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.13<br>sriov-dp v3.11.0-68<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0 | Azure Linux 2.0 | No breaking changes | Beginning with this version bundle, volume orchestration connectivity is TLS encrypted and cluster nodes are Azure Arc-enabled |
-| 1.27.1 | 1 | Calico v3.24.0<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.5.1 | Azure Linux 2.0 | Cgroupv2 | Steps to disable cgroupv2 can be found [here](./howto-disable-cgroupsv2.md) |
-| 1.27.1 | 2 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | Cgroupv2 | Steps to disable cgroupv2 can be found [here](./howto-disable-cgroupsv2.md) |
-| 1.27.1 | 3 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | Cgroupv2 | Steps to disable cgroupv2 can be found [here](./howto-disable-cgroupsv2.md) |
-| 1.27.1 | 4 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | Beginning with this version bundle, cluster nodes are Azure Arc-enabled |
-| 1.27.1 | 5 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48<br>Csi-nfs v4.6.0 | Azure Linux 2.0 | No breaking changes | |
-| 1.27.1 | 6 | Calico v3.27.2<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.10.0-60<br>Csi-nfs v4.6.0 | Azure Linux 2.0 | No breaking changes | |
-| 1.27.1 | 7 | Calico v3.27.3<br>metrics-server v0.7.1<br>Multus v4.0.0<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.13<br>sriov-dp v3.11.0-68<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0 | Azure Linux 2.0 | No breaking changes | Beginning with this version bundle, volume orchestration connectivity is TLS encrypted |
-| 1.27.3 | 1 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | Cgroupv2 | Steps to disable cgroupv2 can be found [here](./howto-disable-cgroupsv2.md) |
-| 1.27.3 | 2 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | Beginning with this version bundle, cluster nodes are Azure Arc-enabled |
-| 1.27.3 | 3 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48<br>Csi-nfs v4.6.0 | Azure Linux 2.0 | No breaking changes | |
-| 1.27.3 | 4 | Calico v3.27.2<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.10.0-60<br>Csi-nfs v4.6.0 | Azure Linux 2.0 | No breaking changes | |
-| 1.27.3 | 5 | Calico v3.27.3<br>metrics-server v0.7.1<br>Multus v4.0.0<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.13<br>sriov-dp v3.11.0-68<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0 | Azure Linux 2.0 | No breaking changes | Beginning with this version bundle, volume orchestration connectivity is TLS encrypted |
-| 1.27.9 | 1 | Calico v3.27.3<br>metrics-server v0.7.1<br>Multus v4.0.0<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.13<br>sriov-dp v3.11.0-68<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0 | Azure Linux 2.0 | No breaking changes | Beginning with this version bundle, volume orchestration connectivity is TLS encrypted and cluster nodes are Azure Arc-enabled |
-| 1.28.0 | 1 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | |
-| 1.28.0 | 2 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | Beginning with this version bundle, cluster nodes are Azure Arc-enabled |
-| 1.28.0 | 3 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48<br>Csi-nfs v4.6.0 | Azure Linux 2.0 | No breaking changes | |
-| 1.28.0 | 4 | Calico v3.27.2<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.10.0-60<br>Csi-nfs v4.6.0 | Azure Linux 2.0 | No breaking changes | |
-| 1.28.0 | 5 | Calico v3.27.3<br>metrics-server v0.7.1<br>Multus v4.0.0<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.13<br>sriov-dp v3.11.0-68<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0 | Azure Linux 2.0 | No breaking changes | Beginning with this version bundle, volume orchestration connectivity is TLS encrypted |
| 1.28.9 | 1 | Calico v3.27.3<br>metrics-server v0.7.1<br>Multus v4.0.0<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.13<br>sriov-dp v3.11.0-68<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0 | Azure Linux 2.0 | No breaking changes | Beginning with this version bundle, volume orchestration connectivity is TLS encrypted and cluster nodes are Azure Arc-enabled |
+| 1.28.0 | 5 | Calico v3.27.3<br>metrics-server v0.7.1<br>Multus v4.0.0<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.13<br>sriov-dp v3.11.0-68<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0 | Azure Linux 2.0 | No breaking changes | Beginning with this version bundle, volume orchestration connectivity is TLS encrypted |
+| 1.28.0 | 4 | Calico v3.27.2<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.10.0-60<br>Csi-nfs v4.6.0 | Azure Linux 2.0 | No breaking changes | |
+| 1.27.9 | 1 | Calico v3.27.3<br>metrics-server v0.7.1<br>Multus v4.0.0<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.13<br>sriov-dp v3.11.0-68<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0 | Azure Linux 2.0 | No breaking changes | Beginning with this version bundle, volume orchestration connectivity is TLS encrypted and cluster nodes are Azure Arc-enabled |
+| 1.27.3 | 5 | Calico v3.27.3<br>metrics-server v0.7.1<br>Multus v4.0.0<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.13<br>sriov-dp v3.11.0-68<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0 | Azure Linux 2.0 | No breaking changes | Beginning with this version bundle, volume orchestration connectivity is TLS encrypted |
+| 1.27.3 | 4 | Calico v3.27.2<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.10.0-60<br>Csi-nfs v4.6.0 | Azure Linux 2.0 | No breaking changes | |
+| 1.26.12 | 1 | Calico v3.27.3<br>metrics-server v0.7.1<br>Multus v4.0.0<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.13<br>sriov-dp v3.11.0-68<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0 | Azure Linux 2.0 | No breaking changes | Beginning with this version bundle, volume orchestration connectivity is TLS encrypted and cluster nodes are Azure Arc-enabled |
+| 1.26.6 | 5 | Calico v3.27.3<br>metrics-server v0.7.1<br>Multus v4.0.0<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.13<br>sriov-dp v3.11.0-68<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0 | Azure Linux 2.0 | No breaking changes | Beginning with this version bundle, volume orchestration connectivity is TLS encrypted |
+| 1.26.6 | 4 | Calico v3.27.2<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.10.0-60<br>Csi-nfs v4.6.0 | Azure Linux 2.0 | No breaking changes | |
+| 1.25.11 | 5 | Calico v3.27.3<br>metrics-server v0.7.1<br>Multus v4.0.0<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.13<br>sriov-dp v3.11.0-68<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0 | Azure Linux 2.0 | No breaking changes | Beginning with this version bundle, volume orchestration connectivity is TLS encrypted |
+| 1.25.11 | 4 | Calico v3.27.2<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.10.0-60<br>Csi-nfs v4.6.0 | Azure Linux 2.0 | No breaking changes | |
+| 1.25.6 | 7 | Calico v3.27.3<br>metrics-server v0.7.1<br>Multus v4.0.0<br>azure-arc-servers v1.1.0<br>CoreDNS v1.9.4<br>etcd v3.5.13<br>sriov-dp v3.11.0-68<br>Csi-nfs v4.7.0<br>csi-volume v0.1.0 | Azure Linux 2.0 | No breaking changes | Beginning with this version bundle, volume orchestration connectivity is TLS encrypted |
+| 1.25.6 | 6 | Calico v3.27.2<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.10.0-60<br>Csi-nfs v4.6.0 | Azure Linux 2.0 | No breaking changes | |
## Upgrading Kubernetes versions
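Before an upgrade, confirming the versions currently running on the control plane and nodes can help with planning. The following is a generic kubectl sketch (not specific to Azure Operator Nexus), assuming a recent kubectl and kubeconfig access to the target cluster:

```bash
# API server and client versions
kubectl version --output=yaml

# Kubelet version reported per node
kubectl get nodes -o wide
```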
operator-nexus Troubleshoot Hardware Validation Failure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/troubleshoot-hardware-validation-failure.md
# Troubleshoot hardware validation failure in Nexus Cluster
-This article describes how to troubleshoot a failed server hardware validation. Hardware validation is run as part of cluster deploy action.
+This article describes how to troubleshoot a failed server hardware validation. Hardware validation (HWV) runs as part of the cluster deploy action and the bare metal machine (BMM) replace action. HWV validates a BMM by executing test cases against its baseboard management controller (BMC). The Azure Operator Nexus platform is deployed on Dell servers, which use the integrated Dell Remote Access Controller (iDRAC) as the equivalent of a BMC.
## Prerequisites
-- Gather the following information:
- - Subscription ID
- - Cluster name and resource group
-- The user needs access to the Cluster's Log Analytics Workspace (LAW)
+1. Collect the following information:
+ - Subscription ID
+ - Cluster name
+ - Resource group
+2. Request access to the Cluster's Log Analytics Workspace (LAW).
+3. Obtain access to the BMC webui and/or a jumpbox that allows running the racadm utility.
## Locating hardware validation results
1. Navigate to the cluster resource group in the subscription
2. Expand the cluster Log Analytics Workspace (LAW) resource for the cluster
3. Navigate to the Logs tab
-4. Hardware validation results can be fetched with a query against the HWVal_CL table as per the following example
+4. Hardware validation results can be fetched with a query against the `HWVal_CL` table, as shown in the following example:
:::image type="content" source="media\hardware-validation-cluster-law.png" alt-text="Screenshot of cluster LAW custom table query." lightbox="media\hardware-validation-cluster-law.png":::
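A minimal command-line sketch of the same lookup, assuming the Azure CLI with the `log-analytics` extension is installed and that `$WORKSPACE_ID` (a placeholder) holds the workspace GUID of the cluster LAW:

```bash
# Fetch the ten most recent hardware validation records from the cluster LAW
az monitor log-analytics query \
  --workspace $WORKSPACE_ID \
  --analytics-query "HWVal_CL | sort by TimeGenerated desc | take 10" \
  --output table
```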
Expanding `result_detail` for a given category shows detailed results.
### System info category
-* Memory/RAM related failure (memory_capacity_GB)
- * Memory specs are defined in the SKU.
- * Memory below threshold value indicates missing or failed DIMM(s). Failed DIMM(s) would also be reflected in the `health_info` category.
+* Memory/RAM Related Failure (memory_capacity_GB)
+ * Memory specs are defined in the SKU. Memory below the threshold value indicates a missing or failed Dual In-Line Memory Module (DIMM). A failed DIMM would also be reflected in the `health_info` category. The following example shows a failed memory check.
```json {
Expanding `result_detail` for a given category shows detailed results.
} ```
+ * To check memory information in BMC webui:
+
+ `BMC` -> `System` -> `Memory`
+
+ * To check memory information with racadm:
+
+ ```bash
+ racadm --nocertwarn -r $IP -u $BMC_USR -p $BMC_PWD hwinventory | grep SysMemTotalSize
+ ```
+
+ * To troubleshoot a memory problem, engage the vendor.
+ * CPU Related Failure (cpu_sockets)
- * CPU specs are defined in the SKU.
- * Failed `cpu_sockets` check indicates a failed CPU or CPU count mismatch.
+ * CPU specs are defined in the SKU. A failed `cpu_sockets` check indicates a failed CPU or a CPU count mismatch. The following example shows a failed CPU check.
```json {
Expanding `result_detail` for a given category shows detailed results.
} ```
+ * To check CPU information in BMC webui:
+
+ `BMC` -> `System` -> `CPU`
+
+ * To check CPU information with racadm:
+
+ ```bash
+ racadm --nocertwarn -r $IP -u $BMC_USR -p $BMC_PWD hwinventory | grep PopulatedCPUSockets
+ ```
+
+ * To troubleshoot a CPU problem, engage the vendor.
+ * Model Check Failure (Model)
- * Failed `Model` check indicates that wrong server is racked in the slot or there's a cabling mismatch.
+ * A failed `Model` check indicates that the wrong server is racked in the slot or that there's a cabling mismatch. The following example shows a failed model check.
```json {
Expanding `result_detail` for a given category shows detailed results.
} ```
+ * To check model information in BMC webui:
+
+ `BMC` -> `Dashboard` - Shows Model
+
+ * To check model information with racadm:
+
+ ```bash
+ racadm --nocertwarn -r $IP -u $BMC_USR -p $BMC_PWD getsysinfo | grep Model
+ ```
+
+ * To troubleshoot this problem, ensure that the server is racked in the correct location and cabled accordingly, and that the correct IP is assigned.
+
+* Serial Number Check Failure (Serial_Number)
+ * The server's serial number, also referred to as the service tag, is defined in the cluster. A failed `Serial_Number` check indicates a mismatch between the serial number in the cluster and the actual serial number of the machine. The following example shows a failed serial number check.
+
+ ```json
+ {
+ "field_name": "Serial_Number",
+ "comparison_result": "Fail",
+ "expected": "1234567",
+ "fetched": "7654321"
+ }
+ ```
+
+ * To check serial number information in BMC webui:
+
+ `BMC` -> `Dashboard` - Shows Service Tag
+
+ * To check serial number information with racadm:
+
+ ```bash
+ racadm --nocertwarn -r $IP -u $BMC_USR -p $BMC_PWD getsysinfo | grep "Service Tag"
+ ```
+
+ * To troubleshoot this problem, ensure that the server is racked in the correct location and cabled accordingly, and that the correct IP is assigned.
+
+* iDRAC License Check Failure
+ * All iDRACs require a perpetual/production iDRAC datacenter or enterprise license. Trial licenses are valid for only 30 days. A failed `iDRAC License Check` indicates that the required iDRAC license is missing. The following examples show a failed iDRAC license check for a trial license and a missing license, respectively.
+
+ ```json
+ {
+ "field_name": "iDRAC License Check",
+ "comparison_result": "Fail",
+ "expected": "idrac9 x5 datacenter license or idrac9 x5 enterprise license - perpetual or production",
+ "fetched": "iDRAC9 x5 Datacenter Trial License - Trial"
+ }
+ ```
+
+ ```json
+ {
+ "field_name": "iDRAC License Check",
+ "comparison_result": "Fail",
+ "expected": "idrac9 x5 datacenter license or idrac9 x5 enterprise license - perpetual or production",
+ "fetched": ""
+ }
+ ```
+
+ * To troubleshoot this problem, engage the vendor to obtain the correct license. Apply the license using the iDRAC webui in the following location:
+
+ `BMC` -> `Configuration` -> `Licenses`
+
+* Firmware Version Checks
+ * Firmware version checks were introduced in release 3.9. The following example shows the expected log for release versions before 3.9.
+
+ ```json
+ {
+ "system_info": {
+ "system_info_result": "Pass",
+ "result_log": [
+ "Firmware validation not supported in release 3.8"
+ ]
+ }
+ }
+ ```
+
+ * Firmware versions are determined based on the `cluster version` value in the cluster object. The following example shows a failed check due to an indeterminate cluster version. If this problem is encountered, verify the version in the cluster object.
+
+ ```json
+ {
+ "system_info": {
+ "system_info_result": "Fail",
+ "result_log": [
+ "Unable to determine firmware release"
+ ]
+ }
+ }
+ ```
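+
+ * To check the cluster version from the command line, the following is a minimal sketch, assuming the Azure CLI `networkcloud` extension is installed and that it exposes the version as `clusterVersion` (`$CLUSTER_NAME` and `$CLUSTER_RG` are placeholders):
+
+ ```bash
+ # Print the cluster version recorded in the cluster object
+ az networkcloud cluster show --name $CLUSTER_NAME --resource-group $CLUSTER_RG --query clusterVersion --output tsv
+ ```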
### Drive info category
* Disk Check Failure
- * Drive specs are defined in the SKU
- * Mismatched capacity values indicate incorrect drives or drives inserted in to incorrect slots.
- * Missing capacity and type fetched values indicate drives that are failed, missing or inserted in to incorrect slots.
+ * Drive specs are defined in the SKU. Mismatched capacity values indicate incorrect drives or drives inserted into incorrect slots. Missing capacity and type fetched values indicate drives that are failed, missing, or inserted into incorrect slots.
```json {
Expanding `result_detail` for a given category shows detailed results.
} ```
+ * To check disk information in BMC webui:
+
+ `BMC` -> `Storage` -> `Physical Disks`
+
+ * To check disk information with racadm:
+
+ ```bash
+ racadm --nocertwarn -r $IP -u $BMC_USR -p $BMC_PWD raid get pdisks -o -p State,Size
+ ```
+
+ * To troubleshoot, ensure that disks are inserted in the correct slots. If the problem persists, engage the vendor.
+ ### Network info category
-* NIC Check Failure
- * Dell server NIC specs are defined in the SKU.
- * Mismatched link status indicates loose or faulty cabling or crossed cables.
- * Mismatched model indicates incorrect NIC card is inserted in to slot.
- * Missing link/model fetched values indicate NICs that are failed, missing or inserted in to incorrect slots.
+* Network Interface Card (NIC) Check Failure
+ * Dell server NIC specs are defined in the SKU. A mismatched link status indicates loose or faulty cabling or crossed cables. A mismatched model indicates that an incorrect NIC is inserted into the slot. Missing link/model fetched values indicate NICs that are failed, missing, or inserted into incorrect slots.
```json {
Expanding `result_detail` for a given category shows detailed results.
} ```
+ * To check NIC information in BMC webui:
+
+ `BMC` -> `System` -> `Network Devices`
+
+ * To check all NIC information with racadm:
+
+ ```bash
+ racadm --nocertwarn -r $IP -u $BMC_USR -p $BMC_PWD hwinventory NIC
+ ```
+
+ * To check a specific NIC with racadm, provide the Fully Qualified Device Descriptor (FQDD):
+
+ ```bash
+ racadm --nocertwarn -r $IP -u $BMC_USR -p $BMC_PWD hwinventory NIC.Embedded.1-1-1
+ ```
+
+ * To troubleshoot, ensure that servers are cabled correctly and that ports are linked up. Bounce the port on the fabric. Perform a flea drain. If the problem persists, engage the vendor.
+ * NIC Check L2 Switch Information
- * HW Validation reports L2 switch information for each of the server interfaces.
- * The switch connection ID (switch interface MAC) and switch port connection ID (switch interface label) are informational.
+ * HWV reports L2 switch information for each of the server interfaces. The switch connection ID (switch interface MAC) and switch port connection ID (switch interface label) are informational.
```json { "field_name": "NIC.Slot.3-1-1_SwitchConnectionID",
+ "comparison_result": "Info",
"expected": "unknown",
- "fetched": "c0:d6:82:23:0c:7d",
- "comparison_result": "Info"
+ "fetched": "c0:d6:82:23:0c:7d"
} ``` ```json { "field_name": "NIC.Slot.3-1-1_SwitchPortConnectionID",
+ "comparison_result": "Info",
"expected": "unknown",
- "fetched": "Ethernet10/1",
- "comparison_result": "Info"
+ "fetched": "Ethernet10/1"
} ```
-* Release 3.6 introduced cable checks for bonded interfaces.
- * Mismatched cabling is reported in the result_log.
- * Cable check validates that that bonded NICs connect to switch ports with same Port ID. In the following example PCI 3/1 and 3/2 connect to "Ethernet1/1" and "Ethernet1/3" respectively on TOR, triggering a failure for HWV.
+* Cabling Checks for Bonded Interfaces
+ * Mismatched cabling is reported in the result_log. The cable check validates that bonded NICs connect to switch ports with the same port ID. In the following example, Peripheral Component Interconnect (PCI) slots 3/1 and 3/2 connect to "Ethernet1/1" and "Ethernet1/3" respectively on the TOR, triggering a failure for HWV.
```json {
Expanding `result_detail` for a given category shows detailed results.
} ], "result_log": [
- "Cabling problem detected on PCI Slot 3"
+ "Cabling problem detected on PCI Slot 3 - server NIC.Slot.3-1-1 connected to switch Ethernet1/1 - server NIC.Slot.3-2-1 connected to switch Ethernet1/3"
] }, } ```
+ * To fix the issue, insert the cables into the correct interfaces.
+
+* iDRAC (BMC) MAC Address Check Failure
+ * The iDRAC MAC address is defined in the cluster for each BMM. A failed `iDRAC_MAC` check indicates a mismatch between the iDRAC/BMC MAC in the cluster and the actual MAC address retrieved from the machine.
+
+ ```json
+ {
+ "field_name": "iDRAC_MAC",
+ "comparison_result": "Fail",
+ "expected": "aa:bb:cc:dd:ee:ff",
+ "fetched": "aa:bb:cc:dd:ee:gg"
+ }
+ ```
+
+ * To troubleshoot this problem, ensure that the correct MAC address is defined in the cluster. If the MAC is correct in the cluster object, attempt a flea drain. If the problem persists, ensure that the server is racked in the correct location and cabled accordingly, and that the correct IP is assigned.
+
+* Preboot Execution Environment (PXE) MAC Address Check Failure
+ * The PXE MAC address is defined in the cluster for each BMM. A failed `PXE_MAC` check indicates a mismatch between the PXE MAC in the cluster and the actual MAC address retrieved from the machine.
+
+ ```json
+ {
+ "field_name": "NIC.Embedded.1-1_PXE_MAC",
+ "comparison_result": "Fail",
+ "expected": "aa:bb:cc:dd:ee:ff",
+ "fetched": "aa:bb:cc:dd:ee:gg"
+ }
+ ```
+
+ * To troubleshoot this problem, ensure that the correct MAC address is defined in the cluster. If the MAC is correct in the cluster object, attempt a flea drain. If the problem persists, ensure that the server is racked in the correct location and cabled accordingly, and that the correct IP is assigned.
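+
+ * To cross-check the MAC addresses defined in the cluster for both the `iDRAC_MAC` and `PXE_MAC` checks, the following is a minimal sketch, assuming the Azure CLI `networkcloud` extension is installed and that it exposes the values as `bmcMacAddress` and `bootMacAddress` (`$BMM_NAME` and `$BMM_RG` are placeholders):
+
+ ```bash
+ # Print the BMC and PXE MAC addresses recorded for the bare metal machine
+ az networkcloud baremetalmachine show --name $BMM_NAME --resource-group $BMM_RG --query "{bmcMac:bmcMacAddress, pxeMac:bootMacAddress}"
+ ```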
+
### Health info category
* Health Check Sensor Failure
- * Server health checks cover various hardware component sensors.
- * A failed health sensor indicates a problem with the corresponding hardware component.
- * The following examples indicate fan, drive and CPU failures respectively.
+ * Server health checks cover various hardware component sensors. A failed health sensor indicates a problem with the corresponding hardware component. The following examples indicate fan, drive, and CPU failures, respectively.
```json {
Expanding `result_detail` for a given category shows detailed results.
} ```
-* Health Check Lifecycle Log (LC Log) Failures
- * Dell server health checks fail for recent Critical LC Log Alarms.
- * The hardware validation plugin logs the alarm ID, name, and timestamp.
- * Recent LC Log critical alarms indicate need for further investigation.
- * The following example shows a failure for a critical Backplane voltage alarm.
+ * To check health information in BMC webui:
+
+ `BMC` -> `Dashboard` - Shows Health Information
+
+ * To check health information with racadm:
+
+ ```bash
+ racadm --nocertwarn -r $IP -u $BMC_USR -p $BMC_PWD getsensorinfo
+ ```
+
+ * To troubleshoot a server health failure engage vendor.
+
+* Health Check LifeCycle (LC) Log Failures
+ * Dell server health checks fail for recent Critical LC Log Alarms. The hardware validation plugin logs the alarm ID, name, and timestamp. Recent LC Log critical alarms indicate the need for further investigation. The following example shows a failure for a critical backplane voltage alarm.
+
+ ```json
+ {
+ "field_name": "LCLog_Critical_Alarms",
+ "comparison_result": "Fail",
+ "expected": "No Critical Errors",
+ "fetched": "53539 2023-07-22T23:44:06-05:00 The system board BP1 PG voltage is outside of range."
+ }
+ ```
+
+ * Virtual disk errors typically indicate a RAID cleanup false-positive condition and are logged due to the timing of RAID cleanup and system power-off before HWV. The following example shows an LC log critical error on virtual disk 238. If multiple errors are encountered blocking deployment, delete the cluster, wait two hours, then reattempt cluster deployment. If the failures aren't deployment blocking, wait two hours, then run BMM replace.
```json { "field_name": "LCLog_Critical_Alarms",
+ "comparison_result": "Fail",
"expected": "No Critical Errors",
- "fetched": "53539 2023-07-22T23:44:06-05:00 The system board BP1 PG voltage is outside of range.",
- "comparison_result": "Fail"
+ "fetched": "104473 2024-07-26T16:05:19-05:00 Virtual Disk 238 on RAID Controller in SL 3 has failed."
} ```
+ * To check LC logs in BMC webui:
+
+ `BMC` -> `Maintenance` -> `Lifecycle Log`
+
+ * To check LC log critical alarms with racadm:
+
+ ```bash
+ racadm --nocertwarn -r $IP -u $BMC_USR -p $BMC_PWD lclog view -s critical
+ ```
+
+ * If `Backplane Comm` critical errors are logged, perform a flea drain. Engage the vendor to troubleshoot any other LC log critical failures.
+ * Health Check Server Power Action Failures
- * Dell server health check fail for failed server power-up or failed iDRAC reset.
- * A failed server control action indicates an underlying hardware issue.
- * The following example shows failed power on attempt.
+ * Dell server health checks fail for a failed server power-up or a failed iDRAC reset. A failed server control action indicates an underlying hardware issue. The following example shows a failed power-on attempt.
```json { "field_name": "Server Control Actions",
+ "comparison_result": "Fail",
"expected": "Success",
- "fetched": "Failed",
- "comparison_result": "Fail"
+ "fetched": "Failed"
} ```
Expanding `result_detail` for a given category shows detailed results.
] ```
+ * To power server on in BMC webui:
+
+ `BMC` -> `Dashboard` -> `Power On System`
+
+ * To power server on with racadm:
+
+ ```bash
+ racadm --nocertwarn -r $IP -u $BMC_USR -p $BMC_PWD serveraction powerup
+ ```
+
+ * To troubleshoot a server power-on failure, attempt a flea drain. If the problem persists, engage the vendor.
+ * Health Check Power Supply Failure and Redundancy Considerations
- * Dell server health checks warn when one power supply is missing or failed.
- * Power supply "field_name" might be displayed as 0/PS0/Power Supply 0 and 1/PS1/Power Supply 1 for the first and second power supplies respectively.
- * A failure of one power supply doesn't trigger an HW validation device failure.
+ * Dell server health checks warn when one power supply is missing or failed. The power supply `field_name` might be displayed as 0/PS0/Power Supply 0 and 1/PS1/Power Supply 1 for the first and second power supplies, respectively. A failure of one power supply doesn't trigger an HWV device failure.
```json { "field_name": "Power Supply 1",
+ "comparison_result": "Warning",
"expected": "Enabled-OK",
- "fetched": "UnavailableOffline-Critical",
- "comparison_result": "Warning"
+ "fetched": "UnavailableOffline-Critical"
} ``` ```json { "field_name": "System Board PS Redundancy",
+ "comparison_result": "Warning",
"expected": "Enabled-OK",
- "fetched": "Enabled-Critical",
- "comparison_result": "Warning"
+ "fetched": "Enabled-Critical"
} ```
+ * To check power supplies in BMC webui:
+
+ `BMC` -> `System` -> `Power`
+
+ * To check power supplies with racadm:
+
+ ```bash
+ racadm --nocertwarn -r $IP -u $BMC_USR -p $BMC_PWD getsensorinfo | grep PS
+ ```
+
+ * Reseating the power supply might fix the problem. If alarms persist, engage the vendor.
+ ### Boot info category
-* Boot Device Check Considerations
+* Boot Device Name Check Considerations
* The `boot_device_name` check is currently informational.
* Mismatched boot device name shouldn't trigger a device failure.
```json
{
+ "field_name": "boot_device_name",
"comparison_result": "Info", "expected": "NIC.PxeDevice.1-1",
- "fetched": "NIC.PxeDevice.1-1",
- "field_name": "boot_device_name"
+ "fetched": "NIC.PxeDevice.1-1"
} ```
Expanding `result_detail` for a given category shows detailed results.
```json { "field_name": "pxe_device_1_name",
+ "comparison_result": "Fail",
"expected": "NIC.Embedded.1-1-1",
- "fetched": "NIC.Embedded.1-2-1",
- "comparison_result": "Fail"
+ "fetched": "NIC.Embedded.1-2-1"
} ``` ```json { "field_name": "pxe_device_1_state",
+ "comparison_result": "Fail",
"expected": "Enabled",
- "fetched": "Disabled",
- "comparison_result": "Fail"
+ "fetched": "Disabled"
+ }
+ ```
+
+ * To update the PXE device state and name in BMC webui, set the value, then select `Apply` followed by `Apply And Reboot`:
+
+ `BMC` -> `Configuration` -> `BIOS Settings` -> `Network Settings` -> `PXE Device1` -> `Enabled`
+ `BMC` -> `Configuration` -> `BIOS Settings` -> `Network Settings` -> `PXE Device1 Settings` -> `Interface` -> `Embedded NIC 1 Port 1 Partition 1`
+
+ * To update the PXE device state and name with racadm, run the following commands:
+
+ ```bash
+ racadm --nocertwarn -r $IP -u $BMC_USR -p $BMC_PWD set bios.NetworkSettings.PxeDev1EnDis Enabled
+ racadm --nocertwarn -r $IP -u $BMC_USR -p $BMC_PWD set bios.PxeDev1Settings.PxeDev1Interface NIC.Embedded.1-1-1
+ racadm --nocertwarn -r $IP -u $BMC_USR -p $BMC_PWD jobqueue create BIOS.Setup.1-1
+ racadm --nocertwarn -r $IP -u $BMC_USR -p $BMC_PWD serveraction powercycle
+ ```
+
+### Device login check
+
+* Device Login Check Considerations
+ * The `device_login` check fails if the iDRAC isn't accessible or if the hardware validation plugin isn't able to sign in.
+
+ ```json
+ {
+ "device_login": "Fail"
} ```
+ * To set the password in BMC webui:
+
+ `BMC` -> `iDRAC Settings` -> `Users` -> `Local Users` -> `Edit`
+
+ * To set the password with racadm:
+
+ ```bash
+ racadm -r $BMC_IP -u $BMC_USER -p $CURRENT_PASSWORD set iDRAC.Users.2.Password $BMC_PWD
+ ```
+
+ * To troubleshoot, ping the iDRAC from a jumpbox with access to the BMC network. If the iDRAC pings, check that the passwords match.
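+
+ * A quick connectivity and sign-in sanity check from the jumpbox, as a sketch (`$BMC_IP`, `$BMC_USR`, and `$BMC_PWD` are placeholders; any racadm read command that succeeds confirms the credentials):
+
+ ```bash
+ # Confirm the iDRAC is reachable on the BMC network
+ ping -c 3 $BMC_IP
+ # Confirm the credentials work by running a harmless read command
+ racadm --nocertwarn -r $BMC_IP -u $BMC_USR -p $BMC_PWD getsysinfo
+ ```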
+
+### Special considerations
+
+* Servers Failing Multiple Health and Network Checks
+ * RAID deletion is performed during cluster deploy and cluster delete actions for all releases inclusive of 3.12.
+ * If servers are observed powering off during hardware validation with multiple failed health and network checks, reattempt cluster deployment.
+ * If issues persist, RAID deletion needs to be performed manually on the `control` nodes in the cluster.
+
+ * To clear RAID in BMC webui:
+
+ `BMC` -> `Storage` -> `Virtual Disks` -> `Action` -> `Delete` -> `Apply Now`
+
+ * To clear RAID with racadm:
+
+ ```bash
+ racadm --nocertwarn -r $IP -u $BMC_USR -p $BMC_PWD raid deletevd:Disk.Virtual.239:RAID.SL.3-1
+ racadm --nocertwarn -r $IP -u $BMC_USR -p $BMC_PWD jobqueue create RAID.SL.3-1 --realtime
+ ```
## Adding servers back into the Cluster after a repair

After the hardware is fixed, run BMM replace by following the instructions in [BMM actions](howto-baremetal-functions.md).
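As a hedged sketch, the replace action can be triggered with the Azure CLI, assuming the `networkcloud` extension is installed (`$BMM_NAME` and `$BMM_RG` are placeholders; additional parameters, such as a new serial number or MAC addresses, may be required when hardware identifiers change):

```bash
# Trigger the bare metal machine replace action
az networkcloud baremetalmachine replace --name $BMM_NAME --resource-group $BMM_RG
```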
payment-hsm Certification Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/certification-compliance.md
# Certification and compliance
-Azure maintains the largest compliance portfolio in the industry. For details, see [Microsoft Azure Compliance Offerings](https://azure.microsoft.com/resources/microsoft-azure-compliance-offerings/). Each offering description provides an up to-date-scope statement and links to useful downloadable resources.
+Azure maintains the largest compliance portfolio in the industry. For details, see [Microsoft Azure Compliance Offerings](/compliance/regulatory/offering-home). Each offering description provides an up-to-date scope statement and links to useful downloadable resources.
Azure payment HSM meets the following compliance standards:
postgresql Concepts Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-extensions.md
Last updated 06/27/2024-+
postgresql Concepts Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-logging.md
description: Describes logging configuration, storage and analysis in Azure Data
Last updated 7/11/2024-+
postgresql Concepts Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-maintenance.md
Last updated 06/27/2024-+
postgresql Concepts Major Version Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-major-version-upgrade.md
description: Learn how to use Azure Database for PostgreSQL - Flexible Server to
Last updated 7/15/2024-+
postgresql Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-monitoring.md
Last updated 07/31/2024-+
postgresql Concepts Pgbouncer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-pgbouncer.md
description: This article provides an overview of the built-in PgBouncer feature
Last updated 06/27/2024-+
postgresql Concepts Query Performance Insight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-query-performance-insight.md
Last updated 04/27/2024-+
postgresql Concepts Query Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-query-store.md
Last updated 07/25/2024-+
postgresql Concepts Scaling Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-scaling-resources.md
description: This article describes the resource scaling in Azure Database for P
Last updated 07/23/2024-+
postgresql Concepts Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-supported-versions.md
Last updated 06/18/2024-+
postgresql Create Automation Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/create-automation-tasks.md
Last updated 04/27/2024-+
postgresql How To Alert On Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-alert-on-metrics.md
Last updated 04/27/2024-+
postgresql How To Configure And Access Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-configure-and-access-logs.md
Last updated 04/27/2024-+
postgresql How To Configure Server Parameters Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-configure-server-parameters-using-cli.md
Last updated 04/27/2024-+
postgresql How To Configure Server Parameters Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-configure-server-parameters-using-portal.md
Last updated 04/27/2024-+
postgresql How To Cost Optimization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-cost-optimization.md
Last updated 04/27/2024-+
postgresql How To Perform Major Version Upgrade Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-perform-major-version-upgrade-cli.md
Last updated 04/27/2024-+
postgresql How To Server Logs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-server-logs-cli.md
Last updated 04/27/2024-+
postgresql How To Server Logs Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-server-logs-portal.md
Last updated 04/27/2024-+
postgresql How To Stop Start Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-stop-start-server-portal.md
Last updated 04/30/2024-+ #customer intent: As a user, I want to learn how to stop/start an Azure Database for PostgreSQL flexible server instance using the Azure portal so that I can manage my server efficiently.
postgresql Overview Postgres Choose Server Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview-postgres-choose-server-options.md
Last updated 04/27/2024-+
postgresql Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes.md
Title: Release notes for Azure DB for PostgreSQL - Flexible Server
description: Release notes for Azure DB for PostgreSQL - Flexible Server, including feature additions, engine versions support, extensions, and other announcements. -+ Previously updated : 7/12/2024 Last updated : 8/5/2024 #customer intent: As a reader, I want the title and description to meet the required length and include the relevant information about the release notes for Azure DB for PostgreSQL - Flexible Server.
Last updated 7/12/2024
This page provides latest news and updates regarding feature additions, engine versions support, extensions, and any other announcements relevant to Azure Database for PostgreSQL flexible server.
+## Release: Aug 2024
+* General availability of [Database Size Metrics](./concepts-monitoring.md) for Azure Database for PostgreSQL flexible server.
## Release: July 2024
* General availability of [Major Version Upgrade Support for PostgreSQL 16](concepts-major-version-upgrade.md) for Azure Database for PostgreSQL flexible server.
* General availability of [Pgvector 0.7.0](concepts-extensions.md) extension.
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-dns.md
For Azure services, use the recommended zone names as described in the following
>| Azure SQL Managed Instance (Microsoft.Sql/managedInstances) | managedInstance | privatelink.{dnsPrefix}.database.usgovcloudapi.net | {instanceName}.{dnsPrefix}.database.usgovcloudapi.net |
>| Azure Cosmos DB (Microsoft.DocumentDB/databaseAccounts) | Sql | privatelink.documents.azure.us | documents.azure.us |
>| Azure Cosmos DB (Microsoft.DocumentDB/databaseAccounts) | MongoDB | privatelink.mongo.cosmos.azure.us | mongo.cosmos.azure.us |
+>| Azure Cosmos DB (Microsoft.DocumentDB/databaseAccounts) | Cassandra | privatelink.cassandra.cosmos.azure.us | cassandra.cosmos.azure.us |
+>| Azure Cosmos DB (Microsoft.DocumentDB/databaseAccounts) | Gremlin | privatelink.gremlin.cosmos.azure.us | gremlin.cosmos.azure.us |
+>| Azure Cosmos DB (Microsoft.DocumentDB/databaseAccounts) | Table | privatelink.table.cosmos.azure.us | table.cosmos.azure.us |
>| Azure Database for PostgreSQL - Single server (Microsoft.DBforPostgreSQL/servers) | postgresqlServer | privatelink.postgres.database.usgovcloudapi.net | postgres.database.usgovcloudapi.net |
>| Azure Database for PostgreSQL - Flexible server (Microsoft.DBforPostgreSQL/flexibleServers) | postgresqlServer | privatelink.postgres.database.usgovcloudapi.net | postgres.database.usgovcloudapi.net |
>| Azure Database for MySQL - Single Server (Microsoft.DBforMySQL/servers) | mysqlServer | privatelink.mysql.database.usgovcloudapi.net | mysql.database.usgovcloudapi.net |
sentinel Cef Name Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/cef-name-mapping.md
- Title: Common Event Format (CEF) key and CommonSecurityLog field mapping
-description: This article maps CEF keys to the corresponding field names in the CommonSecurityLog in Microsoft Sentinel.
--- Previously updated : 11/09/2021--
-# CEF and CommonSecurityLog field mapping
-
-The following tables map Common Event Format (CEF) field names to the names they use in Microsoft Sentinel's CommonSecurityLog, and may be helpful when you are working with a CEF data source in Microsoft Sentinel.
-
-For more information, see [Connect your external solution using Common Event Format](connect-common-event-format.md).
-
-> [!IMPORTANT]
->
-> On **February 28th 2023**, we introduced changes to the CommonSecurityLog table schema. Following this change, you might need to review and update custom queries. For more details, see the [recommended actions section](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/upcoming-changes-to-the-commonsecuritylog-table/ba-p/3643232) in this blog post. Out-of-the-box content (detections, hunting queries, workbooks, parsers, etc.) has been updated by Microsoft Sentinel.
-
-> [!NOTE]
-> A Microsoft Sentinel workspace is required in order to [ingest CEF data](connect-common-event-format.md#prerequisites) into Log Analytics.
->
-
-## A - C
-
-|CEF key name |CommonSecurityLog field name |Description |
-||||
-| act | <a name="deviceaction"></a> DeviceAction | The action mentioned in the event. |
-| app | ApplicationProtocol | The protocol used in the application, such as HTTP, HTTPS, SSHv2, Telnet, POP, IMPA, IMAPS, and so on. |
-| cat | DeviceEventCategory | Represents the category assigned by the originating device. Devices often use their own categorization schema to classify event. For example: `/Monitor/Disk/Read`. |
-| cnt | EventCount | A count associated with the event, showing how many times the same event was observed. |
--
-## D
-
-|CEF key name |CommonSecurityLog name |Description |
-||||
-|Device Vendor | DeviceVendor | String that, together with device product and version definitions, uniquely identifies the type of sending device. |
-|Device Product | DeviceProduct | String that, together with device vendor and version definitions, uniquely identifies the type of sending device. |
-|Device Version | DeviceVersion | String that, together with device product and vendor definitions, uniquely identifies the type of sending device. |
-| destinationDnsDomain | DestinationDnsDomain | The DNS part of the fully qualified domain name (FQDN). |
-| destinationServiceName | DestinationServiceName | The service that is targeted by the event. For example, `sshd`.|
-| destinationTranslatedAddress | DestinationTranslatedAddress | Identifies the translated destination referred to by the event in an IP network, as an IPv4 IP address. |
-| destinationTranslatedPort | DestinationTranslatedPort | Port, after translation, such as a firewall. <br>Valid port numbers: `0` - `65535` |
-| deviceDirection | <a name="communicationdirection"></a> CommunicationDirection | Any information about the direction the observed communication has taken. Valid values: <br>- `0` = Inbound <br>- `1` = Outbound |
-| deviceDnsDomain | DeviceDnsDomain | The DNS domain part of the full qualified domain name (FQDN) |
-|DeviceEventClassID | DeviceEventClassID | String or integer that serves as a unique identifier per event type. |
-| deviceExternalId | deviceExternalId | A name that uniquely identifies the device generating the event. |
-| deviceFacility | DeviceFacility | The facility generating the event.|
-| deviceInboundInterface | DeviceInboundInterface |The interface on which the packet or data entered the device. |
-| deviceNtDomain | DeviceNtDomain | The Windows domain of the device address |
-| deviceOutboundInterface | DeviceOutboundInterface |Interface on which the packet or data left the device. |
-| devicePayloadId |DevicePayloadId |Unique identifier for the payload associated with the event. |
-| deviceProcessName | ProcessName | Process name associated with the event. <br><br>For example, in UNIX, the process generating the syslog entry. |
-| deviceTranslatedAddress | DeviceTranslatedAddress | Identifies the translated device address that the event refers to, in an IP network. <br><br>The format is an Ipv4 address. |
-| dhost |DestinationHostName | The destination that the event refers to in an IP network. <br>The format should be an FQDN associated with the destination node, when a node is available. For example, `host.domain.com` or `host`. |
-| dmac | DestinationMacAddress | The destination MAC address (FQDN) |
-| dntdom | DestinationNTDomain | The Windows domain name of the destination address.|
-| dpid | DestinationProcessId |The ID of the destination process associated with the event.|
-| dpriv | DestinationUserPrivileges | Defines the destination use's privileges. <br>Valid values: `Admninistrator`, `User`, `Guest` |
-| dproc | DestinationProcessName | The name of the event's destination process, such as `telnetd` or `sshd`. |
-| dpt | DestinationPort | Destination port. <br>Valid values: `*0` - `65535` |
-| dst | DestinationIP | The destination IpV4 address that the event refers to in an IP network. |
-| dtz | DeviceTimeZone | Timezone of the device generating the event |
-| duid |DestinationUserId | Identifies the destination user by ID. |
-| duser | DestinationUserName |Identifies the destination user by name.|
-| dvc | DeviceAddress | The IPv4 address of the device generating the event. |
-| dvchost | DeviceName | The FQDN associated with the device node, when a node is available. For example, `host.domain.com` or `host`.|
-| dvcmac | DeviceMacAddress | The MAC address of the device generating the event. |
-| dvcpid | Process ID | Defines the ID of the process on the device generating the event. |
-
-## E - I
-
-|CEF key name |CommonSecurityLog name |Description |
-||||
-|externalId | ExternalID | An ID used by the originating device. Typically, these values have increasing values that are each associated with an event. |
-|fileCreateTime | FileCreateTime | Time when the file was created. |
-|fileHash | FileHash | Hash of a file. |
-|fileId | FileID |An ID associated with a file, such as the inode. |
-| fileModificationTime | FileModificationTime |Time when the file was last modified. |
-| filePath | FilePath | Full path to the file, including the filename. For example: `C:\ProgramFiles\WindowsNT\Accessories\wordpad.exe` or `/usr/bin/zip`.|
-| filePermission |FilePermission |The file's permissions. |
-| fileType | FileType | File type, such as pipe, socket, and so on.|
-|fname | FileName| The file's name, without the path. |
-| fsize | FileSize | The size of the file. |
-|Host | Computer | Host, from Syslog |
-|in | ReceivedBytes |Number of bytes transferred inbound. |
--
-## M - P
-
-|CEF key name |CommonSecurityLog name |Description |
-||||
-|msg | Message | A message that gives more details about the event. |
-|Name | Activity | A string that represents a human-readable and understandable description of the event. |
-|oldFileCreateTime | OldFileCreateTime | Time when the old file was created. |
-|oldFileHash | OldFileHash | Hash of the old file. |
-|oldFileId | OldFileId | And ID associated with the old file, such as the inode. |
-| oldFileModificationTime | OldFileModificationTime |Time when the old file was last modified. |
-| oldFileName | OldFileName |Name of the old file. |
-| oldFilePath | OldFilePath | Full path to the old file, including the filename. <br>For example, `C:\ProgramFiles\WindowsNT\Accessories\wordpad.exe` or `/usr/bin/zip`.|
-| oldFilePermission | OldFilePermission |Permissions of the old file. |
-|oldFileSize | OldFileSize | Size of the old file.|
-| oldFileType | OldFileType | File type of the old file, such as a pipe, socket, and so on.|
-| out | SentBytes | Number of bytes transferred outbound. |
-| outcome | EventOutcome | Outcome of the event, such as `success` or `failure`.|
-|proto | Protocol | Transport protocol that identifies the Layer-4 protocol used. <br><br>Possible values include protocol names, such as `TCP` or `UDP`. |
--
-## R - T
-
-|CEF key name |CommonSecurityLog name |Description |
-||||
-| reason | Reason | The reason an audit event was generated. For example `badd password` or `unknown user`. This could also be an error or return code. For example: `0x1234`. |
-|Request | RequestURL | The URL accessed for an HTTP request, including the protocol. For example, `http://www/secure.com` |
-|requestClientApplication | RequestClientApplication | The user agent associated with the request. |
-| requestContext | RequestContext | Describes the content from which the request originated, such as the HTTP Referrer. |
-| requestCookies | RequestCookies |Cookies associated with the request. |
-| requestMethod | RequestMethod | The method used to access a URL. <br><br>Valid values include methods such as `POST`, `GET`, and so on. |
-| rt | ReceiptTime | The time at which the event related to the activity was received. |
-|Severity | <a name="logseverity"></a> LogSeverity | A string or integer that describes the importance of the event.<br><br> Valid string values: `Unknown` , `Low`, `Medium`, `High`, `Very-High` <br><br>Valid integer values are:<br> - `0`-`3` = Low <br>- `4`-`6` = Medium<br>- `7`-`8` = High<br>- `9`-`10` = Very-High |
-| shost | SourceHostName |Identifies the source that event refers to in an IP network. Format should be a fully qualified domain name (DQDN) associated with the source node, when a node is available. For example, `host` or `host.domain.com`. |
-| smac | SourceMacAddress | Source MAC address. |
-| sntdom | SourceNTDomain | The Windows domain name for the source address. |
-| sourceDnsDomain | SourceDnsDomain | The DNS domain part of the complete FQDN. |
-| sourceServiceName | SourceServiceName | The service responsible for generating the event. |
-| sourceTranslatedAddress | SourceTranslatedAddress | Identifies the translated source that the event refers to in an IP network. |
-| sourceTranslatedPort | SourceTranslatedPort | Source port after translation, such as by a firewall. <br>Valid port numbers are `0` - `65535`. |
-| spid | SourceProcessId | The ID of the source process associated with the event.|
-| spriv | SourceUserPrivileges | The source user's privileges. <br><br>Valid values include: `Administrator`, `User`, `Guest` |
-| sproc | SourceProcessName | The name of the event's source process.|
-| spt | SourcePort | The source port number. <br>Valid port numbers are `0` - `65535`. |
-| src | SourceIP |The source that an event refers to in an IP network, as an IPv4 address. |
-| suid | SourceUserID | Identifies the source user by ID. |
-| suser | SourceUserName | Identifies the source user by name. |
-| type | EventType | Event type. Valid values include: <br>- `0`: base event <br>- `1`: aggregated <br>- `2`: correlation event <br>- `3`: action event <br><br>**Note**: This event can be omitted for base events. |
--
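Once CEF events are flowing into the **CommonSecurityLog** table, a query like the following minimal sketch (not taken from the mapping tables above) shows how a mapped column name is used in practice:

```Kusto
// Summarize the last day of CEF events by mapped severity.
// LogSeverity is the CommonSecurityLog column mapped from the CEF "Severity" key.
CommonSecurityLog
| where TimeGenerated > ago(1d)
| summarize EventCount = count() by LogSeverity
| sort by EventCount desc
```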
-## Custom fields
-
-The following tables map the names of CEF keys and CommonSecurityLog fields that are available for customers to use for data that does not apply to any of the built-in fields.
-
-### Custom IPv6 address and floating-point fields
-
-The following table maps CEF key and CommonSecurityLog names for the *IPv6 address* and *floating-point* fields available for custom data.
-
-|CEF key name |CommonSecurityLog name |
-|||
-| c6a1 | DeviceCustomIPv6Address1 |
-| c6a1Label | DeviceCustomIPv6Address1Label |
-| c6a2 | DeviceCustomIPv6Address2 |
-| c6a2Label | DeviceCustomIPv6Address2Label |
-| c6a3 | DeviceCustomIPv6Address3 |
-| c6a3Label | DeviceCustomIPv6Address3Label |
-| c6a4 | DeviceCustomIPv6Address4 |
-| c6a4Label | DeviceCustomIPv6Address4Label |
-| cfp1 | DeviceCustomFloatingPoint1 |
-| cfp1Label | DeviceCustomFloatingPoint1Label |
-| cfp2 | DeviceCustomFloatingPoint2 |
-| cfp2Label | DeviceCustomFloatingPoint2Label |
-| cfp3 | DeviceCustomFloatingPoint3 |
-| cfp3Label | DeviceCustomFloatingPoint3Label |
-| cfp4 | DeviceCustomFloatingPoint4 |
-| cfp4Label | DeviceCustomFloatingPoint4Label |
--
-### Custom number fields
-
-The following table maps CEF key and CommonSecurityLog names for the *number* fields available for custom data.
-
-|CEF key name |CommonSecurityLog name |
-|||
-| cn1 | DeviceCustomNumber1 |
-| cn1Label | DeviceCustomNumber1Label |
-| cn2 | DeviceCustomNumber2 |
-| cn2Label | DeviceCustomNumber2Label |
-| cn3 | DeviceCustomNumber3 |
-| cn3Label | DeviceCustomNumber3Label |
--
-### Custom string fields
-
-The following table maps CEF key and CommonSecurityLog names for the *string* fields available for custom data.
-
-|CEF key name |CommonSecurityLog name |
-|||
-| cs1 | DeviceCustomString1 <sup>[1](#use-sparingly)</sup> |
-| cs1Label | DeviceCustomString1Label <sup>[1](#use-sparingly)</sup> |
-| cs2 | DeviceCustomString2 <sup>[1](#use-sparingly)</sup> |
-| cs2Label | DeviceCustomString2Label <sup>[1](#use-sparingly)</sup> |
-| cs3 | DeviceCustomString3 <sup>[1](#use-sparingly)</sup> |
-| cs3Label | DeviceCustomString3Label <sup>[1](#use-sparingly)</sup> |
-| cs4 | DeviceCustomString4 <sup>[1](#use-sparingly)</sup> |
-| cs4Label | DeviceCustomString4Label <sup>[1](#use-sparingly)</sup> |
-| cs5 | DeviceCustomString5 <sup>[1](#use-sparingly)</sup> |
-| cs5Label | DeviceCustomString5Label <sup>[1](#use-sparingly)</sup> |
-| cs6 | DeviceCustomString6 <sup>[1](#use-sparingly)</sup> |
-| cs6Label | DeviceCustomString6Label <sup>[1](#use-sparingly)</sup> |
-| flexString1 | FlexString1 |
-| flexString1Label | FlexString1Label |
-| flexString2 | FlexString2 |
-| flexString2Label | FlexString2Label |
--
-> [!TIP]
-> <a name="use-sparingly"></a><sup>1</sup> We recommend that you use the **DeviceCustomString** fields sparingly and use more specific, built-in fields when possible.
->
-
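To make the label pattern concrete, here's a minimal sketch of querying a custom string pair. `PolicyName` is a hypothetical label value that the sending appliance would need to populate in `cs1Label`; it isn't a built-in constant.

```Kusto
// Group events by a custom string value, using its label to select the right slot.
// "PolicyName" is a hypothetical label value set by the sending appliance.
CommonSecurityLog
| where DeviceCustomString1Label == "PolicyName"
| summarize EventCount = count() by DeviceCustomString1
```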
-### Custom timestamp fields
-
-The following table maps CEF key and CommonSecurityLog names for the *timestamp* fields available for custom data.
-
-|CEF key name |CommonSecurityLog name |
-|||
-| deviceCustomDate1 | DeviceCustomDate1 |
-| deviceCustomDate1Label | DeviceCustomDate1Label |
-| deviceCustomDate2 | DeviceCustomDate2 |
-| deviceCustomDate2Label | DeviceCustomDate2Label |
-| flexDate1 | FlexDate1 |
-| flexDate1Label | FlexDate1Label |
--
-### Custom integer data fields
-
-The following table maps CEF key and CommonSecurityLog names for the *integer* fields available for custom data.
-
-|CEF key name |CommonSecurityLog name |
-|||
-| flexNumber1 | FlexNumber1 |
-| flexNumber1Label | FlexNumber1Label |
-| flexNumber2 | FlexNumber2 |
-| flexNumber2Label | FlexNumber2Label |
--
-## Enrichment fields
-
-The following **CommonSecurityLog** fields are added by Microsoft Sentinel to enrich the original events received from the source devices, and don't have mappings in CEF keys:
-
-### Threat intelligence fields
-
-|CommonSecurityLog field name |Description |
-|||
-| **IndicatorThreatType** | The [MaliciousIP](#MaliciousIP) threat type, according to the threat intelligence feed. |
-| <a name="MaliciousIP"></a>**MaliciousIP** | Lists any IP addresses in the message that correlate with the current threat intelligence feed. |
-| **MaliciousIPCountry** | The [MaliciousIP](#MaliciousIP) country/region, according to the geographic information at the time of the record ingestion. |
-| **MaliciousIPLatitude** | The [MaliciousIP](#MaliciousIP) latitude, according to the geographic information at the time of the record ingestion. |
-| **MaliciousIPLongitude** | The [MaliciousIP](#MaliciousIP) longitude, according to the geographic information at the time of the record ingestion. |
-| **ReportReferenceLink** | Link to the threat intelligence report. |
-| **ThreatConfidence** | The [MaliciousIP](#MaliciousIP) threat confidence, according to the threat intelligence feed. |
-| **ThreatDescription** | The [MaliciousIP](#MaliciousIP) threat description, according to the threat intelligence feed. |
-| **ThreatSeverity** | The threat severity for the [MaliciousIP](#MaliciousIP), according to the threat intelligence feed at the time of the record ingestion. |
--
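For example, a minimal sketch that surfaces these enrichment fields for events with a matched indicator might look like this:

```Kusto
// List events whose IP address matched the threat intelligence feed.
CommonSecurityLog
| where isnotempty(MaliciousIP)
| project TimeGenerated, MaliciousIP, MaliciousIPCountry, IndicatorThreatType, ThreatSeverity, ReportReferenceLink
```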
-### Additional enrichment fields
-
-|CommonSecurityLog field name |Description |
-|||
-|**OriginalLogSeverity** | Always empty, supported for integration with CiscoASA. <br>For details about log severity values, see the [LogSeverity](#logseverity) field. |
-|**RemoteIP** | The remote IP address. <br>This value is based on the [CommunicationDirection](#communicationdirection) field, if possible. |
-|**RemotePort** | The remote port. <br>This value is based on the [CommunicationDirection](#communicationdirection) field, if possible. |
-|**SimplifiedDeviceAction** | Simplifies the [DeviceAction](#deviceaction) value to a static set of values, while keeping the original value in the [DeviceAction](#deviceaction) field. <br>For example: `Denied` > `Deny`. |
-|**SourceSystem** | Always defined as **OpsManager**. |
--
-## Next steps
-
-For more information, see [Connect your external solution using Common Event Format](connect-common-event-format.md).
sentinel Monitor Automation Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/monitor-automation-health.md
For the **Automation rule run** status, you may see the following statuses:
- **Success**: rule executed successfully, triggering all actions.
- **Partial success**: rule executed and triggered at least one action, but some actions failed.
-- *Failure*: automation rule did not run any action due to one of the following reasons:
+- **Failure**: automation rule did not run any action due to one of the following reasons:
  - Conditions evaluation failed.
  - Conditions met, but the first action failed.
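To find runs that ended in these states, you can query the health table directly. The following is a minimal sketch that assumes the *SentinelHealth* table is enabled for your workspace:

```Kusto
// Surface recent automation rule runs that didn't fully succeed.
SentinelHealth
| where TimeGenerated > ago(7d)
| where OperationName == "Automation rule run"
| where Status != "Success"
| project TimeGenerated, SentinelResourceName, Status, Description
```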
For the **Playbook was triggered** status, you may see the following statuses:
| Error description | Suggested actions |
| --- | --- |
| **Could not add task: *\<TaskName>*.**<br>Incident/alert was not found. | Make sure the incident/alert exists and try again. |
+| **Could not add task: *\<TaskName>*.**<br>Incident already contains the maximum allowed number of tasks. | If this task is required, see if there are any tasks that can be removed or consolidated, then try again. |
| **Could not modify property: *\<PropertyName>*.**<br>Incident/alert was not found. | Make sure the incident/alert exists and try again. |
| **Could not modify property: *\<PropertyName>*.**<br>Too many requests, exceeding throttling limits. | |
| **Could not trigger playbook: *\<PlaybookName>*.**<br>Incident/alert was not found. | If the error occurred when trying to trigger a playbook on demand, make sure the incident/alert exists and try again. |
sentinel Collect Sap Hana Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/collect-sap-hana-audit-logs.md
This article explains how to collect audit logs from your SAP HANA database.
> [!IMPORTANT]
> Microsoft Sentinel SAP HANA support is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.

## Prerequisites

SAP HANA logs are sent over Syslog. Make sure that your AMA agent or your Log Analytics agent (legacy) is configured to collect Syslog files. For more information, see [Ingest syslog and CEF messages to Microsoft Sentinel with the Azure Monitor Agent](../connect-cef-syslog-ama.md).

## Collect SAP HANA audit logs

1. Make sure that the SAP HANA audit log trail is configured to use Syslog, as described in *SAP Note 0002624117*, which is accessible from the [SAP Launchpad support site](https://launchpad.support.sap.com/#/notes/0002624117). For more information, see:

   - [SAP HANA Audit Trail - Best Practice](https://help.sap.com/docs/SAP_HANA_PLATFORM/b3ee5778bc2e4a089d3299b82ec762a7/35eb4e567d53456088755b8131b7ed1d.html?version=2.0.03)
   - [Recommendations for Auditing](https://help.sap.com/viewer/742945a940f240f4a2a0e39f93d3e2d4/2.0.05/en-US/5c34ecd355e44aa9af3b3e6de4bbf5c1.html)
+ - [SAP HANA Security Guide for SAP HANA Platform](https://help.sap.com/docs/SAP_HANA_PLATFORM/b3ee5778bc2e4a089d3299b82ec762a7/4f7cde1125084ea3b8206038530e96ce.html)
-1. Check your operating system Syslog files for any relevant HANA database events.
+2. Check your operating system Syslog files for any relevant HANA database events.
-1. Sign into your HANA database operating system as a user with sudo privileges.
+3. Sign into your HANA database operating system as a user with sudo privileges.
-1. Install an agent on your machine and confirm that your machine is connected. For more information, see:
+4. Install an agent on your machine and confirm that your machine is connected. For more information, see:
   - [Azure Monitor Agent](/azure/azure-monitor/agents/azure-monitor-agent-manage?tabs=azure-portal)
   - [Log Analytics Agent](../../azure-monitor/agents/agent-linux.md) (legacy)
-1. Configure your agent to collect Syslog data. For more information, see:
+5. Configure your agent to collect Syslog data. For more information, see:
   - [Azure Monitor Agent](/azure/azure-monitor/agents/data-collection-syslog)
   - [Log Analytics Agent](/azure/azure-monitor/agents/data-sources-syslog) (legacy)

   > [!TIP]
   > Because the facilities where HANA database events are saved can change between different distributions, we recommend that you add all facilities. Check them against your Syslog logs, and then remove any that aren't relevant.
- >
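To see which facilities actually carry HANA events before trimming the list, a quick inventory query along these lines (a sketch, assuming the standard **Syslog** table) can help:

```Kusto
// Count ingested Syslog events by facility and reporting process.
Syslog
| summarize EventCount = count() by Facility, ProcessName
| sort by EventCount desc
```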
## Verify your configuration
-In Microsoft Sentinel, check to confirm that HANA database events are now shown in the ingested logs. For example, run the following query:
+Use the following steps in both Microsoft Sentinel and your SAP HANA database to verify that your system is configured as expected.
+
+### Microsoft Sentinel
+In Microsoft Sentinel's **Logs** page, check to confirm that HANA database events are now shown in the ingested logs. For example, run the following query:
```Kusto
//generated function structure for custom log Syslog
TimeGenerated = column_ifexists('TimeGenerated', '1000-01-01T00:00:00Z')
T_Syslog | union isfuzzy= true (D_Syslog | where TimeGenerated != '1000-01-01T00:00:00Z')
```
+### SAP HANA
+
+In your SAP HANA database, check your configured audit policies. For more information on the required SQL statements, see [SAP Note 3016478](https://me.sap.com/notes/3016478/E).
-## Add analytics rules for SAP HANA
+## Add analytics rules for SAP HANA in Microsoft Sentinel
Use the following built-in analytics rules to have Microsoft Sentinel start triggering alerts on related SAP HANA activity:
For more information, see [Microsoft Sentinel solution for SAP® applications: s
## Related content
+Learn more about the Microsoft Sentinel Solution for SAP BTP:
+
+- [Deploy Microsoft Sentinel solution for SAP® applications](deploy-sap-btp-solution.md)
+- [Microsoft Sentinel Solution for SAP BTP: security content reference](sap-btp-security-content.md)
+
Learn more about the Microsoft Sentinel solution for SAP® applications:

- [Deploy Microsoft Sentinel solution for SAP® applications](deployment-overview.md)
Troubleshooting:

- [Troubleshoot your Microsoft Sentinel solution for SAP® applications deployment](sap-deploy-troubleshoot.md)
+- [HANA audit log is not generated in SYSLOG | SAP note](https://me.sap.com/notes/3305033/E)
+- [How to Redirect syslog Auditing for HANA to an alternate location | SAP note](https://me.sap.com/notes/2386609)
Reference files:
- [Systemconfig.ini file reference](reference-systemconfig.md)

For more information, see [Microsoft Sentinel solutions](../sentinel-solutions.md).
sentinel Skill Up Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/skill-up-resources.md
Title: Microsoft Sentinel skill-up training description: This article walks you through a level 400 training to help you skill up on Microsoft Sentinel. The training comprises 21 modules that present relevant product documentation, blog posts, and other resources.-+ Last updated 05/16/2024-+
storage Data Protection Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-protection-overview.md
Azure Storage provides data protection for Blob Storage and Azure Data Lake Storage Gen2 to help you to prepare for scenarios where you need to recover data that has been deleted or overwritten. It's important to think about how to best protect your data before an incident occurs that could compromise it. This guide can help you decide in advance which data protection features your scenario requires, and how to implement them. If you should need to recover data that has been deleted or overwritten, this overview also provides guidance on how to proceed, based on your scenario.
-In the Azure Storage documentation, *data protection* refers to strategies for protecting the storage account and data within it from being deleted or modified, or for restoring data after it has been deleted or modified. Azure Storage also offers options for *disaster recovery*, including multiple levels of redundancy to protect your data from service outages due to hardware problems or natural disasters, and customer-managed failover in the event that the data center in the primary region becomes unavailable. For more information about how your data is protected from service outages, see [Disaster recovery](#disaster-recovery).
+In the Azure Storage documentation, *data protection* refers to strategies for protecting the storage account and data within it from being deleted or modified, or for restoring data after it has been deleted or modified. Azure Storage also offers options for *disaster recovery*, including multiple levels of redundancy to protect your data from service outages due to hardware problems or natural disasters. Customer-managed (unplanned) failover is another disaster recovery option that allows you to fail over to a secondary region if the primary region becomes unavailable. For more information about how your data is protected from service outages, see [Disaster recovery](#disaster-recovery).
## Recommendations for basic data protection
The following table summarizes the cost considerations for the various data prot
Azure Storage always maintains multiple copies of your data so that it's protected from planned and unplanned events, including transient hardware failures, network or power outages, and massive natural disasters. Redundancy ensures that your storage account meets its availability and durability targets even in the face of failures. For more information about how to configure your storage account for high availability, see [Azure Storage redundancy](../common/storage-redundancy.md).
-If a failure occurs in a data center, if your storage account is redundant across two geographical regions (geo-redundant), then you have the option to fail over your account from the primary region to the secondary region. For more information, see [Disaster recovery and storage account failover](../common/storage-disaster-recovery-guidance.md).
+If your storage account is configured for geo-redundancy, you have the option to initiate an unplanned failover from the primary to the secondary region during a data center failure. For more information, see [Disaster recovery planning and failover](../common/storage-disaster-recovery-guidance.md#customer-managed-unplanned-failover).
-Customer-managed failover isn't currently supported for storage accounts with a hierarchical namespace enabled. For more information, see [Blob storage features available in Azure Data Lake Storage Gen2](./storage-feature-support-in-storage-accounts.md).
+Customer-managed failover currently supports storage accounts with a hierarchical namespace enabled in preview status only. For more information, see [Disaster recovery planning and failover](../common/storage-disaster-recovery-guidance.md#plan-for-failover).
## Next steps
storage Sas Service Create Dotnet Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-dotnet-container.md
Previously updated : 06/22/2023 Last updated : 08/05/2024 ms.devlang: csharp
storage Sas Service Create Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-dotnet.md
Previously updated : 06/22/2023 Last updated : 08/05/2024 ms.devlang: csharp
storage Sas Service Create Java Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-java-container.md
Previously updated : 06/23/2023 Last updated : 08/05/2024 ms.devlang: java
storage Sas Service Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-java.md
Previously updated : 06/23/2023 Last updated : 08/05/2024 ms.devlang: java
storage Sas Service Create Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-javascript.md
Previously updated : 01/19/2023 Last updated : 08/05/2024 ms.devlang: javascript
storage Sas Service Create Python Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-python-container.md
Previously updated : 06/09/2023 Last updated : 08/05/2024 ms.devlang: python
storage Sas Service Create Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-python.md
Previously updated : 06/09/2023 Last updated : 08/05/2024 ms.devlang: python
storage Secure File Transfer Protocol Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-known-issues.md
To learn more, see [SFTP permission model](secure-file-transfer-protocol-support
- Maximum file upload size via the SFTP endpoint is 500 GB. -- Customer-managed account failover is supported at the preview level in select regions. For more information, see [Azure storage disaster recovery planning and failover](../common/storage-disaster-recovery-guidance.md#azure-data-lake-storage-gen2).
+- Customer-managed account failover is supported at the preview level in select regions. For more information, see [Azure storage disaster recovery planning and failover](../common/storage-disaster-recovery-guidance.md#hierarchical-namespace-hns).
- To change the storage account's redundancy/replication settings, SFTP must be disabled. SFTP may be re-enabled once the conversion has completed.
storage Snapshots Manage Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/snapshots-manage-dotnet.md
Previously updated : 08/27/2020 Last updated : 08/05/2024 ms.devlang: csharp
storage Storage Blob Account Delegation Sas Create Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-account-delegation-sas-create-javascript.md
Previously updated : 09/21/2023 Last updated : 08/05/2024
storage Storage Blob Append https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-append.md
description: Learn how to append data to an append blob in Azure Storage by usin
Previously updated : 09/01/2023 Last updated : 08/05/2024 ms.devlang: csharp
storage Storage Blob Client Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-client-management.md
Previously updated : 02/08/2023 Last updated : 08/05/2024 ms.devlang: csharp # ms.devlang: csharp, java, javascript, python
storage Storage Blob Container Create Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create-go.md
Previously updated : 05/22/2024 Last updated : 08/05/2024 ms.devlang: golang
storage Storage Blob Container Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create-java.md
Previously updated : 08/02/2023 Last updated : 08/05/2024 ms.devlang: java
storage Storage Blob Container Create Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create-javascript.md
Previously updated : 11/30/2022 Last updated : 08/05/2024 ms.devlang: javascript
storage Storage Blob Container Create Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create-python.md
Previously updated : 12/07/2023 Last updated : 08/05/2024 ms.devlang: python
storage Storage Blob Container Create Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create-typescript.md
Previously updated : 03/21/2023 Last updated : 08/05/2024 ms.devlang: typescript
storage Storage Blob Container Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create.md
Previously updated : 07/25/2022 Last updated : 08/05/2024 ms.devlang: csharp
storage Storage Blob Container Delete Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete-go.md
Previously updated : 05/22/2024 Last updated : 08/05/2024 ms.devlang: golang
storage Storage Blob Container Delete Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete-java.md
Previously updated : 08/02/2023 Last updated : 08/05/2024 ms.devlang: java
storage Storage Blob Container Delete Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete-javascript.md
Previously updated : 11/30/2022 Last updated : 08/05/2024 ms.devlang: javascript
storage Storage Blob Container Delete Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete-python.md
Previously updated : 12/20/2023 Last updated : 08/05/2024 ms.devlang: python
storage Storage Blob Container Delete Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete-typescript.md
Previously updated : 03/21/2023 Last updated : 08/05/2024 ms.devlang: typescript
storage Storage Blob Container Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete.md
Previously updated : 03/28/2022 Last updated : 08/05/2024 ms.devlang: csharp
storage Storage Blob Container Lease Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease-java.md
Previously updated : 08/02/2023 Last updated : 08/05/2024 ms.devlang: java
storage Storage Blob Container Lease Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease-javascript.md
Previously updated : 05/01/2023 Last updated : 08/05/2024 ms.devlang: javascript
storage Storage Blob Container Lease Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease-python.md
Previously updated : 12/19/2023 Last updated : 08/05/2024 ms.devlang: python
storage Storage Blob Container Lease Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease-typescript.md
Previously updated : 05/01/2023 Last updated : 08/05/2024 ms.devlang: typescript
storage Storage Blob Container Lease https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease.md
Previously updated : 04/10/2023 Last updated : 08/05/2024 ms.devlang: csharp
storage Storage Blob Container Properties Metadata Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata-go.md
Previously updated : 05/22/2024 Last updated : 08/05/2024 ms.devlang: golang
storage Storage Blob Container Properties Metadata Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata-java.md
Previously updated : 08/02/2023 Last updated : 08/05/2024 ms.devlang: java
storage Storage Blob Container Properties Metadata Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata-javascript.md
Previously updated : 11/30/2022 Last updated : 08/05/2024 ms.devlang: javascript
storage Storage Blob Container Properties Metadata Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata-python.md
Previously updated : 12/20/2023 Last updated : 08/05/2024 ms.devlang: python
storage Storage Blob Container Properties Metadata Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata-typescript.md
Previously updated : 03/21/2023 Last updated : 08/05/2024 ms.devlang: typescript
storage Storage Blob Container Properties Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata.md
Previously updated : 03/28/2022 Last updated : 08/05/2024 ms.devlang: csharp
storage Storage Blob Container User Delegation Sas Create Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-user-delegation-sas-create-dotnet.md
Previously updated : 06/22/2023 Last updated : 08/05/2024 ms.devlang: csharp
storage Storage Blob Container User Delegation Sas Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-user-delegation-sas-create-java.md
Previously updated : 06/12/2023 Last updated : 08/05/2024 ms.devlang: java
storage Storage Blob Container User Delegation Sas Create Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-user-delegation-sas-create-python.md
Previously updated : 06/09/2023 Last updated : 08/05/2024 ms.devlang: python
storage Storage Blob Containers List Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list-go.md
Previously updated : 05/22/2024 Last updated : 08/05/2024 ms.devlang: golang
storage Storage Blob Containers List Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list-java.md
Previously updated : 10/23/2023 Last updated : 08/05/2024 ms.devlang: java
storage Storage Blob Containers List Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list-javascript.md
Previously updated : 11/30/2022 Last updated : 08/05/2024 ms.devlang: javascript
storage Storage Blob Containers List Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list-python.md
Previously updated : 12/07/2023 Last updated : 08/05/2024 ms.devlang: python
storage Storage Blob Containers List Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list-typescript.md
Previously updated : 03/21/2023 Last updated : 08/05/2024 ms.devlang: typescript
storage Storage Blob Containers List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list.md
Previously updated : 10/23/2023 Last updated : 08/05/2024 ms.devlang: csharp
storage Storage Blob Copy Async Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-async-dotnet.md
description: Learn how to copy a blob with asynchronous scheduling in Azure Stor
Previously updated : 04/11/2023 Last updated : 08/05/2024 ms.devlang: csharp
storage Storage Blob Copy Async Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-async-go.md
description: Learn how to copy a blob with asynchronous scheduling in Azure Stor
Previously updated : 07/25/2024 Last updated : 08/05/2024 ms.devlang: golang
storage Storage Blob Copy Async Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-async-java.md
description: Learn how to copy a blob with asynchronous scheduling in Azure Stor
Previously updated : 08/02/2023 Last updated : 08/05/2024 ms.devlang: java
storage Storage Blob Copy Async Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-async-javascript.md
description: Learn how to copy a blob with asynchronous scheduling in Azure Stor
Previously updated : 05/08/2023 Last updated : 08/05/2024 ms.devlang: javascript
storage Storage Blob Copy Async Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-async-python.md
description: Learn how to copy a blob with asynchronous scheduling in Azure Stor
Previously updated : 08/02/2023 Last updated : 08/05/2024 ms.devlang: python
storage Storage Blob Copy Async Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-async-typescript.md
description: Learn how to copy a blob with asynchronous scheduling in Azure Stor
Previously updated : 05/08/2023 Last updated : 08/05/2024 ms.devlang: typescript
storage Storage Blob Copy Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-go.md
Previously updated : 07/25/2024 Last updated : 08/05/2024 ms.devlang: golang
storage Storage Blob Copy Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-java.md
Previously updated : 04/18/2023 Last updated : 08/05/2024 ms.devlang: java
storage Storage Blob Copy Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-javascript.md
description: Learn how to copy a blob in Azure Storage by using the JavaScript c
Previously updated : 05/08/2023 Last updated : 08/05/2024 ms.devlang: javascript
storage Storage Blob Copy Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-python.md
Previously updated : 04/28/2023 Last updated : 08/05/2024 ms.devlang: python
storage Storage Blob Copy Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-typescript.md
description: Learn how to copy a blob with TypeScript in Azure Storage by using
Previously updated : 05/08/2023 Last updated : 08/05/2024 ms.devlang: typescript
storage Storage Blob Copy Url Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-url-dotnet.md
description: Learn how to copy a blob from a source object URL in Azure Storage
Previously updated : 04/11/2023 Last updated : 08/05/2024 ms.devlang: csharp
storage Storage Blob Copy Url Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-url-go.md
description: Learn how to copy a blob from a source object URL in Azure Storage
Previously updated : 07/25/2024 Last updated : 08/05/2024 ms.devlang: golang
storage Storage Blob Copy Url Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-url-java.md
description: Learn how to copy a blob from a source object URL in Azure Storage
Previously updated : 08/02/2023 Last updated : 08/05/2024 ms.devlang: java
storage Storage Blob Copy Url Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-url-javascript.md
description: Learn how to copy a blob from a source object URL in Azure Storage
Previously updated : 05/08/2023 Last updated : 08/05/2024 ms.devlang: javascript
storage Storage Blob Copy Url Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-url-python.md
description: Learn how to copy a blob from a source object URL in Azure Storage
Previously updated : 11/20/2023 Last updated : 08/05/2024 ms.devlang: python
storage Storage Blob Copy Url Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-url-typescript.md
description: Learn how to copy a blob from a source object URL in Azure Storage
Previously updated : 05/08/2023 Last updated : 08/05/2024 ms.devlang: typescript
storage Storage Blob Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy.md
description: Learn how to copy blobs in Azure Storage using the .NET client libr
Previously updated : 04/14/2023 Last updated : 08/05/2024 ms.devlang: csharp
storage Storage Blob Create User Delegation Sas Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-create-user-delegation-sas-javascript.md
Previously updated : 07/15/2022 Last updated : 08/05/2024
storage Storage Blob Delete Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete-go.md
Previously updated : 05/22/2024 Last updated : 08/05/2024 ms.devlang: golang
storage Storage Blob Delete Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete-java.md
Previously updated : 08/02/2023 Last updated : 08/05/2024 ms.devlang: java
storage Storage Blob Delete Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete-javascript.md
description: Learn how to delete and restore a blob in your Azure Storage accoun
Previously updated : 11/30/2022 Last updated : 08/05/2024 ms.devlang: javascript
storage Storage Blob Delete Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete-python.md
Previously updated : 11/20/2023 Last updated : 08/05/2024 ms.devlang: python
storage Storage Blob Delete Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete-typescript.md
description: Learn how to delete and restore a blob with TypeScript in your Azur
Previously updated : 03/21/2023 Last updated : 08/05/2024 ms.devlang: typescript
storage Storage Blob Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete.md
Previously updated : 05/11/2023 Last updated : 08/05/2024 ms.devlang: csharp
storage Storage Blob Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-dotnet-get-started.md
Previously updated : 07/12/2023 Last updated : 08/05/2024 ms.devlang: csharp
storage Storage Blob Download Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download-go.md
Previously updated : 05/22/2024 Last updated : 08/05/2024 ms.devlang: golang
storage Storage Blob Download Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download-java.md
Previously updated : 09/08/2023 Last updated : 08/05/2024 ms.devlang: java
storage Storage Blob Download Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download-javascript.md
Previously updated : 04/21/2023 Last updated : 08/05/2024 ms.devlang: javascript
storage Storage Blob Download Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download-python.md
Previously updated : 11/20/2023 Last updated : 08/05/2024 ms.devlang: python
storage Storage Blob Download Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download-typescript.md
Previously updated : 06/21/2023 Last updated : 08/05/2024 ms.devlang: typescript
storage Storage Blob Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download.md
Previously updated : 05/23/2023 Last updated : 08/05/2024 ms.devlang: csharp
storage Storage Blob Get Url Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-get-url-javascript.md
description: Learn how to get a container or blob URL in Azure Storage by using
Previously updated : 09/13/2022 Last updated : 08/05/2024 ms.devlang: javascript
storage Storage Blob Get Url Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-get-url-typescript.md
description: Learn how to get a container or blob URL with TypeScript in Azure S
Previously updated : 03/21/2023 Last updated : 08/05/2024 ms.devlang: typescript
storage Storage Blob Go Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-go-get-started.md
Previously updated : 06/26/2024 Last updated : 08/05/2024 ms.devlang: golang
storage Storage Blob Java Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-java-get-started.md
Previously updated : 01/19/2024 Last updated : 08/05/2024
storage Storage Blob Javascript Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-javascript-get-started.md
Previously updated : 11/30/2022 Last updated : 08/05/2024
storage Storage Blob Lease Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-lease-java.md
Previously updated : 08/02/2023 Last updated : 08/05/2024 ms.devlang: java
storage Storage Blob Lease Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-lease-javascript.md
Previously updated : 05/01/2023 Last updated : 08/05/2024 ms.devlang: javascript
storage Storage Blob Lease Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-lease-python.md
Previously updated : 12/19/2023 Last updated : 08/05/2024 ms.devlang: python
storage Storage Blob Lease Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-lease-typescript.md
Previously updated : 05/01/2023 Last updated : 08/05/2024 ms.devlang: typescript
storage Storage Blob Lease https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-lease.md
Previously updated : 04/10/2023 Last updated : 08/05/2024 ms.devlang: csharp
storage Storage Blob Object Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-object-model.md
Previously updated : 03/07/2023 Last updated : 08/05/2024
storage Storage Blob Properties Metadata Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata-go.md
Previously updated : 05/22/2024 Last updated : 08/05/2024 ms.devlang: golang
storage Storage Blob Properties Metadata Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata-java.md
Previously updated : 08/02/2023 Last updated : 08/05/2024 ms.devlang: java
storage Storage Blob Properties Metadata Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata-javascript.md
description: Learn how to set and retrieve system properties and store custom me
Previously updated : 11/30/2022 Last updated : 08/05/2024 ms.devlang: javascript
storage Storage Blob Properties Metadata Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata-python.md
Previously updated : 11/29/2023 Last updated : 08/05/2024 ms.devlang: python
storage Storage Blob Properties Metadata Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata-typescript.md
description: Learn how to set and retrieve system properties and store custom me
Previously updated : 03/21/2023 Last updated : 08/05/2024 ms.devlang: typescript
storage Storage Blob Properties Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata.md
description: Learn how to set and retrieve system properties and store custom me
Previously updated : 03/28/2022 Last updated : 08/05/2024 ms.devlang: csharp
storage Storage Blob Python Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-python-get-started.md
Previously updated : 11/14/2023 Last updated : 08/05/2024 ai-usage: ai-assisted
storage Storage Blob Query Endpoint Srp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-query-endpoint-srp.md
Previously updated : 06/07/2023 Last updated : 08/05/2024
storage Storage Blob Tags Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-go.md
Previously updated : 06/26/2024 Last updated : 08/05/2024 ms.devlang: golang
storage Storage Blob Tags Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-java.md
Previously updated : 02/02/2024 Last updated : 08/05/2024 ms.devlang: java
storage Storage Blob Tags Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-javascript.md
description: Learn how to categorize, manage, and query for blob objects by usin
Previously updated : 02/02/2024 Last updated : 08/05/2024 ms.devlang: javascript
storage Storage Blob Tags Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-python.md
Previously updated : 02/02/2024 Last updated : 08/05/2024 ms.devlang: python
storage Storage Blob Tags Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-typescript.md
description: Learn how to categorize, manage, and query for blob objects with Ty
Previously updated : 02/02/2024 Last updated : 08/05/2024 ms.devlang: typescript
storage Storage Blob Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags.md
description: Learn how to categorize, manage, and query for blob objects by usin
Previously updated : 03/28/2022 Last updated : 08/05/2024 ms.devlang: csharp
storage Storage Blob Typescript Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-typescript-get-started.md
Previously updated : 03/21/2023 Last updated : 08/05/2024
storage Storage Blob Upload Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload-go.md
Previously updated : 05/22/2024 Last updated : 08/05/2024 ms.devlang: golang
storage Storage Blob Upload Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload-java.md
Previously updated : 08/02/2023 Last updated : 08/05/2024 ms.devlang: java
storage Storage Blob Upload Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload-javascript.md
description: Learn how to upload a blob to your Azure Storage account using the
Previously updated : 06/20/2023 Last updated : 08/05/2024 ms.devlang: javascript
storage Storage Blob Upload Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload-python.md
Previously updated : 11/14/2023 Last updated : 08/05/2024 ms.devlang: python
storage Storage Blob Upload Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload-typescript.md
description: Learn how to upload a blob with TypeScript to your Azure Storage ac
Previously updated : 06/21/2023 Last updated : 08/05/2024 ms.devlang: typescript
storage Storage Blob Upload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload.md
description: Learn how to upload a blob to your Azure Storage account using the
Previously updated : 08/28/2023 Last updated : 08/05/2024 ms.devlang: csharp
storage Storage Blob Use Access Tier Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-dotnet.md
Previously updated : 07/03/2023 Last updated : 08/05/2024 ms.devlang: csharp
storage Storage Blob Use Access Tier Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-java.md
Previously updated : 08/02/2023 Last updated : 08/05/2024 ms.devlang: java
storage Storage Blob Use Access Tier Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-javascript.md
Previously updated : 06/28/2023 Last updated : 08/05/2024 ms.devlang: javascript
storage Storage Blob Use Access Tier Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-python.md
Previously updated : 12/20/2023 Last updated : 08/05/2024 ms.devlang: python
storage Storage Blob Use Access Tier Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-typescript.md
Previously updated : 06/28/2023 Last updated : 08/05/2024 ms.devlang: typescript
storage Storage Blob User Delegation Sas Create Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-user-delegation-sas-create-dotnet.md
Previously updated : 06/22/2023 Last updated : 08/05/2024 ms.devlang: csharp
storage Storage Blob User Delegation Sas Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-user-delegation-sas-create-java.md
Previously updated : 06/12/2023 Last updated : 08/05/2024 ms.devlang: java
storage Storage Blob User Delegation Sas Create Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-user-delegation-sas-create-python.md
Previously updated : 06/06/2023 Last updated : 08/05/2024 ms.devlang: python
storage Storage Blobs List Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list-go.md
Previously updated : 05/01/2024 Last updated : 08/05/2024 ms.devlang: golang
storage Storage Blobs List Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list-java.md
Previously updated : 08/16/2023 Last updated : 08/05/2024 ms.devlang: java
storage Storage Blobs List Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list-javascript.md
Previously updated : 08/16/2023 Last updated : 08/05/2024 ms.devlang: javascript
storage Storage Blobs List Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list-python.md
Previously updated : 11/20/2023 Last updated : 08/05/2024 ms.devlang: python
storage Storage Blobs List Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list-typescript.md
Previously updated : 08/16/2023 Last updated : 08/05/2024 ms.devlang: typescript
storage Storage Blobs List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list.md
Previously updated : 08/16/2023 Last updated : 08/05/2024 ms.devlang: csharp
storage Storage Blobs Tune Upload Download Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-tune-upload-download-java.md
Previously updated : 09/22/2023 Last updated : 08/05/2024 ms.devlang: java
storage Storage Blobs Tune Upload Download Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-tune-upload-download-javascript.md
Previously updated : 06/04/2024 Last updated : 08/05/2024 ms.devlang: javascript
storage Storage Blobs Tune Upload Download Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-tune-upload-download-python.md
Previously updated : 07/07/2023 Last updated : 08/05/2024 ms.devlang: python
storage Storage Blobs Tune Upload Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-tune-upload-download.md
Previously updated : 12/09/2022 Last updated : 08/05/2024 ms.devlang: csharp
storage Storage Feature Support In Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-feature-support-in-storage-accounts.md
The following table describes whether a feature is supported in a standard gener
| [Blobfuse](storage-how-to-mount-container-linux.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Change feed](storage-blob-change-feed.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
| [Custom domains](storage-custom-domain-name.md) | &#x2705; | &#x1F7E6; | &#x1F7E6; | &#x1F7E6; |
-| [Customer-managed account failover](../common/storage-disaster-recovery-guidance.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x1F7E6; | &nbsp;&#x2B24; | &#x1F7E6; |
+| [Customer-managed planned failover (preview)](../common/storage-disaster-recovery-guidance.md#customer-managed-planned-failover-preview) | &#x1F7E6; | &#x1F7E6; | &nbsp;&#x2B24; | &nbsp;&#x1F7E6; |
+| [Customer-managed (unplanned) failover](../common/storage-disaster-recovery-guidance.md#customer-managed-unplanned-failover) | &#x2705; | &#x1F7E6; | &nbsp;&#x2B24; | &nbsp;&#x1F7E6; |
| [Customer-managed keys with key vault in the same tenant](../common/customer-managed-keys-overview.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Customer-managed keys with key vault in a different tenant (cross-tenant)](../common/customer-managed-keys-overview.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
| [Customer-provided keys](encryption-customer-provided-keys.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
The following table describes whether a feature is supported in a premium block
| [Blobfuse](storage-how-to-mount-container-linux.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Change feed](storage-blob-change-feed.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
| [Custom domains](storage-custom-domain-name.md) | &#x2705; | &#x1F7E6; | &#x1F7E6; | &#x1F7E6; |
-| [Customer-managed account failover](../common/storage-disaster-recovery-guidance.md?toc=/azure/storage/blobs/toc.json) | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
+| [Customer-managed planned failover](../common/storage-failover-customer-managed-planned.md?toc=/azure/storage/blobs/toc.json) | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
+| [Customer-managed unplanned failover](../common/storage-failover-customer-managed-unplanned.md?toc=/azure/storage/blobs/toc.json) | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
| [Customer-managed keys with key vault in the same tenant](../common/customer-managed-keys-overview.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Customer-managed keys with key vault in a different tenant (cross-tenant)](../common/customer-managed-keys-overview.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
| [Customer-provided keys](encryption-customer-provided-keys.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
storage Storage Retry Policy Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-retry-policy-go.md
Previously updated : 06/26/2024 Last updated : 08/05/2024
storage Storage Retry Policy Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-retry-policy-java.md
Previously updated : 05/03/2024 Last updated : 08/05/2024
storage Storage Retry Policy Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-retry-policy-javascript.md
Previously updated : 05/22/2024 Last updated : 08/05/2024
storage Storage Retry Policy Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-retry-policy-python.md
Previously updated : 04/29/2024 Last updated : 08/05/2024
storage Storage Retry Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-retry-policy.md
Previously updated : 04/29/2024 Last updated : 08/05/2024
storage Versions Manage Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/versions-manage-dotnet.md
Title: Create and list blob versions in .NET description: Learn how to use the .NET client library to create a previous version of a blob.-+ -+ Previously updated : 02/14/2023 Last updated : 08/05/2024 ms.devlang: csharp
storage Last Sync Time Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/last-sync-time-get.md
# Check the Last Sync Time property for a storage account
-When you configure a storage account, you can specify that your data is copied to a secondary region that is hundreds of miles from the primary region. Geo-replication offers durability for your data in the event of a significant outage in the primary region, such as a natural disaster. If you additionally enable read access to the secondary region, your data remains available for read operations if the primary region becomes unavailable. You can design your application to switch seamlessly to reading from the secondary region if the primary region is unresponsive.
+Geo-replication offers durability for your data, even during natural disasters and other significant outages in your primary region. When you configure a storage account, you can choose to have your data copied to a secondary region that is hundreds of miles from the primary region. In addition, you can choose to enable read access to the secondary region, ensuring that your data remains available for read operations if the primary region becomes unavailable. This approach enables you to [design your highly available application](../blobs/storage-create-geo-redundant-storage.md) to switch seamlessly to reading from the secondary region if the primary region is unresponsive.
Geo-redundant storage (GRS) and geo-zone-redundant storage (GZRS) both replicate your data asynchronously to a secondary region. For read access to the secondary region, enable read-access geo-redundant storage (RA-GRS) or read-access geo-zone-redundant storage (RA-GZRS). For more information about the various options for redundancy offered by Azure Storage, see [Azure Storage redundancy](storage-redundancy.md).
This article describes how to check the **Last Sync Time** property for your sto
## About the Last Sync Time property
-Because geo-replication is asynchronous, it is possible that data written to the primary region has not yet been written to the secondary region at the time an outage occurs. The **Last Sync Time** property indicates the most recent time that data from the primary region is guaranteed to have been written to the secondary region. For accounts that have a hierarchical namespace, the same **Last Sync Time** property also applies to the metadata managed by the hierarchical namespace, including ACLs. All data and metadata written prior to the last sync time is available on the secondary, while data and metadata written after the last sync time may not have been written to the secondary, and may be lost. Use this property in the event of an outage to estimate the amount of data loss you may incur by initiating an account failover.
+Because geo-replication is asynchronous, it's possible that data written to the primary region hasn't yet been written to the secondary region at the time an outage occurs. The **Last Sync Time** property indicates the most recent time that data from the primary region is guaranteed to have been written to the secondary region. For accounts that have a hierarchical namespace, the same **Last Sync Time** property also applies to the metadata managed by the hierarchical namespace, including ACLs. All data and metadata written prior to the last sync time is available on the secondary, while data and metadata written after the last sync time may not have been written to the secondary, and may be lost. Use this property in the event of an outage to estimate the amount of data loss you may incur by initiating a customer-managed (unplanned) failover.
The **Last Sync Time** property is a GMT date/time value.
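For reference, a minimal Azure PowerShell sketch for retrieving this value follows; the resource group and account names are placeholders, and the account must be configured for geo-redundancy:

```powershell
# Placeholders only; supply your own resource group and account names.
$account = Get-AzStorageAccount `
    -ResourceGroupName "<resource-group>" `
    -Name "<storage-account>" `
    -IncludeGeoReplicationStats

# LastSyncTime is reported as a GMT date/time value.
$account.GeoReplicationStats.LastSyncTime
```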
storage Redundancy Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/redundancy-migration.md
<!-- Initial: 81 (3717/68)
-Current: 98 (3765/4)
+Current: 98 (3761/0)
-->
-# Change the redundancy configuration for a storage account
+# Change how a storage account is replicated
Azure Storage always stores multiple copies of your data to protect it in the face of both planned and unplanned events. These events include transient hardware failures, network or power outages, and massive natural disasters. Data redundancy ensures that your storage account meets the [Service-Level Agreement (SLA) for Azure Storage](https://azure.microsoft.com/support/legal/sla/storage/), even in the face of failures.
-This article describes the process of changing replication setting(s) for an existing storage account.
+This article describes the process of changing replication settings for an existing storage account.
## Options for changing the replication type
Set-AzStorageAccount -ResourceGroupName <resource_group> `
    -Name <storage_account> `
    -SkuName <sku>
```
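For illustration, a conversion to geo-redundant storage might look like the following sketch; the resource group and account names are hypothetical:

```powershell
# Hypothetical names; Standard_GRS is one of the valid -SkuName values.
Set-AzStorageAccount -ResourceGroupName "storage-rg" `
    -Name "mystorageacct" `
    -SkuName "Standard_GRS"
```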
+<!--
+You can also add or remove zone redundancy to your storage account. To change between locally redundant and zone-redundant storage with PowerShell, call the [Start-AzStorageAccountMigration](/powershell/module/az.storage/start-azstorageaccountmigration) command and specify the `-TargetSku` parameter:
+
+```powershell
+Start-AzStorageAccountMigration
+ -AccountName <String>
+ -ResourceGroupName <String>
+ -TargetSku <String>
+ -AsJob
+```
+
+To track the current migration status of the conversion initiated on your storage account, call the [Get-AzStorageAccountMigration](/powershell/module/az.storage/get-azstorageaccountmigration) cmdlet:
+
+```powershell
+Get-AzStorageAccountMigration
+ -AccountName <String>
+ -ResourceGroupName <String>
+```
+-->
# [Azure CLI](#tab/azure-cli)
az storage account update \
    --sku <sku>
```
+<!--
+You can also add or remove zone redundancy to your storage account. To change between locally redundant and zone-redundant storage with Azure CLI, call the [az storage account migration start](/cli/azure/storage/account/migration#az-storage-account-migration-start) command and specify the `--sku` parameter:
+
+```azurecli-interactive
+az storage account migration start \
+  --account-name <string> \
+  -g <string> \
+ --sku <string> \
+ --no-wait
+```
+
+To track the current migration status of the conversion initiated on your storage account, use the [az storage account migration show](/cli/azure/storage/account/migration#az-storage-account-migration-show) command:
+
+```azurecli-interactive
+az storage account migration show \
+ --account-name <string> \
+  -g <string> \
+ -n "default"
+```
+-->
+ ### Perform a conversion
Get-AzStorageAccountMigration
# [Azure CLI](#tab/azure-cli)
-To track the current migration status of the conversion initiated on your storage account, call the [Get-AzStorageAccountMigration](/powershell/module/az.storage/get-azstorageaccountmigration) cmdlet:
+To track the current migration status of the conversion initiated on your storage account, use the [az storage account migration show](/cli/azure/storage/account/migration#az-storage-account-migration-show) command:
-```powershell
-Get-AzStorageAccountMigration
- -AccountName <String>
- -ResourceGroupName <String>
+```azurecli-interactive
+az storage account migration show \
+ --account-name <string> \
+  -g <string> \
+ -n "default"
```
storage Storage Account Sas Create Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-sas-create-dotnet.md
Previously updated : 09/21/2023 Last updated : 08/05/2024
storage Storage Account Sas Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-sas-create-java.md
Previously updated : 09/21/2023 Last updated : 08/05/2024
storage Storage Account Sas Create Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-sas-create-python.md
Previously updated : 09/21/2023 Last updated : 08/05/2024
storage Storage Disaster Recovery Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-disaster-recovery-guidance.md
Previously updated : 01/11/2024 Last updated : 08/05/2024
+<!--
+Initial: 83 (3428/69)
+Current: 99 (3694/0)
+-->
+ # Azure storage disaster recovery planning and failover
-Microsoft strives to ensure that Azure services are always available. However, unplanned service outages may occur. Key components of a good disaster recovery plan include strategies for:
+Microsoft strives to ensure that Azure services are always available. However, unplanned service outages might occasionally occur. Key components of a good disaster recovery plan include strategies for:
- [Data protection](../blobs/data-protection-overview.md)
- [Backup and restore](../../backup/index.yml)
- [Data redundancy](storage-redundancy.md)
-- [Failover](#plan-for-storage-account-failover)
+- [Failover](#plan-for-failover)
- [Designing applications for high availability](#design-for-high-availability)
-This article focuses on failover for globally redundant storage accounts (GRS, GZRS, and RA-GZRS), and how to design your applications to be highly available if there's an outage and subsequent failover.
+This article describes the options available for globally redundant storage accounts, and provides recommendations for developing highly available applications and testing your disaster recovery plan.
## Choose the right redundancy option
-Azure Storage maintains multiple copies of your storage account to ensure durability and high availability. Which redundancy option you choose for your account depends on the degree of resiliency you need for your applications.
+Azure Storage maintains multiple copies of your storage account to ensure that availability and durability targets are met, even in the face of failures. The way in which data is replicated provides differing levels of protection. Each option offers its own benefits, so the option you choose depends upon the degree of resiliency your applications require.
+
+Locally redundant storage (LRS), the lowest-cost redundancy option, automatically stores and replicates three copies of your storage account within a single datacenter. Although LRS protects your data against server rack and drive failures, it doesn't account for disasters such as fire or flooding within a datacenter. In the face of such disasters, all replicas of a storage account configured to use LRS might be lost or unrecoverable.
+
+By comparison, zone-redundant storage (ZRS) stores a copy of your storage account in each of three separate availability zones within the same region. For more information about availability zones, see [Azure availability zones](../../availability-zones/az-overview.md).
+
+<!--Recovery of a single copy of a storage account occurs automatically with both LRS and ZRS.-->
+
+### Geo-redundant storage and failover
+
+Geo-redundant storage (GRS), geo-zone-redundant storage (GZRS), and read-access geo-zone-redundant storage (RA-GZRS) are examples of globally redundant storage options. When a storage account is configured to use GRS, GZRS, or RA-GZRS, Azure copies your data asynchronously to a secondary geographic region located hundreds, or even thousands, of miles away. This level of redundancy allows you to recover your data if there's an outage throughout the entire primary region.
+
+Unlike LRS and ZRS, globally redundant storage also provides support for an unplanned failover to a secondary region if there's an outage in the primary region. During the failover process, Domain Name System (DNS) entries for your storage account service endpoints are automatically updated such that the secondary region's endpoints become the new primary endpoints. Once the unplanned failover is complete, clients can begin writing to the new primary endpoints.
+
+Read-access geo-redundant storage (RA-GRS) and read-access geo-zone-redundant storage (RA-GZRS) also provide geo-redundant storage, but offer the added benefit of read access to the secondary endpoint. These options are ideal for business-critical applications that require high availability. If the primary endpoint experiences an outage, applications configured for read access to the secondary region can continue to operate. Microsoft recommends RA-GZRS for maximum availability and durability of your storage accounts.
+
+For more information about redundancy for Azure Storage, see [Azure Storage redundancy](storage-redundancy.md).
+
+## Plan for failover
+
+Azure Storage accounts support three types of failover:
+
+- [**Customer-managed planned failover (preview)**](#customer-managed-planned-failover-preview) - Customers can manage storage account failover to test their disaster recovery plan.
+- [**Customer-managed (unplanned) failover**](#customer-managed-unplanned-failover) - Customers can manage storage account failover if there's an unexpected service outage.
+- [**Microsoft-managed failover**](#microsoft-managed-failover) - Potentially initiated by Microsoft due to a severe disaster in the primary region. <sup>1,2</sup>
+
+<sup>1</sup> Microsoft-managed failover can't be initiated for individual storage accounts, subscriptions, or tenants. For more information, see [Microsoft-managed failover](#microsoft-managed-failover).<br/>
+<sup>2</sup> Use customer-managed failover options to develop, test, and implement your disaster recovery plans. **Do not** rely on Microsoft-managed failover, which would only be used in extreme circumstances.
+
+Each type of failover has a unique set of use cases, corresponding expectations for data loss, and support for accounts with a hierarchical namespace enabled (Azure Data Lake Storage Gen2). This table summarizes those aspects of each type of failover:
+
+| Type | Failover Scope | Use case | Expected data loss | Hierarchical Namespace (HNS) supported |
+|-|--|-|--|-|
+| Customer-managed planned failover (preview) | Storage account | The storage service endpoints for the primary and secondary regions are available, and you want to perform disaster recovery testing. <br></br> The storage service endpoints for the primary region are available, but another service is preventing your workloads from functioning properly.<br><br>To proactively prepare for large-scale disasters, such as a hurricane, that may impact a region. | [No](#anticipate-data-loss-and-inconsistencies) | [Yes <br> *(In preview)*](#hierarchical-namespace-hns) |
+| Customer-managed (unplanned) failover | Storage account | The storage service endpoints for the primary region become unavailable, but the secondary region is available. <br></br> You received an Azure Advisory in which Microsoft advises you to perform a failover operation of storage accounts potentially affected by an outage. | [Yes](#anticipate-data-loss-and-inconsistencies) | [Yes <br> *(In preview)*](#hierarchical-namespace-hns) |
+| Microsoft-managed | Entire region | The primary region becomes unavailable due to a significant disaster, but the secondary region is available. | [Yes](#anticipate-data-loss-and-inconsistencies) | [Yes](#hierarchical-namespace-hns) |
-With locally redundant storage (LRS), three copies of your storage account are automatically stored and replicated within a single datacenter. With zone-redundant storage (ZRS), a copy is stored and replicated in each of three separate availability zones within the same region. For more information about availability zones, see [Azure availability zones](../../availability-zones/az-overview.md).
+The following table compares a storage account's redundancy state after each type of failover:
-Recovery of a single copy of a storage account occurs automatically with LRS and ZRS.
+| Result of failover on... | Customer-managed planned failover (preview) | Customer-managed (unplanned) failover |
+|--|-|-|
+| ...the secondary region | The secondary region becomes the new primary | The secondary region becomes the new primary |
| ...the original primary region | The original primary region becomes the new secondary | The copy of the data in the original primary region is deleted |
+| ...the account redundancy configuration | The storage account is converted to GRS | The storage account is converted to LRS |
+| ...the geo-redundancy configuration | Geo-redundancy is retained | Geo-redundancy is lost |
-### Globally redundant storage and failover
+The following table summarizes the resulting redundancy configuration at every stage of the failover and failback process for each type of failover:
-With globally redundant storage (GRS, GZRS, and RA-GZRS), Azure copies your data asynchronously to a secondary geographic region at least hundreds of miles away. This allows you to recover your data if there's an outage in the primary region. A feature that distinguishes globally redundant storage from LRS and ZRS is the ability to fail over to the secondary region if there's an outage in the primary region. The process of failing over updates the DNS entries for your storage account service endpoints such that the endpoints for the secondary region become the new primary endpoints for your storage account. Once the failover is complete, clients can begin writing to the new primary endpoints.
+| Original <br> configuration | After <br> failover | After re-enabling <br> geo redundancy | After <br> failback | After re-enabling <br> geo redundancy |
+||||||
+| **Customer-managed planned failover** | | | | |
+| GRS | GRS | n/a <sup>1</sup> | GRS | n/a <sup>1</sup> |
+| GZRS | GRS | n/a <sup>1</sup> | GZRS | n/a <sup>1</sup> |
+| **Customer-managed (unplanned) failover** | | | | |
+| GRS | LRS | GRS | LRS | GRS |
+| GZRS | LRS | GRS | ZRS | GZRS |
-RA-GRS and RA-GZRS redundancy configurations provide geo-redundant storage with the added benefit of read access to the secondary endpoint if there is an outage in the primary region. If an outage occurs in the primary endpoint, applications configured for read access to the secondary region and designed for high availability can continue to read from the secondary endpoint. Microsoft recommends RA-GZRS for maximum availability and durability of your storage accounts.
+<sup>1</sup> Geo-redundancy is retained during a planned failover and doesn't need to be manually reconfigured.
-For more information about redundancy in Azure Storage, see [Azure Storage redundancy](storage-redundancy.md).
+### Customer-managed planned failover (preview)
-## Plan for storage account failover
+Planned failover can be used in multiple scenarios, including planned disaster recovery testing, proactive preparation for large-scale disasters, and recovery from outages unrelated to storage.
-Azure Storage accounts support two types of failover:
+During the planned failover process, the primary and secondary regions are swapped. The original primary region is demoted and becomes the new secondary region. At the same time, the original secondary region is promoted and becomes the new primary. After the failover completes, users can proceed to access data in the new primary region and administrators can validate their disaster recovery plan. The storage account must be available in both the primary and secondary regions before a planned failover can be initiated.
-- [**Customer-managed failover**](#customer-managed-failover) - Customers can manage storage account failover if there's an unexpected service outage.-- [**Microsoft-managed failover**](#microsoft-managed-failover) - Potentially initiated by Microsoft only in the case of a severe disaster in the primary region. <sup>1,2</sup>
+Data loss isn't expected during the planned failover and failback process as long as the primary and secondary regions are available throughout the entire process. For more detail, see the [Anticipating data loss and inconsistencies](#anticipate-data-loss-and-inconsistencies) section.
-<sup>1</sup>Microsoft-managed failover can't be initiated for individual storage accounts, subscriptions, or tenants. For more details see [Microsoft-managed failover](#microsoft-managed-failover). <br/>
-<sup>2</sup> Your disaster recovery plan should be based on customer-managed failover. **Do not** rely on Microsoft-managed failover, which would only be used in extreme circumstances. <br/>
+To understand the effect of this type of failover on your users and applications, it's helpful to know what happens during every step of the planned failover and failback processes. For details about how this process works, see [How customer-managed (planned) failover works](storage-failover-customer-managed-planned.md).
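As a sketch of what initiating such a failover might look like with Azure PowerShell: `Invoke-AzStorageAccountFailover` is the cmdlet for customer-managed failover, and the `-FailoverType Planned` parameter selects the planned variant. Because planned failover is in preview, treat the parameter's exact shape as an assumption that may require a recent preview version of the Az.Storage module:

```powershell
# Sketch only; names are placeholders. Requires an Az.Storage version with
# planned failover (preview) support.
Invoke-AzStorageAccountFailover `
    -ResourceGroupName "<resource-group>" `
    -Name "<storage-account>" `
    -FailoverType Planned
```

Omitting `-FailoverType` performs the unplanned failover described in the next section.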
-Each type of failover has a unique set of use cases, corresponding expectations for data loss, and support for accounts with a hierarchical namespace enabled (Azure Data Lake Storage Gen2). This table summarizes those aspects of each type of failover :
-| Type | Failover Scope | Use case | Expected data loss | HNS supported |
-||--|-|||
-| Customer-managed | Storage account | The storage service endpoints for the primary region become unavailable, but the secondary region is available. <br></br> You received an Azure Advisory in which Microsoft advises you to perform a failover operation of storage accounts potentially affected by an outage. | [Yes](#anticipate-data-loss-and-inconsistencies) | [Yes ](#azure-data-lake-storage-gen2)*[(In preview)](#azure-data-lake-storage-gen2)* |
-| Microsoft-managed | Entire region or scale unit | The primary region becomes completely unavailable due to a significant disaster, but the secondary region is available. | [Yes](#anticipate-data-loss-and-inconsistencies) | [Yes](#azure-data-lake-storage-gen2) |
-### Customer-managed failover
+### Customer-managed (unplanned) failover
-If the data endpoints for the storage services in your storage account become unavailable in the primary region, you can fail over to the secondary region. After the failover is complete, the secondary region becomes the new primary and users can proceed to access data in the new primary region.
+If the data endpoints for the storage services in your storage account become unavailable in the primary region, you can initiate an unplanned failover to the secondary region. After the failover is complete, the secondary region becomes the new primary and users can proceed to access data there.
-To fully understand the impact that customer-managed account failover would have on your users and applications, it is helpful to know what happens during every step of the failover and failback process. For details about how the process works, see [How customer-managed storage account failover works](storage-failover-customer-managed-unplanned.md).
+To understand the effect of this type of failover on your users and applications, it's helpful to know what happens during every step of the unplanned failover and failback process. For details about how the process works, see [How customer-managed (unplanned) failover works](storage-failover-customer-managed-unplanned.md).
### Microsoft-managed failover
-In extreme circumstances where the original primary region is deemed unrecoverable within a reasonable amount of time due to a major disaster, Microsoft **may** initiate a regional failover. In this case, no action on your part is required. Until the Microsoft-managed failover has completed, you won't have write access to your storage account. Your applications can read from the secondary region if your storage account is configured for RA-GRS or RA-GZRS.
+Microsoft may initiate a regional failover in extreme circumstances, such as a catastrophic disaster that impacts an entire geo region. During these events, no action on your part is required. If your storage account is configured for RA-GRS or RA-GZRS, your applications can read from the secondary region during a Microsoft-managed failover. However, you don't have write access to your storage account until the failover process is complete.
> [!IMPORTANT]
-> Your disaster recovery plan should be based on customer-managed failover. **Do not** rely on Microsoft-managed failover, which might only be used in extreme circumstances.
-> A Microsoft-managed failover would be initiated for an entire physical unit, such as a region or scale unit. It can't be initiated for individual storage accounts, subscriptions, or tenants. For the ability to selectively failover your individual storage accounts, use [customer-managed account failover](#customer-managed-failover).
+> Use customer-managed failover options to develop, test, and implement your disaster recovery plans. **Do not** rely on Microsoft-managed failover, which might only be used in extreme circumstances.
+> A Microsoft-managed failover would be initiated for an entire physical unit, such as a region or a datacenter. It can't be initiated for individual storage accounts, subscriptions, or tenants. If you need the ability to selectively failover your individual storage accounts, use [customer-managed planned failover](#customer-managed-planned-failover-preview).
+ ### Anticipate data loss and inconsistencies > [!CAUTION]
-> Storage account failover usually involves some data loss, and potentially file and data inconsistencies. In your disaster recovery plan, it's important to consider the impact that an account failover would have on your data before initiating one.
+> Customer-managed unplanned failover usually involves some amount of data loss, and can also potentially introduce file and data inconsistencies. In your disaster recovery plan, it's important to consider the impact that an account failover would have on your data before initiating one.
-Because data is written asynchronously from the primary region to the secondary region, there's always a delay before a write to the primary region is copied to the secondary. If the primary region becomes unavailable, the most recent writes may not yet have been copied to the secondary.
+Because data is written asynchronously from the primary region to the secondary region, there's always a delay before a write to the primary region is copied to the secondary. If the primary region becomes unavailable, it's possible that the most recent writes might not yet be copied to the secondary.
-When a failover occurs, all data in the primary region is lost as the secondary region becomes the new primary. All data already copied to the secondary is maintained when the failover happens. However, any data written to the primary that hasn't also been copied to the secondary region is lost permanently.
+When an unplanned failover occurs, all data in the primary region is lost as the secondary region becomes the new primary. All data already copied to the secondary region is maintained when the failover happens. However, any data written to the primary that doesn't yet exist within the secondary region is lost permanently.
The new primary region is configured to be locally redundant (LRS) after the failover.
You also might experience file or data inconsistencies if your storage accounts
#### Last sync time
-The **Last Sync Time** property indicates the most recent time that data from the primary region is guaranteed to have been written to the secondary region. For accounts that have a hierarchical namespace, the same **Last Sync Time** property also applies to the metadata managed by the hierarchical namespace, including ACLs. All data and metadata written prior to the last sync time is available on the secondary, while data and metadata written after the last sync time may not have been written to the secondary, and may be lost. Use this property if there's an outage to estimate the amount of data loss you may incur by initiating an account failover.
+The **Last Sync Time** property indicates the most recent time at which data from the primary region was also written to the secondary region. For accounts that have a hierarchical namespace, the same **Last Sync Time** property also applies to the metadata managed by the hierarchical namespace, including access control lists (ACLs). All data and metadata written before the last sync time is available on the secondary. By contrast, data and metadata written after the last sync time might not yet be copied to the secondary and could potentially be lost. During an outage, use this property to estimate the amount of data loss you might incur when initiating an account failover.
-As a best practice, design your application so that you can use the last sync time to evaluate expected data loss. For example, if you're logging all write operations, then you can compare the time of your last write operations to the last sync time to determine which writes haven't been synced to the secondary.
+As a best practice, design your application so that you can use **Last Sync Time** to evaluate expected data loss. For example, logging all write operations allows you to compare the times of your last write operation to the last sync time. This method enables you to determine which writes aren't yet synced to the secondary and are in danger of being lost.
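A hedged sketch of that comparison follows; `$writeLog` is a hypothetical collection maintained by your application, with one timestamped entry per write operation:

```powershell
# Retrieve the account's Last Sync Time (names are placeholders).
$stats = Get-AzStorageAccount `
    -ResourceGroupName "<resource-group>" `
    -Name "<storage-account>" `
    -IncludeGeoReplicationStats
$lastSync = $stats.GeoReplicationStats.LastSyncTime

# $writeLog is hypothetical: objects with BlobName and WrittenAtUtc properties.
# Writes newer than the last sync time might not exist in the secondary region.
$writeLog | Where-Object { $_.WrittenAtUtc -gt $lastSync }
```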
For more information about checking the **Last Sync Time** property, see [Check the Last Sync Time property for a storage account](last-sync-time-get.md). #### File consistency for Azure Data Lake Storage Gen2
-Replication for storage accounts with a [hierarchical namespace enabled (Azure Data Lake Storage Gen2)](../blobs/data-lake-storage-introduction.md) occurs at the file level. This means if an outage in the primary region occurs, it is possible that only some of the files in a container or directory might have successfully replicated to the secondary region. Consistency for all files in a container or directory after a storage account failover is not guaranteed.
+Replication for storage accounts with a [hierarchical namespace enabled (Azure Data Lake Storage Gen2)](../blobs/data-lake-storage-introduction.md) occurs at the file level. Because replication occurs at this level, an outage in the primary region might prevent some of the files within a container or directory from successfully replicating to the secondary region. Consistency for all files within a container or directory after a storage account failover isn't guaranteed.
#### Change feed and blob data inconsistencies
-Storage account failover of geo-redundant storage accounts with [change feed](../blobs/storage-blob-change-feed.md) enabled may result in inconsistencies between the change feed logs and the blob data and/or metadata. Such inconsistencies can result from the asynchronous nature of both updates to the change logs and the replication of blob data from the primary to the secondary region. The only situation in which inconsistencies would not be expected is when all of the current log records have been successfully flushed to the log files, and all of the storage data has been successfully replicated from the primary to the secondary region.
+Customer-managed (unplanned) failover of storage accounts with [change feed](../blobs/storage-blob-change-feed.md) enabled could result in inconsistencies between the change feed logs and the blob data and/or metadata. Such inconsistencies can result from the asynchronous nature of change log updates and data replication between the primary and secondary regions. You can avoid inconsistencies by taking the following precautions:
-For information about how change feed works see [How the change feed works](../blobs/storage-blob-change-feed.md#how-the-change-feed-works).
+- Ensure that all log records are flushed to the log files.
+- Ensure that all storage data is replicated from the primary to the secondary region.
-Keep in mind that other storage account features require the change feed to be enabled such as [operational backup of Azure Blob Storage](../../backup/blob-backup-support-matrix.md#limitations), [Object replication](../blobs/object-replication-overview.md) and [Point-in-time restore for block blobs](../blobs/point-in-time-restore-overview.md).
+For more information about change feed, see [How the change feed works](../blobs/storage-blob-change-feed.md#how-the-change-feed-works).
-#### Point-in-time restore inconsistencies
+Keep in mind that other storage account features also require the change feed to be enabled. These features include [operational backup of Azure Blob Storage](../../backup/blob-backup-support-matrix.md#limitations), [Object replication](../blobs/object-replication-overview.md), and [Point-in-time restore for block blobs](../blobs/point-in-time-restore-overview.md).
-Customer-managed failover is supported for general-purpose v2 standard tier storage accounts that include block blobs. However, performing a customer-managed failover on a storage account resets the earliest possible restore point for the account. Data for [Point-in-time restore for block blobs](../blobs/point-in-time-restore-overview.md) is only consistent up to the failover completion time. As a result, you can only restore block blobs to a point in time no earlier than the failover completion time. You can check the failover completion time in the redundancy tab of your storage account in the Azure Portal.
+#### Point-in-time restore inconsistencies
-For example, suppose you have set the retention period to 30 days. If more than 30 days have elapsed since the failover, then you can restore to any point within that 30 days. However, if fewer than 30 days have elapsed since the failover, then you can't restore to a point prior to the failover, regardless of the retention period. For example, if it's been 10 days since the failover, then the earliest possible restore point is 10 days in the past, not 30 days in the past.
+Customer-managed failover is supported for general-purpose v2 standard tier storage accounts that include block blobs. However, performing a customer-managed failover on a storage account resets the earliest possible restore point for the account. Data for [Point-in-time restore for block blobs](../blobs/point-in-time-restore-overview.md) is only consistent up to the failover completion time. As a result, you can only restore block blobs to a point in time no earlier than the failover completion time. You can check the failover completion time in the redundancy tab of your storage account in the Azure portal.
### The time and cost of failing over
-The time it takes for failover to complete after being initiated can vary, although it typically takes less than one hour.
+The time it takes for a customer-managed failover to complete after being initiated can vary, although it typically takes less than one hour.
+
+A storage account retains its geo-redundancy through a customer-managed planned failover and subsequent failback, but loses it after a customer-managed unplanned failover.
-A customer-managed failover loses its geo-redundancy after a failover (and failback). Your storage account is automatically converted to locally redundant storage (LRS) in the new primary region during a failover, and the storage account in the original primary region is deleted.
+Initiating a customer-managed unplanned failover automatically converts your storage account to locally redundant storage (LRS) within a new primary region, and deletes the storage account in the original primary region.
-You can re-enable geo-redundant storage (GRS) or read-access geo-redundant storage (RA-GRS) for the account, but note that converting from LRS to GRS or RA-GRS incurs an additional cost. The cost is due to the network egress charges to re-replicate the data to the new secondary region. Also, all archived blobs need to be rehydrated to an online tier before the account can be configured for geo-redundancy, which will incur a cost. For more information about pricing, see:
+You can re-enable geo-redundant storage (GRS) or read-access geo-redundant storage (RA-GRS) for the account, but re-replicating data to the new secondary region incurs a charge. Additionally, any archived blobs need to be rehydrated to an online tier before the account can be reconfigured for geo-redundancy. This rehydration also incurs an extra charge. For more information about pricing, see:
- [Bandwidth Pricing Details](https://azure.microsoft.com/pricing/details/bandwidth/) - [Azure Storage pricing](https://azure.microsoft.com/pricing/details/storage/blobs/)
-After you re-enable GRS for your storage account, Microsoft begins replicating the data in your account to the new secondary region. Replication time depends on many factors, which include:
+After you re-enable GRS for your storage account, Microsoft begins replicating the data in your account to the new secondary region. The amount of time it takes for replication to complete depends on several factors. These factors include:
- The number and size of the objects in the storage account. Replicating many small objects can take longer than replicating fewer and larger objects.
- The available resources for background replication, such as CPU, memory, disk, and WAN capacity. Live traffic takes priority over geo replication.
-- If your storage account contains blobs, the number of snapshots per blob.
-- If your storage account contains tables, the [data partitioning strategy](/rest/api/storageservices/designing-a-scalable-partitioning-strategy-for-azure-table-storage). The replication process can't scale beyond the number of partition keys that you use.
+- The number of snapshots per blob, if applicable.
+- The [data partitioning strategy](/rest/api/storageservices/designing-a-scalable-partitioning-strategy-for-azure-table-storage), if your storage account contains tables. The replication process can't scale beyond the number of partition keys that you use.
### Supported storage account types All geo-redundant offerings support Microsoft-managed failover. In addition, some account types support customer-managed account failover, as shown in the following table:
-| Type of failover | GRS/RA-GRS | GZRS/RA-GZRS |
-||||
-| **Customer-managed failover** | General-purpose v2 accounts</br> General-purpose v1 accounts</br> Legacy Blob Storage accounts | General-purpose v2 accounts |
-| **Microsoft-managed failover** | All account types | General-purpose v2 accounts |
+| Type of failover | GRS/RA-GRS | GZRS/RA-GZRS |
+|-|||
+| **Customer-managed planned failover (preview)** | General-purpose v2 accounts</br> General-purpose v1 accounts</br> Legacy Blob Storage accounts | General-purpose v2 accounts |
+| **Customer-managed (unplanned) failover** | General-purpose v2 accounts</br> General-purpose v1 accounts</br> Legacy Blob Storage accounts | General-purpose v2 accounts |
+| **Microsoft-managed failover** | All account types | General-purpose v2 accounts |
#### Classic storage accounts > [!IMPORTANT]
-> Customer-managed account failover is only supported for storage accounts deployed using the Azure Resource Manager (ARM) deployment model. The Azure Service Manager (ASM) deployment model, also known as *classic*, isn't supported. To make classic storage accounts eligible for customer-managed account failover, they must first be [migrated to the ARM model](classic-account-migration-overview.md). Your storage account must be accessible to perform the upgrade, so the primary region can't currently be in a failed state.
+> Customer-managed failover is only supported for storage accounts deployed using the Azure Resource Manager (ARM) deployment model. The Azure Service Manager (ASM) deployment model, also known as the *classic* model, isn't supported. To make classic storage accounts eligible for customer-managed account failover, they must first be [migrated to the ARM model](classic-account-migration-overview.md). Your storage account must be accessible to perform the upgrade, so the primary region can't currently be in a failed state.
>
-> if there's a disaster that affects the primary region, Microsoft will manage the failover for classic storage accounts. For more information, see [Microsoft-managed failover](#microsoft-managed-failover).
+> During a disaster that affects the primary region, Microsoft will manage the failover for classic storage accounts. For more information, see [Microsoft-managed failover](#microsoft-managed-failover).
-#### Azure Data Lake Storage Gen2
+#### Hierarchical namespace (HNS)
-> [!IMPORTANT]
-> Customer-managed account failover for accounts that have a hierarchical namespace (Azure Data Lake Storage Gen2) is currently in PREVIEW and only supported in the following regions:
->
-> - (Asia Pacific) Central India
-> - (Asia Pacific) South East Asia
-> - (Europe) North Europe
-> - (Europe) Switzerland North
-> - (Europe) Switzerland West
-> - (Europe) West Europe
-> - (North America) Canada Central
-> - (North America) East US 2
-> - (North America) South Central US
->
-> To opt in to the preview, see [Set up preview features in Azure subscription](../../azure-resource-manager/management/preview-features.md) and specify `AllowHNSAccountFailover` as the feature name.
->
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
->
-> if there's a significant disaster that affects the primary region, Microsoft will manage the failover for accounts with a hierarchical namespace. For more information, see [Microsoft-managed failover](#microsoft-managed-failover).
### Unsupported features and services
-The following features and services aren't supported for account failover:
+The following features and services aren't supported for customer-managed failover:
-- Azure File Sync doesn't support customer initiated storage account failover. Storage accounts containing Azure file shares being used as cloud endpoints in Azure File Sync shouldn't be failed over. Doing so will cause sync to stop working and may also cause unexpected data loss in the case of newly tiered files. For more information, see [Best practices for disaster recovery with Azure File Sync](../file-sync/file-sync-disaster-recovery-best-practices.md#geo-redundancy) for details.
+- Azure File Sync doesn't support customer-managed account failover. Storage accounts used as cloud endpoints for Azure File Sync shouldn't be failed over. Failover disrupts file sync and might cause the unexpected data loss of newly tiered files. For more information, see [Best practices for disaster recovery with Azure File Sync](../file-sync/file-sync-disaster-recovery-best-practices.md#geo-redundancy).
- A storage account containing premium block blobs can't be failed over. Storage accounts that support premium block blobs don't currently support geo-redundancy.
- Customer-managed failover isn't supported for either the source or the destination account in an [object replication policy](../blobs/object-replication-overview.md).
-- To failover an account with SSH File Transfer Protocol (SFTP) enabled, you must first [disable SFTP for the account](../blobs/secure-file-transfer-protocol-support-how-to.md#disable-sftp-support). If you want to resume using SFTP after the failover is complete, simply [re-enable it](../blobs/secure-file-transfer-protocol-support-how-to.md#enable-sftp-support).
- Network File System (NFS) 3.0 (NFSv3) isn't supported for storage account failover. You can't create a storage account configured for global-redundancy with NFSv3 enabled.
-### Failover is not for account migration
+The following table can be used to reference feature support.
+
+| | Planned failover | Unplanned failover |
+|-|||
+| **ADLS Gen2** | Supported (preview) | Supported (preview) |
+| **Change Feed** | Unsupported | Supported |
+| **Object Replication** | Unsupported | Unsupported |
+| **SFTP** | Supported (preview) | Supported (preview) |
+| **NFSv3** | GRS is unsupported | GRS is unsupported |
+| **Storage Actions** | Unsupported | Unsupported |
+| **Point-in-time restore (PITR)** | Unsupported | Supported |
+
+### Failover isn't for account migration
-Storage account failover shouldn't be used as part of your data migration strategy. Failover is a temporary solution to a service outage. For information about how to migrate your storage accounts, see [Azure Storage migration overview](storage-migration-overview.md).
+Storage account failover is a temporary solution that can be used to test your disaster recovery plan or to recover from a service outage. It shouldn't be used as part of your data migration strategy. For information about how to migrate your storage accounts, see [Azure Storage migration overview](storage-migration-overview.md).
### Storage accounts containing archived blobs
-Storage accounts containing archived blobs support account failover. However, after a [customer-managed failover](#customer-managed-failover) is complete, all archived blobs need to be rehydrated to an online tier before the account can be configured for geo-redundancy.
+Storage accounts containing archived blobs support account failover. However, after a [customer-managed failover](#customer-managed-unplanned-failover) is complete, all archived blobs must be rehydrated to an online tier before the account can be configured for geo-redundancy.
### Storage resource provider
Because the Azure Storage resource provider does not fail over, the [Location](/
### Azure virtual machines
-Azure virtual machines (VMs) don't fail over as part of an account failover. If the primary region becomes unavailable, and you fail over to the secondary region, then you will need to recreate any VMs after the failover. Also, there's a potential data loss associated with the account failover. Microsoft recommends following the [high availability](../../virtual-machines/availability.md) and [disaster recovery](../../virtual-machines/backup-recovery.md) guidance specific to virtual machines in Azure.
-
-Keep in mind that any data stored in a temporary disk is lost when the VM is shut down.
+Azure virtual machines (VMs) don't fail over as part of a storage account failover. If you fail over your storage account in response to an outage, you need to recreate any affected VMs after the failover completes. Account failover can also result in the loss of any data stored in a temporary disk when the VM is shut down. Microsoft recommends following the [high availability](../../virtual-machines/availability.md) and [disaster recovery](../../virtual-machines/backup-recovery.md) guidance specific to virtual machines in Azure.
### Azure unmanaged disks
-As a best practice, Microsoft recommends converting unmanaged disks to managed disks. However, if you need to fail over an account that contains unmanaged disks attached to Azure VMs, you will need to shut down the VM before initiating the failover.
+Unmanaged disks are stored as page blobs in Azure Storage. When a VM is running in Azure, any unmanaged disks attached to the VM are leased. An account failover can't proceed when there's a lease on a blob. Before a failover can be initiated on an account containing unmanaged disks attached to Azure VMs, the VMs must be shut down. For this reason, Microsoft's recommended best practices include converting any unmanaged disks to managed disks.
-Unmanaged disks are stored as page blobs in Azure Storage. When a VM is running in Azure, any unmanaged disks attached to the VM are leased. An account failover can't proceed when there's a lease on a blob. To perform the failover, follow these steps:
+To perform a failover on an account containing unmanaged disks, follow these steps; a short PowerShell sketch of the first three steps appears after the list:
-1. Before you begin, note the names of any unmanaged disks, their logical unit numbers (LUN), and the VM to which they are attached. Doing so will make it easier to reattach the disks after the failover.
-2. Shut down the VM.
-3. Delete the VM, but retain the VHD files for the unmanaged disks. Note the time at which you deleted the VM.
-4. Wait until the **Last Sync Time** has updated, and is later than the time at which you deleted the VM. This step is important, because if the secondary endpoint hasn't been fully updated with the VHD files when the failover occurs, then the VM may not function properly in the new primary region.
-5. Initiate the account failover.
-6. Wait until the account failover is complete and the secondary region has become the new primary region.
-7. Create a VM in the new primary region and reattach the VHDs.
-8. Start the new VM.
+1. Before you begin, note the names of any unmanaged disks, their logical unit numbers (LUN), and the VM to which they're attached. Doing so will make it easier to reattach the disks after the failover.
+1. Shut down the VM.
+1. Delete the VM, but retain the virtual hard disk (VHD) files for the unmanaged disks. Note the time at which you deleted the VM.
+1. Wait until the **Last Sync Time** updates, and ensure that it's later than the time at which you deleted the VM. This step ensures that the secondary endpoint is fully updated with the VHD files when the failover occurs, and that the VM functions properly in the new primary region.
+1. Initiate the account failover.
+1. Wait until the account failover is complete and the secondary region becomes the new primary region.
+1. Create a VM in the new primary region and reattach the VHDs.
+1. Start the new VM.
Keep in mind that any data stored in a temporary disk is lost when the VM is shut down.
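A minimal sketch of steps 1 through 3 with Azure PowerShell follows, using hypothetical resource names:

```powershell
# Hypothetical names. Record each unmanaged disk's name, LUN, and VHD URI
# so the disks can be reattached after the failover.
$vm = Get-AzVM -ResourceGroupName "vm-rg" -Name "myvm"
$vm.StorageProfile.DataDisks |
    Select-Object Name, Lun, @{ n = 'VhdUri'; e = { $_.Vhd.Uri } }

# Shut down and delete the VM. Deleting the VM object doesn't delete the
# unmanaged VHD blobs, which remain in the storage account.
Stop-AzVM -ResourceGroupName "vm-rg" -Name "myvm" -Force
Remove-AzVM -ResourceGroupName "vm-rg" -Name "myvm" -Force
```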
-### Copying data as an alternative to failover
+### Copying data as a failover alternative
-If your storage account is configured for read access to the secondary region, then you can design your application to read from the secondary endpoint. If you prefer not to fail over if there's an outage in the primary region, you can use tools such as [AzCopy](./storage-use-azcopy-v10.md) or [Azure PowerShell](/powershell/module/az.storage/) to copy data from your storage account in the secondary region to another storage account in an unaffected region. You can then point your applications to that storage account for both read and write availability.
+As previously discussed, you can maintain high availability by configuring applications to use a storage account configured for read access to a secondary region. However, if you prefer not to fail over during an outage within the primary region, you can manually copy your data as an alternative. Tools such as [AzCopy](./storage-use-azcopy-v10.md) and [Azure PowerShell](/powershell/module/az.storage/) enable you to copy data from your storage account in the affected region to another storage account in an unaffected region. After the copy operation is complete, you can reconfigure your applications to use the storage account in the unaffected region for both read and write availability.
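As a sketch, an AzCopy invocation (run here from PowerShell) for copying a single container might look like the following; the account names, container names, and SAS tokens are all placeholders:

```powershell
# Placeholders throughout. Requires AzCopy v10 on the PATH.
# Copies a container from the affected account to one in an unaffected region.
azcopy copy `
    "https://<source-account>.blob.core.windows.net/<container>?<sas-token>" `
    "https://<destination-account>.blob.core.windows.net/<container>?<sas-token>" `
    --recursive
```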
## Design for high availability
-It's important to design your application for high availability from the start. Refer to these Azure resources for guidance in designing your application and planning for disaster recovery:
+It's important to design your application for high availability from the start. Refer to these Azure resources for guidance when designing your application and planning for disaster recovery:
- [Designing resilient applications for Azure](/azure/architecture/framework/resiliency/app-design): An overview of the key concepts for architecting highly available applications in Azure.
- [Resiliency checklist](/azure/architecture/checklist/resiliency-per-service): A checklist for verifying that your application implements the best design practices for high availability.
- [Use geo-redundancy to design highly available applications](geo-redundant-design.md): Design guidance for building applications to take advantage of geo-redundant storage.
- [Tutorial: Build a highly available application with Blob storage](../blobs/storage-create-geo-redundant-storage.md): A tutorial that shows how to build a highly available application that automatically switches between endpoints as failures and recoveries are simulated.
-Keep in mind these best practices for maintaining high availability for your Azure Storage data:
+Refer to these best practices to maintain high availability for your Azure Storage data:
-- **Disks:** Use [Azure Backup](https://azure.microsoft.com/services/backup/) to back up the VM disks used by your Azure virtual machines. Also consider using [Azure Site Recovery](https://azure.microsoft.com/services/site-recovery/) to protect your VMs if there's a regional disaster.
+- **Disks:** Use [Azure Backup](https://azure.microsoft.com/services/backup/) to back up the VM disks used by your Azure virtual machines. Also consider using [Azure Site Recovery](https://azure.microsoft.com/services/site-recovery/) to protect your VMs from a regional disaster.
- **Block blobs:** Turn on [soft delete](../blobs/soft-delete-blob-overview.md) to protect against object-level deletions and overwrites, or copy block blobs to another storage account in a different region using [AzCopy](./storage-use-azcopy-v10.md), [Azure PowerShell](/powershell/module/az.storage/), or the [Azure Data Movement library](storage-use-data-movement-library.md). - **Files:** Use [Azure Backup](../../backup/azure-file-share-backup-overview.md) to back up your file shares. Also enable [soft delete](../files/storage-files-prevent-file-share-deletion.md) to protect against accidental file share deletions. For geo-redundancy when GRS isn't available, use [AzCopy](./storage-use-azcopy-v10.md) or [Azure PowerShell](/powershell/module/az.storage/) to copy your files to another storage account in a different region.-- **Tables:** use [AzCopy](./storage-use-azcopy-v10.md) to export table data to another storage account in a different region.
+- **Tables:** Use [AzCopy](./storage-use-azcopy-v10.md) to export table data to another storage account in a different region.
## Track outages
-Customers may subscribe to the [Azure Service Health Dashboard](https://azure.microsoft.com/status/) to track the health and status of Azure Storage and other Azure services.
+Customers can subscribe to the [Azure Service Health Dashboard](https://azure.microsoft.com/status/) to track the health and status of Azure Storage and other Azure services.
Microsoft also recommends that you design your application to prepare for the possibility of write failures. Your application should expose write failures in a way that alerts you to the possibility of an outage in the primary region.
Microsoft also recommends that you design your application to prepare for the po
- [Use geo-redundancy to design highly available applications](geo-redundant-design.md)
- [Tutorial: Build a highly available application with Blob storage](../blobs/storage-create-geo-redundant-storage.md)
- [Azure Storage redundancy](storage-redundancy.md)
-- [How customer-managed storage account failover works](storage-failover-customer-managed-unplanned.md)
-
+- [How customer-managed planned failover (preview) works](storage-failover-customer-managed-planned.md)
+- [How customer-managed (unplanned) failover works](storage-failover-customer-managed-unplanned.md)
storage Storage Failover Customer Managed Planned https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-failover-customer-managed-planned.md
+
+ Title: How customer-managed planned failover works
+
+description: Azure Storage supports account failover of geo-redundant storage accounts for disaster recovery testing and planning. Learn what happens to your storage account and storage services during a customer-managed planned failover (preview) to the secondary region to perform disaster recovery testing and planning.
+++++ Last updated : 07/23/2024+++++
+<!--
+Initial: 87 (1697/22)
+Current: 98 (1470/0)
+-->
+
+# How customer-managed planned failover (preview) works
+
+Customer-managed planned failover can be useful in scenarios such as disaster recovery planning and testing, proactive remediation of anticipated large-scale disasters, and outages unrelated to storage.
+
+During the planned failover process, your storage account's primary and secondary regions are swapped. The original primary region is demoted and becomes the new secondary while the original secondary region is promoted and becomes the new primary. The storage account must be available in both the primary and secondary regions before a planned failover can be initiated.
+
+This article describes what happens during a customer-managed planned failover and failback at every stage of the process. To understand how a failover due to an unexpected storage endpoint outage works, see [How customer-managed (unplanned) failover works](storage-failover-customer-managed-unplanned.md).
+++
+## Redundancy management during planned failover and failback
+
+> [!TIP]
+> To understand the varying redundancy states during the customer-managed failover and failback process, see [Azure Storage redundancy](storage-redundancy.md) for definitions of each configuration.
+
+During the planned failover process, the primary region's storage service endpoints become read-only while remaining updates finish replicating to the secondary region. Next, the Domain Name System (DNS) entries for all storage service endpoints are switched. Your storage account's secondary endpoints become the new primary endpoints, and the original primary endpoints become the new secondary. Data replication within each region remains unchanged even though the primary and secondary regions are switched.
+
+The planned failback process is essentially the same as the planned failover process, but with one exception. During planned failback, Azure stores the original redundancy configuration of your storage account and restores it to its original state upon failback. For example, if your storage account was originally configured as GZRS, the storage account will be GZRS after failback.
+
+> [!NOTE]
+> Unlike [customer-managed (unplanned) failover](storage-failover-customer-managed-unplanned.md), during planned failover, replication from the primary to secondary region must be complete before the DNS entries for the endpoints are changed to the new secondary. Because of this, data loss is not expected during planned failover or failback as long as both the primary and secondary regions are available throughout the process.
+
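+To confirm that the original configuration was restored after a failback, you can check the account's SKU. The following is a minimal sketch, assuming the Az.Storage module and placeholder names:
+
+```powershell
+# Sketch: verify the redundancy configuration after failback.
+# For an account that was originally GZRS, this should report Standard_GZRS.
+$account = Get-AzStorageAccount `
+    -ResourceGroupName "<resource-group-name>" `
+    -Name "<storage-account-name>"
+$account.Sku.Name
+```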
+## How to initiate a failover
+
+To learn how to initiate a failover, see [Initiate an account failover](storage-initiate-account-failover.md).
+
+## The planned failover and failback process
+
+The following diagrams show what happens during a customer-managed planned failover and failback of a storage account.
+
+## [GRS/RA-GRS](#tab/grs-ra-grs)
+
+Under normal circumstances, a client writes data to a storage account in the primary region via storage service endpoints (1). The data is then copied asynchronously from the primary region to the secondary region (2). The following image shows the normal state of a storage account configured as GRS:
++
+### The planned failover process (GRS/RA-GRS)
+
+Begin disaster recovery testing by initiating a failover of your storage account to the secondary region. The following steps describe the failover process, which the subsequent image illustrates:
+
+1. The original primary region becomes read-only.
+1. Replication of all data from the primary region to the secondary region completes.
+1. DNS entries for storage service endpoints in the secondary region are promoted and become the new primary endpoints for your storage account.
+
+The failover typically takes about an hour.
++
+After the failover is complete, the original primary region becomes the new secondary (1), and the original secondary region becomes the new primary (2). The URIs for the storage service endpoints for blobs, tables, queues, and files remain the same, but their DNS entries are changed to point to the new primary region (3). Users can resume writing data to the storage account in the new primary region, and the data is then copied asynchronously to the new secondary (4) as shown in the following image:
++
+While in the failover state, perform your disaster recovery testing.
+
+### The planned failback process (GRS/RA-GRS)
+
+After testing is complete, perform another failover to fail back to the original primary region. The following steps, illustrated in the subsequent image, describe the failback process:
+
+1. The original primary region becomes read-only.
+1. All data finishes replicating from the current primary region to the current secondary region.
+1. The DNS entries for the storage service endpoints are changed to point back to the region that was the primary before the initial failover was performed.
+
+The failback typically takes about an hour.
++
+After the failback is complete, the storage account is restored to its original redundancy configuration. Users can resume writing data to the storage account in the original primary region (1) while replication to the original secondary (2) continues as before the failover:
++
+## [GZRS/RA-GZRS](#tab/gzrs-ra-gzrs)
+
+Under normal circumstances, a client writes data to a storage account in the primary region via storage service endpoints (1). The data is then copied asynchronously from the primary region to the secondary region (2). The following image shows the normal state of a storage account configured as GZRS:
++
+### The planned failover process (GZRS/RA-GZRS)
+
+Begin disaster recovery testing by initiating a failover of your storage account to the secondary region. The following steps describe the failover process, which the subsequent image illustrates:
+
+1. The current primary region becomes read-only.
+1. All data finishes replicating from the primary region to the secondary region.
+1. Storage service endpoint DNS entries are switched. Your storage account's endpoints in the secondary region become your new primary endpoints.
+
+The failover typically takes about an hour.
++
+After the failover is complete, the original primary region becomes the new secondary (1) and the original secondary region becomes the new primary (2). The URIs for the storage service endpoints for blobs, tables, queues, and files remain the same, but point to the new primary region (3). Users can resume writing data to the storage account in the new primary region, and the data is then copied asynchronously to the new secondary (4), as shown in the following image:
++
+While in the failover state, perform your disaster recovery testing.
+
+### The planned failback process (GZRS/RA-GZRS)
+
+When testing is complete, perform another failover to fail back to the original primary region. The following steps, illustrated in the subsequent image, describe the failback process:
+
+1. The current primary region becomes read-only.
+1. All data finishes replicating from the current primary region to the current secondary region.
+1. The DNS entries for the storage service endpoints are changed to point back to the region that was the primary before the initial failover was performed.
+
+The failback typically takes about an hour.
++
+After the failback is complete, the storage account is restored to its original redundancy configuration. Users can resume writing data to the storage account in the original primary region (1), while replication to the original secondary (2) continues as before the failover:
++++
+## See also
+
+- [Disaster recovery and account failover](storage-disaster-recovery-guidance.md)
+- [Initiate an account failover](storage-initiate-account-failover.md)
+- [How customer-managed (unplanned) failover works](storage-failover-customer-managed-unplanned.md)
storage Storage Failover Customer Managed Unplanned https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-failover-customer-managed-unplanned.md
Title: How Azure Storage account customer-managed failover works
+ Title: How Azure Storage account customer-managed (unplanned) failover works
-description: Azure Storage supports account failover for geo-redundant storage accounts to recover from a service endpoint outage. Learn what happens to your storage account and storage services during a customer-managed failover to the secondary region if the primary endpoint becomes unavailable.
+description: Azure Storage supports failover for geo-redundant storage accounts to recover from a service endpoint outage. Learn what happens to your storage account and storage services during a customer-managed (unplanned) failover to the secondary region if the primary endpoint becomes unavailable.
Previously updated : 09/22/2023 Last updated : 07/23/2024
-# How customer-managed storage account failover works
+<!--
+Initial: 84 (2544/39)
+Current: 100 (2548/3)
+-->
-Customer-managed failover of Azure Storage accounts enables you to fail over your entire geo-redundant storage account to the secondary region if the storage service endpoints for the primary region become unavailable. During failover, the original secondary region becomes the new primary and all storage service endpoints for blobs, tables, queues and files are redirected to the new primary region. After the storage service endpoint outage has been resolved, you can perform another failover operation to *fail back* to the original primary region.
+# How customer-managed (unplanned) failover works
-This article describes what happens during a customer-managed storage account failover and failback at every stage of the process.
+Customer-managed (unplanned) failover enables you to fail over your entire geo-redundant storage account to the secondary region if the storage service endpoints for the primary region become unavailable. During failover, the original secondary region becomes the new primary region. All storage service endpoints are then redirected to the *new* primary region. After the storage service endpoint outage is resolved, you can perform another failover operation to fail *back* to the original primary region.
+
+This article describes what happens during a customer-managed (unplanned) failover and failback at every stage of the process.
[!INCLUDE [updated-for-az](../../../includes/storage-failover-unplanned-hns-preview-include.md)]
-## Redundancy management during failover and failback
+## Redundancy management during unplanned failover and failback
> [!TIP]
-> To understand the various redundancy states during the storage account failover and failback process in detail, see [Azure Storage redundancy](storage-redundancy.md) for definitions of each.
+> To understand the various redundancy states during the unplanned failover and failback process in detail, see [Azure Storage redundancy](storage-redundancy.md) for definitions of each.
-When a storage account is configured for GRS or RA-GRS redundancy, data is replicated three times locally within both the primary and secondary regions (LRS). When a storage account is configured for GZRS or RA-GZRS replication, data is zone-redundant within the primary region (ZRS) and replicated three times locally within the secondary region (LRS). If the account is configured for read access (RA), you will be able to read data from the secondary region as long as the storage service endpoints to that region are available.
+When a storage account is configured for geo-redundant storage (GRS) or read-access geo-redundant storage (RA-GRS), data is replicated three times using locally redundant storage (LRS) within both the primary and secondary regions. When a storage account is configured for geo-zone-redundant storage (GZRS) or read-access geo-zone-redundant storage (RA-GZRS), data is replicated using zone-redundant storage (ZRS) within the primary region and three times using LRS within the secondary region. If the account is configured for read access (RA), you're able to read data from the secondary region as long as the storage service endpoints to that region are available.
-During the customer-managed failover process, the DNS entries for the storage service endpoints are changed such that those for the secondary region become the new primary endpoints for your storage account. After failover, the copy of your storage account in the original primary region is deleted and your storage account continues to be replicated three times locally within the original secondary region (the new primary). At that point, your storage account becomes locally redundant (LRS).
+During the customer-managed (unplanned) failover process, the Domain Name System (DNS) entries for the storage service endpoints are switched. Your storage account's secondary endpoints become the new primary endpoints, and the original primary endpoints become the new secondary. After failover, the copy of your storage account in the original primary region is deleted and your storage account continues to be replicated three times locally within the *new* primary region. At that point, your storage account becomes locally redundant and utilizes LRS.
-The original and current redundancy configurations are stored in the properties of the storage account to allow you eventually return to your original configuration when you fail back.
+The original and current redundancy configurations are stored within the storage account's properties. This functionality allows you to return to your original configuration when you fail back. For a complete list of resulting redundancy configurations, read [Recovery planning and failover](storage-disaster-recovery-guidance.md#plan-for-failover).
-To regain geo-redundancy after a failover, you will need to reconfigure your account as GRS. (GZRS is not an option post-failover since the new primary will be LRS after the failover). After the account is reconfigured for geo-redundancy, Azure immediately begins copying data from the new primary region to the new secondary. If you configure your storage account for read access (RA) to the secondary region, that access will be available but it may take some time for replication from the primary to make the secondary current.
+To regain geo-redundancy after a failover, you need to reconfigure your account as GRS.<!--Keep in mind that GZRS isn't a post-failover option because your storage account utilizes LRS after the failover completes.--> After the account is reconfigured for geo-redundancy, Azure immediately begins copying data from the new primary region to the new secondary. If you configure your storage account for read access to the secondary region, that access is available. However, replication from the primary to the secondary region might take some time to complete.
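+As an illustration, you could reconfigure the account as GRS with a single PowerShell call. The following is a minimal sketch, assuming the Az.Storage module and placeholder names:
+
+```powershell
+# Sketch: convert the post-failover LRS account back to geo-redundant storage.
+Set-AzStorageAccount `
+    -ResourceGroupName "<resource-group-name>" `
+    -Name "<storage-account-name>" `
+    -SkuName Standard_GRS
+```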
> [!WARNING] > After your account is reconfigured for geo-redundancy, it may take a significant amount of time before existing data in the new primary region is fully copied to the new secondary. >
-> **To avoid a major data loss**, check the value of the [**Last Sync Time**](last-sync-time-get.md) property before failing back. Compare the last sync time to the last times that data was written to the new primary to evaluate potential data loss.
+> **To avoid a major data loss**, check the value of the [**Last Sync Time**](last-sync-time-get.md) property before failing back. To evaluate potential data loss, compare the last sync time to the last time at which data was written to the new primary.
-The failback process is essentially the same as the failover process except Azure restores the replication configuration to its original state before it was failed over (the replication configuration, not the data). So, if your storage account was originally configured as GZRS, the primary region after faillback becomes ZRS.
+The failback process is essentially the same as the failover process, except that the replication configuration is restored to its original, pre-failover state.
-After failback, you can configure your storage account to be geo-redundant again. If the original primary region was configured for LRS, you can configure it to be GRS or RA-GRS. If the original primary was configured as ZRS, you can configure it to be GZRS or RA-GZRS. For additional options, see [Change how a storage account is replicated](redundancy-migration.md).
+After failback, you can reconfigure your storage account to take advantage of geo-redundancy. If the original primary was configured as ZRS, you can configure it to be GZRS or RA-GZRS. For more options, see [Change how a storage account is replicated](redundancy-migration.md).
-## How to initiate a failover
+## How to initiate an unplanned failover
-To learn how to initiate a failover, see [Initiate a storage account failover](storage-initiate-account-failover.md).
+To learn how to initiate an unplanned failover, see [Initiate an account failover](storage-initiate-account-failover.md).
> [!CAUTION]
-> Storage account failover usually involves some data loss, and potentially file and data inconsistencies. It's important to understand the impact that an account failover would have on your data before initiating one.
+> Unplanned failover usually involves some data loss, and potentially file and data inconsistencies. It's important to understand the impact that an account failover would have on your data before initiating this type of failover.
> > For details about potential data loss and inconsistencies, see [Anticipate data loss and inconsistencies](storage-disaster-recovery-guidance.md#anticipate-data-loss-and-inconsistencies).
-## The failover and failback process
+## The unplanned failover and failback process
-This section summarizes the failover process for a customer-managed failover.
+This section summarizes the process of a customer-managed (unplanned) failover.
-### Failover transition summary
+### Unplanned failover transition summary
-After a customer-managed failover:
+After a customer-managed (unplanned) failover:
- The secondary region becomes the new primary
- The copy of the data in the original primary region is deleted
-- The storage account is converted to LRS
+- The storage account is converted to LRS
- Geo-redundancy is lost
-This table summarizes the resulting redundancy configuration at every stage of a customer-managed failover and failback:
+This table summarizes the resulting redundancy configuration at every stage of a customer-managed (unplanned) failover and failback:
| Original <br> configuration | After <br> failover | After re-enabling <br> geo redundancy | After <br> failback | After re-enabling <br> geo redundancy |
|---|---|---|---|---|
| GRS | LRS | GRS <sup>1</sup> | LRS | GRS <sup>1</sup> |
| GZRS | LRS | GRS <sup>1</sup> | ZRS | GZRS <sup>1</sup> |
-<sup>1</sup> Geo-redundancy is lost during a customer-managed failover and must be manually reconfigured.<br>
+<sup>1</sup> Geo-redundancy is lost during a customer-managed (unplanned) failover and must be manually reconfigured.<br>
-### Failover transition details
+### Unplanned failover transition details
-The following diagrams show what happens during customer-managed failover and failback of a storage account that is configured for geo-redundancy. The transition details for GZRS and RA-GZRS are slightly different from GRS and RA-GRS.
+The following diagrams show the customer-managed (unplanned) failover and failback process for a storage account configured for geo-redundancy. The transition details for GZRS and RA-GZRS are slightly different from GRS and RA-GRS.
## [GRS/RA-GRS](#tab/grs-ra-grs)
Under normal circumstances, a client writes data to a storage account in the pri
### The storage service endpoints become unavailable in the primary region (GRS/RA-GRS)
-If the primary storage service endpoints become unavailable for any reason (1), the client is no longer able to write to the storage account. Depending on the underlying cause of the outage, replication to the secondary region may no longer be functioning (2), so [some data loss should be expected](storage-disaster-recovery-guidance.md#anticipate-data-loss-and-inconsistencies). The following image shows the scenario where the primary endpoints have become unavailable, but no recovery has occurred yet:
+If the primary storage service endpoints become unavailable for any reason (1), the client is no longer able to write to the storage account. Depending on the underlying cause of the outage, replication to the secondary region might no longer be functioning (2), so [some data loss should be expected](storage-disaster-recovery-guidance.md#anticipate-data-loss-and-inconsistencies). The following image shows the scenario where the primary endpoints are unavailable but recovery hasn't yet occurred:
-### The failover process (GRS/RA-GRS)
+### The unplanned failover process (GRS/RA-GRS)
-To restore write access to your data, you can [initiate a failover](storage-initiate-account-failover.md). The storage service endpoint URIs for blobs, tables, queues, and files remain the same but their DNS entries are changed to point to the secondary region (1) as show in this image:
+To restore write access to your data, you can [initiate a failover](storage-initiate-account-failover.md). The storage service endpoint URIs for blobs, tables, queues, and files remain unchanged, but their DNS entries are changed to point to the secondary region as shown:
:::image type="content" source="media/storage-failover-customer-managed-unplanned/failover-to-secondary-geo-redundant.png" alt-text="Diagram that shows how the customer initiates account failover to secondary endpoint." lightbox="media/storage-failover-customer-managed-unplanned/failover-to-secondary-geo-redundant.png":::
-Customer-managed failover typically takes about an hour.
+Customer-managed (unplanned) failover typically takes about an hour.
-After the failover is complete, the original secondary becomes the new primary (1) and the copy of the storage account in the original primary is deleted (2). The storage account is configured as LRS in the new primary region and is no longer geo-redundant. Users can resume writing data to the storage account (3) as shown in this image:
+After the failover is complete, the original secondary becomes the new primary (1), and the copy of the storage account in the original primary is deleted (2). The storage account is configured as LRS in the new primary region, and is no longer geo-redundant. Users can resume writing data to the storage account (3), as shown in this image:
:::image type="content" source="media/storage-failover-customer-managed-unplanned/post-failover-geo-redundant.png" alt-text="Diagram that shows the storage account status post-failover to secondary region." lightbox="media/storage-failover-customer-managed-unplanned/post-failover-geo-redundant.png":::
To resume replication to a new secondary region, reconfigure the account for geo
> [!IMPORTANT] > Keep in mind that converting a locally redundant storage account to use geo-redundancy incurs both cost and time. For more information, see [The time and cost of failing over](storage-disaster-recovery-guidance.md#the-time-and-cost-of-failing-over).
-After re-configuring the account as GRS, Azure begins copying your data asynchronously to the new secondary region (1) as shown in this image:
+After reconfiguring the account to utilize GRS, Azure begins copying your data asynchronously to the new secondary region (1) as shown in this image:
:::image type="content" source="media/storage-failover-customer-managed-unplanned/post-failover-geo-redundant-geo.png" alt-text="Diagram that shows the storage account status post-failover to secondary region as GRS." lightbox="media/storage-failover-customer-managed-unplanned/post-failover-geo-redundant-geo.png":::
-Read access to the new secondary region will not become available again until the issue causing the original outage has been resolved.
+Read access to the new secondary region isn't available again until the issue causing the original outage is resolved.
-### The failback process (GRS/RA-GRS)
+### The unplanned failback process (GRS/RA-GRS)
> [!WARNING]
-> After your account is reconfigured for geo-redundancy, it may take a significant amount of time before the data in the new primary region is fully copied to the new secondary.
+> After your account is reconfigured for geo-redundancy, it might take a significant amount of time before the data in the new primary region is fully copied to the new secondary.
> > **To avoid a major data loss**, check the value of the [**Last Sync Time**](last-sync-time-get.md) property before failing back. Compare the last sync time to the last times that data was written to the new primary to evaluate potential data loss.
-Once the issue causing the original outage has been resolved, you can initiate another failover to fail back to the original primary region, resulting in the following:
+After the issue causing the original outage is resolved, you can initiate failback to the original primary region. The following steps, illustrated in the subsequent image, describe the failback process:
1. The current primary region becomes read-only.
-1. With customer-initiated failover and failback, your data is not allowed to finish replicating to the secondary region during the failback process. Therefore, it is important to check the value of the [**Last Sync Time**](last-sync-time-get.md) property before failing back.
-1. The DNS entries for the storage service endpoints are changed such that those for the secondary region become the new primary endpoints for your storage account.
+1. With customer-initiated failover and failback, your data isn't allowed to finish replicating to the secondary region during the failback process. Therefore, it's important to check the value of the [**Last Sync Time**](last-sync-time-get.md) property before failing back.
+1. The DNS entries for the storage service endpoints are switched. The endpoints within the secondary region become the new primary endpoints for your storage account.
:::image type="content" source="media/storage-failover-customer-managed-unplanned/failback-to-primary-geo-redundant.png" alt-text="Diagram that shows how the customer initiates account failback to original primary region." lightbox="media/storage-failover-customer-managed-unplanned/failback-to-primary-geo-redundant.png":::
-After the failback is complete, the original primary region becomes the current one again (1) and the copy of the storage account in the original secondary is deleted (2). The storage account is configured as locally redundant in the primary region and is no longer geo-redundant. Users can resume writing data to the storage account (3) as shown in this image:
+After the failback is complete, the original primary region becomes the current one again (1), and the copy of the storage account in the original secondary is deleted (2). The storage account is configured as locally redundant in the primary region, and is no longer geo-redundant. Users can resume writing data to the storage account (3), as shown in this image:
:::image type="content" source="media/storage-failover-customer-managed-unplanned/post-failback-geo-redundant.png" alt-text="Diagram that shows the Post-failback status." lightbox="media/storage-failover-customer-managed-unplanned/post-failback-geo-redundant.png":::
-To resume replication to the original secondary region, configure the account for geo-redundancy again.
+To resume replication to the original secondary region, reconfigure the account for geo-redundancy.
> [!IMPORTANT] > Keep in mind that converting a locally redundant storage account to use geo-redundancy incurs both cost and time. For more information, see [The time and cost of failing over](storage-disaster-recovery-guidance.md#the-time-and-cost-of-failing-over).
-After re-configuring the account as GRS, replication to the original secondary region resumes as shown in this image:
+After reconfiguring the account as GRS, replication to the original secondary region resumes as shown in this image:
:::image type="content" source="media/storage-failover-customer-managed-unplanned/post-failback-geo-redundant-geo.png" alt-text="Diagram that shows how the redundancy configuration returns to its original state." lightbox="media/storage-failover-customer-managed-unplanned/post-failback-geo-redundant-geo.png":::
Under normal circumstances, a client writes data to a storage account in the pri
### The storage service endpoints become unavailable in the primary region (GZRS/RA-GZRS)
-If the primary storage service endpoints become unavailable for any reason (1), the client is no longer able to write to the storage account. Depending on the underlying cause of the outage, replication to the secondary region may no longer be functioning (2), [so some data loss should be expected](storage-disaster-recovery-guidance.md#anticipate-data-loss-and-inconsistencies). The following image shows the scenario where the primary endpoints have become unavailable, but no recovery has occurred yet:
+If the primary storage service endpoints become unavailable for any reason (1), the client is no longer able to write to the storage account. Depending on the underlying cause of the outage, replication to the secondary region might no longer be taking place (2), [so some data loss should be expected](storage-disaster-recovery-guidance.md#anticipate-data-loss-and-inconsistencies). The following image shows the scenario where the primary endpoints are unavailable but recovery hasn't yet occurred:
-### The failover process (GZRS/RA-GZRS)
+### The unplanned failover process (GZRS/RA-GZRS)
-To restore write access to your data, you can [initiate a failover](storage-initiate-account-failover.md). The storage service endpoint URIs for blobs, tables, queues, and files remain the same but their DNS entries are changed to point to the secondary region (1) as show in this image:
+To restore write access to your data, you can [initiate a failover](storage-initiate-account-failover.md). The storage service endpoint URIs for blobs, tables, queues, and files remain the same, but their DNS entries are changed to point to the secondary region (1), as shown in this image:
:::image type="content" source="media/storage-failover-customer-managed-unplanned/failover-to-secondary-geo-zone-redundant.png" alt-text="Diagram that shows how the customer initiates account failover to the secondary endpoint." lightbox="media/storage-failover-customer-managed-unplanned/failover-to-secondary-geo-zone-redundant.png"::: The failover typically takes about an hour.
-After the failover is complete, the original secondary becomes the new primary (1) and the copy of the storage account in the original primary is deleted (2). The storage account is configured as LRS in the new primary region and is no longer geo-redundant. Users can resume writing data to the storage account (3) as shown in this image:
+After the failover is complete, the original secondary becomes the new primary (1), and the copy of the storage account in the original primary is deleted (2). The storage account is configured as LRS in the new primary region, and is no longer geo-redundant. Users can resume writing data to the storage account (3), as shown in this image:
:::image type="content" source="media/storage-failover-customer-managed-unplanned/post-failover-geo-redundant.png" alt-text="Diagram that shows the storage account status post-failover to secondary region." lightbox="media/storage-failover-customer-managed-unplanned/post-failover-geo-redundant.png"::: To resume replication to a new secondary region, reconfigure the account for geo-redundancy.
-Since the account was originally configured as GZRS, reconfiguring geo-redundancy after failover causes the original ZRS redundancy within the new secondary region (the original primary) to be retained. However, the redundancy configuration within the current primary always determines the effective geo-redundancy of a storage account. Since the current primary in this case is LRS, the effective geo-redundancy at this point is GRS, not GZRS.
+Since the account was originally configured as GZRS, reconfiguring geo-redundancy after failover causes the original ZRS redundancy within the new secondary region (the original primary) to be retained. However, the redundancy configuration within the *current* primary always determines the effective geo-redundancy of a storage account. Since the *current* primary in this case is LRS, the effective geo-redundancy at this point is GRS, not GZRS.
> [!IMPORTANT] > Keep in mind that converting a locally redundant storage account to use geo-redundancy incurs both cost and time. For more information, see [The time and cost of failing over](storage-disaster-recovery-guidance.md#the-time-and-cost-of-failing-over).
-After re-configuring the account as GRS, Azure begins copying your data asynchronously to the new secondary region (1) as shown in this image:
+After reconfiguring the account as GRS, Azure begins copying your data asynchronously to the new secondary region (1), as shown in this image:
:::image type="content" source="media/storage-failover-customer-managed-unplanned/post-failover-geo-zone-redundant-geo.png" alt-text="Diagram that shows the storage account status post-failover to secondary region as GRS." lightbox="media/storage-failover-customer-managed-unplanned/post-failover-geo-zone-redundant-geo.png":::
-Read access to the new secondary region will not become available again until the issue causing the original outage has been resolved.
+Read access to the new secondary region isn't available again until the original outage is resolved.
-### The failback process (GZRS/RA-GZRS)
+### The unplanned failback process (GZRS/RA-GZRS)
> [!WARNING] > After your account is reconfigured for geo-redundancy, it may take a significant amount of time before the data in the new primary region is fully copied to the new secondary. > > **To avoid a major data loss**, check the value of the [**Last Sync Time**](last-sync-time-get.md) property before failing back. Compare the last sync time to the last times that data was written to the new primary to evaluate potential data loss.
-Once the issue causing the original outage has been resolved, you can initiate another failover to fail back to the original primary region, resulting in the following:
+After the issue causing the original outage is resolved, you can initiate failback to the original primary region. The following steps, illustrated in the subsequent image, describe the failback process:
1. The current primary region becomes read-only.
-1. With customer-initiated failover and failback, your data is not allowed to finish replicating to the secondary region during the failback process. Therefore, it is important to check the value of the [**Last Sync Time**](last-sync-time-get.md) property before failing back.
-1. The DNS entries for the storage service endpoints are changed such that those for the secondary region become the new primary endpoints for your storage account.
+1. During customer-initiated failover and failback, your data isn't allowed to finish replicating to the secondary region during the failback process. Therefore, it's important to check the value of the [**Last Sync Time**](last-sync-time-get.md) property before failing back.
+1. The DNS entries for the storage service endpoints are switched. The secondary endpoints become the new primary endpoints for your storage account.
:::image type="content" source="media/storage-failover-customer-managed-unplanned/failback-to-primary-geo-zone-redundant.png" alt-text="Diagram that shows the customer initiating account failback to the original primary region." lightbox="media/storage-failover-customer-managed-unplanned/failback-to-primary-geo-zone-redundant.png":::
-After the failback is complete, the original primary region becomes the current one again (1) and the copy of the storage account in the original secondary is deleted (2). The storage account is configured as ZRS in the primary region and is no longer geo-redundant. Users can resume writing data to the storage account (3) as shown in this image:
+After the failback is complete, the original primary region becomes the current one again (1), and the copy of the storage account in the original secondary is deleted (2). The storage account is configured as ZRS in the primary region, and is no longer geo-redundant. Users can resume writing data to the storage account (3), as shown in this image:
:::image type="content" source="media/storage-failover-customer-managed-unplanned/post-failback-geo-zone-redundant.png" alt-text="Diagram that shows the post-failback status." lightbox="media/storage-failover-customer-managed-unplanned/post-failback-geo-zone-redundant.png":::
-To resume replication to the original secondary region, configure the account for geo-redundancy again.
+To resume replication to the original secondary region, reconfigure the account for geo-redundancy.
> [!IMPORTANT] > Keep in mind that converting a ZRS storage account to use geo-redundancy incurs both cost and time. For more information, see [The time and cost of failing over](storage-disaster-recovery-guidance.md#the-time-and-cost-of-failing-over).
-After re-configuring the account as GZRS, replication to the original secondary region resumes as shown in this image:
+After reconfiguring the account as GZRS, replication to the original secondary region resumes as shown in this image:
:::image type="content" source="media/storage-failover-customer-managed-unplanned/post-failback-geo-zone-redundant-geo.png" alt-text="Diagram that shows the redundancy configuration returns to its original state." lightbox="media/storage-failover-customer-managed-unplanned/post-failback-geo-zone-redundant-geo.png":::
After re-configuring the account as GZRS, replication to the original secondary
- [Disaster recovery planning and failover](storage-disaster-recovery-guidance.md)
- [Azure Storage redundancy](storage-redundancy.md)
-- [Initiate an account failover](storage-initiate-account-failover.md)
+- [Initiate an account failover](storage-initiate-account-failover.md)
storage Storage Initiate Account Failover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-initiate-account-failover.md
Title: Initiate a storage account failover
-description: Learn how to initiate an account failover in the event that the primary endpoint for your storage account becomes unavailable. The failover updates the secondary region to become the primary region for your storage account.
+description: Learn how to initiate the failover process for your storage account. Failover can be initiated if the primary storage service endpoints become unavailable, or to perform disaster recovery testing. The failover process updates the secondary region to become the primary region for your storage account.
Previously updated : 09/15/2023 Last updated : 06/13/2024

# Initiate a storage account failover
-If the primary endpoint for your geo-redundant storage account becomes unavailable for any reason, you can initiate an account failover. An account failover updates the secondary endpoint to become the primary endpoint for your storage account. Once the failover is complete, clients can begin writing to the new primary region. Forced failover enables you to maintain high availability for your applications.
+Microsoft strives to ensure that Azure services are always available. However, unplanned service outages might occasionally occur. To help minimize downtime, Azure Storage supports customer-managed failover to keep your data available during both partial and complete outages.
-This article shows how to initiate an account failover for your storage account using the Azure portal, PowerShell, or Azure CLI. To learn more about account failover, see [Disaster recovery and storage account failover](storage-disaster-recovery-guidance.md).
+This article shows how to initiate an account failover for your storage account using the Azure portal, PowerShell, or the Azure CLI. To learn more about account failover, see [Azure storage disaster recovery planning and failover](storage-disaster-recovery-guidance.md).
-> [!WARNING]
-> An account failover typically results in some data loss. To understand the implications of an account failover and to prepare for data loss, review [Data loss and inconsistencies](storage-disaster-recovery-guidance.md#anticipate-data-loss-and-inconsistencies).
- ## Prerequisites
-Before you can perform an account failover on your storage account, make sure that:
+Review these important topics detailed in the [disaster recovery guidance](storage-disaster-recovery-guidance.md#plan-for-failover) article before initiating a customer-managed failover.
-> [!div class="checklist"]
-> - Your storage account is configured for geo-replication (GRS, GZRS, RA-GRS or RA-GZRS). For more information about Azure Storage redundancy, see [Azure Storage redundancy](storage-redundancy.md).
-> - The type of your storage account supports customer-initiated failover. See [Supported storage account types](storage-disaster-recovery-guidance.md#supported-storage-account-types).
-> - Your storage account doesn't have any features or services enabled that are not supported for account failover. See [Unsupported features and services](storage-disaster-recovery-guidance.md#unsupported-features-and-services) for a detailed list.
+- **Potential data loss**: Data loss should be expected during an unplanned storage account failover. For details on the implications of an unplanned account failover and how to prepare for data loss, see the [Anticipate data loss and inconsistencies](storage-disaster-recovery-guidance.md#anticipate-data-loss-and-inconsistencies) section.
+- **Geo-redundancy**: Before you can perform a failover, your storage account must be configured for geo-redundancy, and initial synchronization from the primary to the secondary region must be complete. If your account isn't configured for geo-redundancy, you can change it by following the steps in [Change how a storage account is replicated](redundancy-migration.md). For more information about Azure storage redundancy options, see [Azure Storage redundancy](storage-redundancy.md). A quick way to verify your account's configuration is shown in the sketch after this list.
+- **Understand the different types of account failover**: There are two types of customer-managed failover. See the [Plan for failover](storage-disaster-recovery-guidance.md#plan-for-failover) article to learn about potential use cases for each type, and how they differ.
+- **Plan for unsupported features and services**: Review the [Unsupported features and services](storage-disaster-recovery-guidance.md#unsupported-features-and-services) article and take appropriate action before initiating a failover.
+- **Supported storage account types**: Ensure that your storage account type can be used to initiate a failover. See [Supported storage account types](storage-disaster-recovery-guidance.md#supported-storage-account-types).
+- **Set your expectations for timing and cost**: The time it takes the customer-managed failover process to complete can vary, but typically takes less than one hour. An unplanned failover results in the loss of geo-redundancy configuration. Reconfiguring geo-redundant storage (GRS) typically incurs extra time and cost. For more information, see the [time and cost of failing over](storage-disaster-recovery-guidance.md#the-time-and-cost-of-failing-over) section.
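+As a quick preflight for the geo-redundancy requirement above, the following sketch (assuming the Az.Storage module and placeholder names) checks whether the account's SKU is geo-redundant:
+
+```powershell
+# Sketch: a customer-managed failover requires a geo-redundant SKU.
+$geoSkus = 'Standard_GRS', 'Standard_RAGRS', 'Standard_GZRS', 'Standard_RAGZRS'
+$sku = (Get-AzStorageAccount `
+    -ResourceGroupName "<resource-group-name>" `
+    -Name "<storage-account-name>").Sku.Name
+if ($sku -notin $geoSkus) {
+    Write-Warning "SKU '$sku' isn't geo-redundant; configure geo-redundancy before failing over."
+}
+```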
## Initiate the failover
-You can initiate an account failover from the Azure portal, PowerShell, or the Azure CLI.
+You can initiate either a planned or unplanned customer-managed failover using the Azure portal, PowerShell, or the Azure CLI.
[!INCLUDE [updated-for-az](~/reusable-content/ce-skilling/azure/includes/updated-for-az.md)]

## [Portal](#tab/azure-portal)
-To initiate an account failover from the Azure portal, follow these steps:
+Complete the following steps to initiate an account failover using the Azure portal:
1. Navigate to your storage account.
-1. Under **Settings**, select **Geo-replication**. The following image shows the geo-replication and failover status of a storage account.
- :::image type="content" source="media/storage-initiate-account-failover/portal-failover-prepare.png" alt-text="Screenshot showing geo-replication and failover status":::
+1. Select **Redundancy** from within the **Data management** group. The following image shows the geo-redundancy configuration and failover status of a storage account.
+
+ :::image type="content" source="media/storage-initiate-account-failover/portal-failover-redundancy.png" alt-text="Screenshot showing redundancy and failover status." lightbox="media/storage-initiate-account-failover/portal-failover-redundancy.png":::
+
+1. Verify that your storage account is configured for geo-redundant storage (GRS, RA-GRS, GZRS, or RA-GZRS). If it's not, select the desired redundancy configuration from the **Redundancy** drop-down and select **Save** to commit your change. After the geo-redundancy configuration is changed, your data is synchronized from the primary to the secondary region. This synchronization takes several minutes, and failover can't be initiated until all data is replicated. The following message appears until the synchronization is complete:
+
+ :::image type="content" source="media/storage-initiate-account-failover/portal-failover-repl-in-progress.png" alt-text="Screenshot showing the location of the message indicating that synchronization is still in progress." lightbox="media/storage-initiate-account-failover/portal-failover-repl-in-progress.png":::
+
+1. Select **Prepare for Customer-Managed failover** as shown in the following image:
+
+ :::image type="content" source="media/storage-initiate-account-failover/portal-failover-redundancy.png" alt-text="Screenshot showing redundancy and failover status." lightbox="media/storage-initiate-account-failover/portal-failover-redundancy.png":::
+
+1. Select the type of failover for which you're preparing. The confirmation page varies depending on the type of failover you select.
+ **If you select `Unplanned Failover`**:
+
+ A warning is displayed to alert you to the potential data loss, and to inform you about the need to manually reconfigure geo-redundancy after the failover:
+
+ :::image type="content" source="media/storage-initiate-account-failover/portal-failover-prepare-failover-unplanned-sml.png" alt-text="Screenshot showing the failover option selected on the Prepare for failover window." lightbox="media/storage-initiate-account-failover/portal-failover-prepare-failover-unplanned-lrg.png":::
+
+ **If you select `Planned failover`** (preview):
-1. Verify that your storage account is configured for geo-redundant storage (GRS) or read-access geo-redundant storage (RA-GRS). If it's not, then select **Configuration** under **Settings** to update your account to be geo-redundant.
-1. The **Last Sync Time** property indicates how far the secondary is behind from the primary. **Last Sync Time** provides an estimate of the extent of data loss that you will experience after the failover is completed. For more information about checking the **Last Sync Time** property, see [Check the Last Sync Time property for a storage account](last-sync-time-get.md).
-1. Select **Prepare for failover**.
-1. Review the confirmation dialog. When you are ready, enter **Yes** to confirm and initiate the failover.
+ The **Last Sync Time** value is displayed. Failover doesn't occur until after all data is synchronized to the secondary region, preventing data from being lost.
- :::image type="content" source="media/storage-initiate-account-failover/portal-failover-confirm.png" alt-text="Screenshot showing confirmation dialog for an account failover":::
+ :::image type="content" source="media/storage-initiate-account-failover/portal-failover-prepare-failover-planned-sml.png" alt-text="Screenshot showing the planned failover option selected on the Prepare for failover window." lightbox="media/storage-initiate-account-failover/portal-failover-prepare-failover-planned-lrg.png":::
+
+ Since the redundancy configuration within each region doesn't change during a planned failover or failback, there's no need to manually reconfigure geo-redundancy after a failover.
+
+1. Review the **Prepare for failover** page. When you're ready, type **yes** and select **Failover** to confirm and initiate the failover process.
+
+ A message is displayed to indicate that the failover is in progress:
+
+ :::image type="content" source="media/storage-initiate-account-failover/portal-failover-in-progress.png" alt-text="Screenshot showing the failover in-progress message." lightbox="media/storage-initiate-account-failover/portal-failover-in-progress-redundancy.png":::
## [PowerShell](#tab/azure-powershell)
-To use PowerShell to initiate an account failover, install the [Az.Storage](https://www.powershellgallery.com/packages/Az.Storage) module, version 2.0.0 or later. For more information about installing Azure PowerShell, see [Install the Azure Az PowerShell module](/powershell/azure/install-azure-powershell).
+To get the current redundancy and failover information for your storage account, and then initiate a failover, follow these steps:
+
+> [!div class="checklist"]
+> - [Install the Azure Storage preview module for PowerShell](#install-the-azure-storage-preview-module-for-powershell)
+> - [Get the current status of the storage account with PowerShell](#get-the-current-status-of-the-storage-account-with-powershell)
+> - [Initiate a failover of the storage account with PowerShell](#initiate-a-failover-of-the-storage-account-with-powershell)
+
+### Install the Azure Storage preview module for PowerShell
+
+To use PowerShell to initiate and monitor a **planned** customer-managed account failover (preview) in addition to a customer-initiated failover, install the [Az.Storage 5.2.2-preview module](https://www.powershellgallery.com/packages/Az.Storage/5.2.2-preview). Earlier versions of the module support customer-managed failover (unplanned), but not planned failover. The preview version supports the new `FailoverType` parameter. Valid values include either `planned` or `unplanned`.
+
+#### Installing and running the preview module on PowerShell 5.1
+
+As a best practice, install and use the latest version of PowerShell. If you have trouble installing the preview module using older PowerShell versions, you might need to [update PowerShellGet to the latest version](/powershell/gallery/powershellget/update-powershell-51) before installing the Az.Storage 5.2.2 preview module.
+
+To install the latest version of PowerShellGet and the Az.Storage preview module, perform the following steps:
-To initiate an account failover from PowerShell, call the following command:
+1. Use the following cmdlet to update PowerShellGet:
+
+ ```powershell
+ Install-Module PowerShellGet -Repository PSGallery -Force
+ ```
+
+1. Close and reopen PowerShell.
+1. Install the Az.Storage preview module using the following cmdlet:
+
+ ```powershell
+ Install-Module -Name Az.Storage -RequiredVersion 5.2.2-preview -AllowPrerelease
+ ```
+
+1. Determine whether you already have a higher version of the Az.Storage module installed by running the command:
+
+ ```powershell
+ Get-InstalledModule Az.Storage -AllVersions
+ ```
+
+    If a higher version, such as 5.3.0 or 5.4.0, is also installed, you need to explicitly import the preview version before using it.
+
+1. Close and reopen PowerShell again.
+1. Before running any other commands, import the preview version of the module using the following command:
+
+ ```powershell
+ Import-Module Az.Storage -RequiredVersion 5.2.2
+ ```
+
+1. Verify that the `FailoverType` parameter is supported by running the following command:
+
+ ```powershell
+ Get-Help Invoke-AzStorageAccountFailover -Parameter FailoverType
+ ```
+
+For more information about installing Azure PowerShell, see [Install the Azure Az PowerShell module](/powershell/azure/install-az-ps).
+
+### Get the current status of the storage account with PowerShell
+
+Check the status of the storage account before failing over. Examine properties that can affect the failover, such as:
+
+- The primary and secondary regions and their status
+- The storage kind and access tier
+- The current failover status
+- The last sync time
+- The storage account SKU conversion status
+
+```powershell
+ # Log in first with Connect-AzAccount
+ Connect-AzAccount
+
+ # Specify the resource group name and storage account name
+ $rgName = "<your resource group name>"
+ $saName = "<your storage account name>"
+
+ # Get the storage account information
+ Get-AzStorageAccount `
+ -Name $saName `
+ -ResourceGroupName $rgName `
+ -IncludeGeoReplicationStats
+```
+
+To refine the list of properties in the display to the most relevant set, consider replacing the `Get-AzStorageAccount` command in the previous example with the following command:
```powershell
-Invoke-AzStorageAccountFailover -ResourceGroupName <resource-group-name> -Name <account-name>
+Get-AzStorageAccount `
+ -Name $saName `
+ -ResourceGroupName $rgName `
+ -IncludeGeoReplicationStats `
+ | Select-Object Location,PrimaryLocation,SecondaryLocation,StatusOfPrimary,StatusOfSecondary,@{E={$_.Kind};L="AccountType"},AccessTier,LastGeoFailoverTime,FailoverInProgress,StorageAccountSkuConversionStatus,GeoReplicationStats `
+ -ExpandProperty Sku `
+ | Select-Object Location,PrimaryLocation,SecondaryLocation,StatusOfPrimary,StatusOfSecondary,AccountType,AccessTier,@{E={$_.Name};L="RedundancyType"},LastGeoFailoverTime,FailoverInProgress,StorageAccountSkuConversionStatus `
+ -ExpandProperty GeoReplicationStats `
+ | Format-List
+```
+
+### Initiate a failover of the storage account with PowerShell
+
+```powershell
+Invoke-AzStorageAccountFailover `
+ -ResourceGroupName $rgName `
+ -Name $saName `
+ -FailoverType <planned|unplanned> # Specify either planned or unplanned failover
``` ## [Azure CLI](#tab/azure-cli)
-To use Azure CLI to initiate an account failover, call the following commands:
+Complete the following steps to get the current redundancy and failover information for your storage account, and then initiate a failover:
+
+> [!div class="checklist"]
+> - [Install the Azure Storage preview extension for Azure CLI](#install-the-azure-storage-preview-extension-for-azure-cli)
+> - [Get the current status of the storage account with Azure CLI](#get-the-current-status-of-the-storage-account-with-azure-cli)
+> - [Initiate a failover of the storage account with Azure CLI](#initiate-a-failover-of-the-storage-account-with-azure-cli)
+
+### Install the Azure Storage preview extension for Azure CLI
+
+1. Install the latest version of the Azure CLI. For more information, see [Install the Azure CLI](/cli/azure/install-azure-cli).
+1. Install the Azure CLI storage preview extension using the following command:
+
+ ```azurecli
+ az extension add -n storage-preview
+ ```
+
+ > [!IMPORTANT]
+ > The Azure CLI storage preview extension adds support for features or arguments that are currently in PREVIEW.
+ >
+ > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+### Get the current status of the storage account with Azure CLI
-```azurecli-interactive
-az storage account show \ --name accountName \ --expand geoReplicationStats
-az storage account failover \ --name accountName
+Run the following command to get the current geo-replication information for the storage account. Replace the placeholder values in angle brackets (**\<\>**) with your own values:
+
+```azurecli
+az storage account show \
+ --resource-group <resource-group-name> \
+ --name <storage-account-name> \
+ --expand geoReplicationStats
+```
+
+For more information about the `storage account show` command, run:
+
+```azurecli
+az storage account show --help
```
+### Initiate a failover of the storage account with Azure CLI
+
+Run the following command to initiate a failover of the storage account. Replace the placeholder values in angle brackets (**\<\>**) with your own values:
+
+```azurecli
+az storage account failover \
+ --resource-group <resource-group-name> \
+ --name <storage-account-name> \
+ --failover-type <planned|unplanned>
+```
+
+For more information about the `storage account failover` command, run:
+
+```azurecli
+az storage account failover --help
+```
+++
+## Monitor the failover
+
+You can monitor the status of the failover using the Azure portal, PowerShell, or the Azure CLI.
+
+## [Portal](#tab/azure-portal)
+
+The status of the failover is shown in the Azure portal in **Notifications**, in the activity log, and on the **Redundancy** page of the storage account.
+
+### Notifications
+
+To check the status of the failover, select the bell-shaped notification icon on the far right of the Azure portal global page header:
++
+### Activity log
+
+To view the detailed status of a failover, select the **More events in the activity log** link in the notification, or go to the **Activity log** page of the storage account:
++
+### Redundancy page
+
+Messages on the redundancy page of the storage account are displayed to provide failover status updates:
++
+If the failover is nearing completion, the redundancy page might show the original secondary region as the new primary, but still display a message indicating the failover is in progress:
++
+When the failover is complete, the redundancy page displays the last failover time and the new primary region's location. In the case of a planned failover, the new secondary region is also displayed. The following image shows the new storage account status after an unplanned failover:
++
+## [PowerShell](#tab/azure-powershell)
+
+You can use Azure PowerShell to get the current redundancy and failover information for your storage account. To check the status of the storage account failover, see [Get the current status of the storage account with PowerShell](#get-the-current-status-of-the-storage-account-with-powershell).
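+For example, a simple polling loop over the `FailoverInProgress` property (a sketch, reusing the `$rgName` and `$saName` variables from the earlier examples) can signal when the failover completes:
+
+```powershell
+# Sketch: poll until the failover completes. FailoverInProgress is $true while
+# a failover is underway and empty or $false once it finishes.
+do {
+    Start-Sleep -Seconds 60
+    $status = Get-AzStorageAccount -ResourceGroupName $rgName -Name $saName
+} while ($status.FailoverInProgress)
+Write-Output "Failover complete. Current primary: $($status.PrimaryLocation)"
+```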
+
+## [Azure CLI](#tab/azure-cli)
+
+You can use the Azure CLI to get the current redundancy and failover information for your storage account. To check the status of the storage account failover, see [Get the current status of the storage account with Azure CLI](#get-the-current-status-of-the-storage-account-with-azure-cli).
+
-## Important implications of account failover
+## Important implications of unplanned failover
-When you initiate an account failover for your storage account, the DNS records for the secondary endpoint are updated so that the secondary endpoint becomes the primary endpoint. Make sure that you understand the potential impact to your storage account before you initiate a failover.
+When you initiate an unplanned failover of your storage account, the DNS records for the secondary endpoint are updated so that the secondary endpoint becomes the primary endpoint. Make sure that you understand the potential impact to your storage account before you initiate a failover.
To estimate the extent of likely data loss before you initiate a failover, check the **Last Sync Time** property. For more information about checking the **Last Sync Time** property, see [Check the Last Sync Time property for a storage account](last-sync-time-get.md).
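For example, you can read the **Last Sync Time** value directly with the Azure CLI; the following is a minimal sketch (the property is reported in UTC, and the placeholder values in angle brackets are yours to supply):

```azurecli
# Retrieve only the last sync time for a geo-redundant account.
az storage account show \
    --resource-group <resource-group-name> \
    --name <storage-account-name> \
    --expand geoReplicationStats \
    --query geoReplicationStats.lastSyncTime \
    --output tsv
```

Any writes made after this timestamp might not yet be replicated to the secondary region and could be lost in a failover.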
After you re-enable GRS for your storage account, Microsoft begins replicating t
- If using Blob storage, the number of snapshots per blob. - If using Table storage, the [data partitioning strategy](/rest/api/storageservices/designing-a-scalable-partitioning-strategy-for-azure-table-storage). The replication process can't scale beyond the number of partition keys that you use.
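If you need to re-enable geo-redundancy after a failover, a minimal sketch (assuming `Standard_GRS` as the target SKU; substitute another geo-redundant SKU such as `Standard_RAGRS` as appropriate) looks like this:

```azurecli
# Convert the now locally redundant account back to geo-redundant storage.
az storage account update \
    --resource-group <resource-group-name> \
    --name <storage-account-name> \
    --sku Standard_GRS
```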
-## Next steps
+When an unplanned failover occurs, any data written to the primary region that hasn't yet replicated to the secondary region is lost as the secondary region becomes the new primary region. Those write operations need to be repeated after geo-redundancy is re-enabled. For more information, see [Azure storage disaster recovery planning and failover](storage-disaster-recovery-guidance.md#anticipate-data-loss-and-inconsistencies).
+
+The Azure Storage resource provider does not fail over during the failover process. As a result, the Azure Storage REST API's [Location](/dotnet/api/microsoft.azure.management.storage.models.trackedresource.location) property continues to return the original location after the failover is complete.
+
+Storage account failover is a temporary solution to a service outage and shouldn't be used as part of your data migration strategy. For information about how to migrate your storage accounts, see [Azure Storage migration overview](storage-migration-overview.md).
+
+## See also
- [Disaster recovery and storage account failover](storage-disaster-recovery-guidance.md) - [Check the Last Sync Time property for a storage account](last-sync-time-get.md) - [Use geo-redundancy to design highly available applications](geo-redundant-design.md)-- [Tutorial: Build a highly available application with Blob storage](../blobs/storage-create-geo-redundant-storage.md)
+- [Tutorial: Build a highly available application with Blob storage](../blobs/storage-create-geo-redundant-storage.md)
storage Storage Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-redundancy.md
Previously updated : 09/06/2023 Last updated : 01/19/2024
+ # Azure Storage redundancy
-Azure Storage always stores multiple copies of your data so that it's protected from planned and unplanned events, including transient hardware failures, network or power outages, and massive natural disasters. Redundancy ensures that your storage account meets its availability and durability targets even in the face of failures.
+Azure Storage always stores multiple copies of your data to protect it from planned and unplanned events. Examples of these events include transient hardware failures, network or power outages, and massive natural disasters. Redundancy ensures that your storage account meets its availability and durability targets even in the face of failures.
When deciding which redundancy option is best for your scenario, consider the tradeoffs between lower costs and higher availability. The factors that help determine which redundancy option you should choose include: - How your data is replicated within the primary region.-- Whether your data is replicated to a second region that is geographically distant to the primary region, to protect against regional disasters (geo-replication).-- Whether your application requires read access to the replicated data in the secondary region if the primary region becomes unavailable for any reason (geo-replication with read access).
+- Whether your data is replicated from a primary region to a second, geographically distant region, to protect against regional disasters (geo-replication).
+- Whether your application requires read access to the replicated data in the secondary region during an outage in the primary region (geo-replication with read access).
> [!NOTE] > The features and regional availability described in this article are also available to accounts that have a hierarchical namespace (Azure Blob storage). The services that comprise Azure Storage are managed through a common Azure resource called a *storage account*. The storage account represents a shared pool of storage that can be used to deploy storage resources such as blob containers (Blob Storage), file shares (Azure Files), tables (Table Storage), or queues (Queue Storage). For more information about Azure Storage accounts, see [Storage account overview](storage-account-overview.md).
-The redundancy setting for a storage account is shared for all storage services exposed by that account. All storage resources deployed in the same storage account have the same redundancy setting. You may want to isolate different types of resources in separate storage accounts if they have different redundancy requirements.
+The redundancy setting for a storage account is shared for all storage services exposed by that account. All storage resources deployed in the same storage account have the same redundancy setting. Consider isolating different types of resources in separate storage accounts if they have different redundancy requirements.
## Redundancy in the primary region
Data in an Azure Storage account is always replicated three times in the primary
Locally redundant storage (LRS) replicates your storage account three times within a single data center in the primary region. LRS provides at least 99.999999999% (11 nines) durability of objects over a given year.
-LRS is the lowest-cost redundancy option and offers the least durability compared to other options. LRS protects your data against server rack and drive failures. However, if a disaster such as fire or flooding occurs within the data center, all replicas of a storage account using LRS may be lost or unrecoverable. To mitigate this risk, Microsoft recommends using [zone-redundant storage](#zone-redundant-storage) (ZRS), [geo-redundant storage](#geo-redundant-storage) (GRS), or [geo-zone-redundant storage](#geo-zone-redundant-storage) (GZRS).
+LRS is the lowest-cost redundancy option and offers the least durability compared to other options. LRS protects your data against server rack and drive failures. However, if a disaster such as fire or flooding occurs within the data center, all replicas of a storage account using LRS might be lost or unrecoverable. To mitigate this risk, Microsoft recommends using [zone-redundant storage](#zone-redundant-storage) (ZRS), [geo-redundant storage](#geo-redundant-storage) (GRS), or [geo-zone-redundant storage](#geo-zone-redundant-storage) (GZRS).
A write request to a storage account that is using LRS happens synchronously. The write operation returns successfully only after the data is written to all three replicas.
The following diagram shows how your data is replicated within a single data cen
LRS is a good choice for the following scenarios: -- If your application stores data that can be easily reconstructed if data loss occurs, you may opt for LRS.-- If your application is restricted to replicating data only within a country or region due to data governance requirements, you may opt for LRS. In some cases, the paired regions across which the data is geo-replicated may be in another country or region. For more information on paired regions, see [Azure regions](https://azure.microsoft.com/regions/).-- If your scenario is using Azure unmanaged disks, you may opt for LRS. While it's possible to create a storage account for Azure unmanaged disks that uses GRS, it isn't recommended due to potential issues with consistency over asynchronous geo-replication.
+- If your application stores data that can be easily reconstructed if data loss occurs, consider choosing LRS.
+- If your application is restricted to replicating data only within a region due to data governance requirements, consider choosing LRS. In some cases, the paired region to which the data is geo-replicated might be in another country or region. For more information on paired regions, see [Azure regions](https://azure.microsoft.com/regions/).
+- If your scenario uses Azure unmanaged disks, consider using LRS. While it's possible to create a storage account for Azure unmanaged disks that uses GRS, it isn't recommended due to potential issues with consistency over asynchronous geo-replication.
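+
+As an illustrative sketch (the placeholder values in angle brackets are yours to supply), you select LRS when creating the account by setting the SKU:
+
+```azurecli
+# Create a standard general-purpose v2 account that uses locally redundant storage.
+az storage account create \
+    --resource-group <resource-group-name> \
+    --name <storage-account-name> \
+    --location <location> \
+    --kind StorageV2 \
+    --sku Standard_LRS
+```
+
+Substituting `Standard_ZRS`, `Standard_GRS`, or `Standard_GZRS` selects the other redundancy options described in this article.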
### Zone-redundant storage
-Zone-redundant storage (ZRS) replicates your storage account synchronously across three Azure availability zones in the primary region. Each availability zone is a separate physical location with independent power, cooling, and networking. ZRS offers durability for storage resources of at least 99.9999999999% (12 9's) over a given year.
+Zone-redundant storage (ZRS) replicates your storage account synchronously across three Azure availability zones in the primary region. Each availability zone is a separate physical location with independent power, cooling, and networking. ZRS offers durability for storage resources of at least 99.9999999999% (12 9s) over a given year.
-With ZRS, your data is still accessible for both read and write operations even if a zone becomes unavailable. If a zone becomes unavailable, Azure undertakes networking updates, such as DNS repointing. These updates may affect your application if you access data before the updates have completed. When designing applications for ZRS, follow practices for transient fault handling, including implementing retry policies with exponential back-off.
+When you use ZRS, your data remains accessible for both read and write operations even if a zone becomes unavailable. If a zone becomes unavailable, Azure undertakes networking updates such as Domain Name System (DNS) repointing. These updates could affect your application if you access data before the updates are complete. When designing applications for ZRS, follow practices for transient fault handling, including implementing retry policies with exponential back-off.
A write request to a storage account that is using ZRS happens synchronously. The write operation returns successfully only after the data is written to all replicas across the three availability zones. If an availability zone is temporarily unavailable, the operation returns successfully after the data is written to all available zones.
-Microsoft recommends using ZRS in the primary region for scenarios that require high availability. ZRS is also recommended for restricting replication of data to a particular country or region to meet data governance requirements.
+Microsoft recommends using ZRS in the primary region for scenarios that require high availability. ZRS is also recommended for restricting replication of data to a particular region to meet data governance requirements.
Microsoft recommends using ZRS for Azure Files workloads. If a zone becomes unavailable, no remounting of Azure file shares from the connected clients is required.
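At the time of writing, the Azure CLI exposes customer-initiated conversion of an existing account to ZRS through `az storage account migration start`. The following is a sketch only; verify that the command is available in your CLI version:

```azurecli
# Request a customer-initiated conversion of an existing account to ZRS.
# The conversion runs in the background; --no-wait returns immediately.
az storage account migration start \
    --resource-group <resource-group-name> \
    --account-name <storage-account-name> \
    --sku Standard_ZRS \
    --no-wait
```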
The following diagram shows how your data is replicated across availability zone
:::image type="content" source="media/storage-redundancy/zone-redundant-storage.png" alt-text="Diagram showing how data is replicated in the primary region with ZRS":::
-ZRS provides excellent performance, low latency, and resiliency for your data if it becomes temporarily unavailable. However, ZRS by itself may not protect your data against a regional disaster where multiple zones are permanently affected. For protection against regional disasters, Microsoft recommends using [geo-zone-redundant storage](#geo-zone-redundant-storage) (GZRS), which uses ZRS in the primary region and also geo-replicates your data to a secondary region.
+ZRS provides excellent performance, low latency, and resiliency for your data if it becomes temporarily unavailable. However, ZRS by itself might not fully protect your data against a regional disaster where multiple zones are permanently affected. [Geo-zone-redundant storage](#geo-zone-redundant-storage) (GZRS) uses ZRS in the primary region and also geo-replicates your data to a secondary region. GZRS is available in many regions, and is recommended for protection against regional disasters.
The archive tier for Blob Storage isn't currently supported for ZRS, GZRS, or RA-GZRS accounts. Unmanaged disks don't support ZRS or GZRS.
For more information about which regions support ZRS, see [Azure regions with av
ZRS is supported for all Azure Storage services through standard general-purpose v2 storage accounts, including: -- Azure Blob storage (hot and cool block blobs and append blobs, non-disk page blobs)
+- Azure Blob storage (hot and cool block blobs and append blobs, nondisk page blobs)
- Azure Files (all standard tiers: transaction optimized, hot, and cool) - Azure Table storage - Azure Queue storage
For a list of regions that support zone-redundant storage (ZRS) for managed disk
## Redundancy in a secondary region
-For applications requiring high durability, you can choose to additionally copy the data in your storage account to a secondary region that is hundreds of miles away from the primary region. If your storage account is copied to a secondary region, then your data is durable even in the case of a complete regional outage or a disaster in which the primary region isn't recoverable.
+Redundancy options can help provide high durability for your applications. In many regions, you can copy the data within your storage account to a secondary region located hundreds of miles away from the primary region. Copying your storage account to a secondary region ensures that your data remains durable during a complete regional outage or a disaster in which the primary region isn't recoverable.
When you create a storage account, you select the primary region for the account. The paired secondary region is determined based on the primary region, and can't be changed. For more information about regions supported by Azure, see [Azure regions](https://azure.microsoft.com/global-infrastructure/regions/).
Azure Storage offers two options for copying your data to a secondary region:
> [!NOTE] > The primary difference between GRS and GZRS is how data is replicated in the primary region. Within the secondary region, data is always replicated synchronously three times using LRS. LRS in the secondary region protects your data against hardware failures.
-With GRS or GZRS, the data in the secondary region isn't available for read or write access unless there's a failover to the primary region. For read access to the secondary region, configure your storage account to use read-access geo-redundant storage (RA-GRS) or read-access geo-zone-redundant storage (RA-GZRS). For more information, see [Read access to data in the secondary region](#read-access-to-data-in-the-secondary-region).
+When you use GRS or GZRS, the data in the secondary region isn't available for read or write access unless there's a failover to the secondary region. For read access to the secondary region, configure your storage account to use read-access geo-redundant storage (RA-GRS) or read-access geo-zone-redundant storage (RA-GZRS). For more information, see [Read access to data in the secondary region](#read-access-to-data-in-the-secondary-region).
-If the primary region becomes unavailable, you can choose to fail over to the secondary region. After the failover has completed, the secondary region becomes the primary region, and you can again read and write data. For more information on disaster recovery and to learn how to fail over to the secondary region, see [Disaster recovery and storage account failover](storage-disaster-recovery-guidance.md).
+If the primary region becomes unavailable, you can choose to fail over to the secondary region. After the failover operation completes, the secondary region becomes the primary region and you're able to read and write data. For more information on disaster recovery and to learn how to fail over to the secondary region, see [Disaster recovery and storage account failover](storage-disaster-recovery-guidance.md).
> [!IMPORTANT] > Because data is replicated to the secondary region asynchronously, a failure that affects the primary region may result in data loss if the primary region cannot be recovered. The interval between the most recent writes to the primary region and the last write to the secondary region is known as the recovery point objective (RPO). The RPO indicates the point in time to which data can be recovered. The Azure Storage platform typically has an RPO of less than 15 minutes, although there's currently no SLA on how long it takes to replicate data to the secondary region. ### Geo-redundant storage
-Geo-redundant storage (GRS) copies your data synchronously three times within a single physical location in the primary region using LRS. It then copies your data asynchronously to a single physical location in a secondary region that is hundreds of miles away from the primary region. GRS offers durability for storage resources of at least 99.99999999999999% (16 9's) over a given year.
+Geo-redundant storage (GRS) copies your data synchronously three times within a single physical location in the primary region using LRS. It then copies your data asynchronously to a single physical location in a secondary region that is hundreds of miles away from the primary region. GRS offers durability for storage resources of at least 99.99999999999999% (16 9s) over a given year.
-A write operation is first committed to the primary location and replicated using LRS. The update is then replicated asynchronously to the secondary region. When data is written to the secondary location, it's also replicated within that location using LRS.
+A write operation is first committed to the primary location and replicated using LRS. The update is then replicated asynchronously to the secondary region. When data is written to the secondary location, it's also replicated within that location using LRS.
The following diagram shows how your data is replicated with GRS or RA-GRS:
The following diagram shows how your data is replicated with GRS or RA-GRS:
### Geo-zone-redundant storage
-Geo-zone-redundant storage (GZRS) combines the high availability provided by redundancy across availability zones with protection from regional outages provided by geo-replication. Data in a GZRS storage account is copied across three [Azure availability zones](../../availability-zones/az-overview.md) in the primary region and is also replicated to a secondary geographic region for protection from regional disasters. Microsoft recommends using GZRS for applications requiring maximum consistency, durability, and availability, excellent performance, and resilience for disaster recovery.
+Geo-zone-redundant storage (GZRS) combines the high availability provided by redundancy across availability zones with protection from regional outages provided by geo-replication. Data in a GZRS storage account is copied across three [Azure availability zones](../../availability-zones/az-overview.md) in the primary region. It's also replicated to a secondary geographic region for protection from regional disasters. Microsoft recommends using GZRS for applications requiring maximum consistency, durability, and availability, excellent performance, and resilience for disaster recovery.
-With a GZRS storage account, you can continue to read and write data if an availability zone becomes unavailable or is unrecoverable. Additionally, your data is also durable in the case of a complete regional outage or a disaster in which the primary region isn't recoverable. GZRS is designed to provide at least 99.99999999999999% (16 9's) durability of objects over a given year.
+With a GZRS storage account, you can continue to read and write data if an availability zone becomes unavailable or is unrecoverable. Additionally, your data also remains durable during a complete regional outage or a disaster in which the primary region isn't recoverable. GZRS is designed to provide at least 99.99999999999999% (16 9s) durability of objects over a given year.
The following diagram shows how your data is replicated with GZRS or RA-GZRS: :::image type="content" source="media/storage-redundancy/geo-zone-redundant-storage.png" alt-text="Diagram showing how data is replicated with GZRS or RA-GZRS":::
-Only standard general-purpose v2 storage accounts support GZRS. GZRS is supported by all of the Azure Storage services, including:
+Only standard general-purpose v2 storage accounts support GZRS. All Azure Storage services support GZRS, including:
-- Azure Blob storage (hot and cool block blobs, non-disk page blobs)
+- Azure Blob storage (hot and cool block blobs, nondisk page blobs)
- Azure Files (all standard tiers: transaction optimized, hot, and cool) - Azure Table storage - Azure Queue storage
For a list of regions that support geo-zone-redundant storage (GZRS), see [Azure
## Read access to data in the secondary region
-Geo-redundant storage (with GRS or GZRS) replicates your data to another physical location in the secondary region to protect against regional outages. With an account configured for GRS or GZRS, data in the secondary region is not directly accessible to users or applications, unless a failover occurs. The failover process updates the DNS entry provided by Azure Storage so that the secondary endpoint becomes the new primary endpoint for your storage account. During the failover process, your data is inaccessible. After the failover is complete, you can read and write data to the new primary region. For more information, see [How customer-managed storage account failover works](storage-failover-customer-managed-unplanned.md).
+Geo-redundant storage (with GRS or GZRS) replicates your data to another physical location in the secondary region to protect against regional outages. With an account configured for GRS or GZRS, data in the secondary region isn't directly accessible to users or applications unless a failover occurs. The failover process updates the DNS entry provided by Azure Storage so that the storage service endpoints in the secondary region become the new primary endpoints for your storage account. During the failover process, your data is inaccessible. After the failover is complete, you can read and write data to the new primary region. For more information, see [How customer-managed storage account failover to recover from an outage works](storage-failover-customer-managed-unplanned.md).
If your applications require high availability, then you can configure your storage account for read access to the secondary region. When you enable read access to the secondary region, then your data is always available to be read from the secondary, including in a situation where the primary region becomes unavailable. Read-access geo-redundant storage (RA-GRS) or read-access geo-zone-redundant storage (RA-GZRS) configurations permit read access to the secondary region.
If your applications require high availability, then you can configure your stor
If your storage account is configured for read access to the secondary region, then you can design your applications to seamlessly shift to reading data from the secondary region if the primary region becomes unavailable for any reason.
-The secondary region is available for read access after you enable RA-GRS or RA-GZRS, so that you can test your application in advance to make sure that it will properly read from the secondary in the event of an outage. For more information about how to design your applications to take advantage of geo-redundancy, see [Use geo-redundancy to design highly available applications](geo-redundant-design.md).
+The secondary region is available for read access after you enable RA-GRS or RA-GZRS. This availability allows you to test your application in advance to ensure that it reads properly from the secondary region during an outage. For more information about how to design your applications to take advantage of geo-redundancy, see [Use geo-redundancy to design highly available applications](geo-redundant-design.md).
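+
+For example, to enable read access to the secondary on an existing GRS account, one approach (a sketch) is to move the account to the corresponding read-access SKU:
+
+```azurecli
+# Enable read access to the secondary region by switching to RA-GRS.
+# Use Standard_RAGZRS instead if the account is configured for GZRS.
+az storage account update \
+    --resource-group <resource-group-name> \
+    --name <storage-account-name> \
+    --sku Standard_RAGRS
+```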
-When read access to the secondary is enabled, your application can be read from the secondary endpoint as well as from the primary endpoint. The secondary endpoint appends the suffix *-secondary* to the account name. For example, if your primary endpoint for Blob storage is `myaccount.blob.core.windows.net`, then the secondary endpoint is `myaccount-secondary.blob.core.windows.net`. The account access keys for your storage account are the same for both the primary and secondary endpoints.
+When read access to the secondary is enabled, your application can read from both the secondary and primary endpoints. The secondary endpoint appends the suffix *-secondary* to the account name. For example, if your primary endpoint for Blob storage is `myaccount.blob.core.windows.net`, then the secondary endpoint is `myaccount-secondary.blob.core.windows.net`. The account access keys for your storage account are the same for both the primary and secondary endpoints.
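+
+Rather than constructing these URLs by hand, you can query the account's `secondaryEndpoints` property; a minimal sketch:
+
+```azurecli
+# List the secondary (read-access) endpoints for each storage service.
+az storage account show \
+    --resource-group <resource-group-name> \
+    --name <storage-account-name> \
+    --query secondaryEndpoints
+```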
#### Plan for data loss
-Because data is replicated asynchronously from the primary to the secondary region, the secondary region is typically behind the primary region in terms of write operations. If a disaster were to strike the primary region, it's likely that some data would be lost and that files within a directory or container would not be consistent. For more information about how to plan for potential data loss, see [Data loss and inconsistencies](storage-disaster-recovery-guidance.md#anticipate-data-loss-and-inconsistencies).
+Because data is replicated asynchronously from the primary to the secondary region, the secondary region is typically behind the primary region in terms of write operations. If a disaster strikes the primary region, it's likely that some data would be lost and that files within a directory or container wouldn't be consistent. For more information about how to plan for potential data loss, see [Data loss and inconsistencies](storage-disaster-recovery-guidance.md#anticipate-data-loss-and-inconsistencies).
## Summary of redundancy options
The following table describes key parameters for each redundancy option:
| Parameter | LRS | ZRS | GRS/RA-GRS | GZRS/RA-GZRS | |:-|:-|:-|:-|:-|
-| Percent durability of objects over a given year | at least 99.999999999% (11 9's) | at least 99.9999999999% (12 9's) | at least 99.99999999999999% (16 9's) | at least 99.99999999999999% (16 9's) |
+| Percent durability of objects over a given year | at least 99.999999999% (11 9s) | at least 99.9999999999% (12 9s) | at least 99.99999999999999% (16 9s) | at least 99.99999999999999% (16 9s) |
| Availability for read requests | At least 99.9% (99% for cool/cold/archive access tiers) | At least 99.9% (99% for cool/cold access tier) | At least 99.9% (99% for cool/cold/archive access tiers) for GRS<br/><br/>At least 99.99% (99.9% for cool/cold/archive access tiers) for RA-GRS | At least 99.9% (99% for cool/cold access tier) for GZRS<br/><br/>At least 99.99% (99.9% for cool/cold access tier) for RA-GZRS | | Availability for write requests | At least 99.9% (99% for cool/cold/archive access tiers) | At least 99.9% (99% for cool/cold access tier) | At least 99.9% (99% for cool/cold/archive access tiers) | At least 99.9% (99% for cool/cold access tier) | | Number of copies of data maintained on separate nodes | Three copies within a single region | Three copies across separate availability zones within a single region | Six copies total, including three in the primary region and three in the secondary region | Six copies total, including three across separate availability zones in the primary region and three locally redundant copies in the secondary region |
-For more information, see the [SLA for Storage Accounts](https://azure.microsoft.com/support/legal/sla/storage/v1_5/).
+For more information, see the [Service Level Agreement for Storage Accounts](https://azure.microsoft.com/support/legal/sla/storage/v1_5/).
### Durability and availability by outage scenario
The following table indicates whether your data is durable and available in a gi
| Outage scenario | LRS | ZRS | GRS/RA-GRS | GZRS/RA-GZRS | |:-|:-|:-|:-|:-| | A node within a data center becomes unavailable | Yes | Yes | Yes | Yes |
-| An entire data center (zonal or non-zonal) becomes unavailable | No | Yes | Yes<sup>1</sup> | Yes |
+| An entire data center (zonal or nonzonal) becomes unavailable | No | Yes | Yes<sup>1</sup> | Yes |
| A region-wide outage occurs in the primary region | No | No | Yes<sup>1</sup> | Yes<sup>1</sup> | | Read access to the secondary region is available if the primary region becomes unavailable | No | No | Yes (with RA-GRS) | Yes (with RA-GZRS) |
The following table indicates whether your data is durable and available in a gi
### Supported Azure Storage services
-The following table shows which redundancy options are supported by each Azure Storage service.
+The following table shows the redundancy options supported by each Azure Storage service.
| Service | LRS | ZRS | GRS | RA-GRS | GZRS | RA-GZRS | ||--|--|--|--|||
For pricing information for each redundancy option, see [Azure Storage pricing](
## Data integrity
-Azure Storage regularly verifies the integrity of data stored using cyclic redundancy checks (CRCs). If data corruption is detected, it's repaired using redundant data. Azure Storage also calculates checksums on all network traffic to detect corruption of data packets when storing or retrieving data.
+Azure Storage regularly verifies the integrity of stored data by using cyclic redundancy checks (CRCs). If data corruption is detected, it's repaired using redundant data. Azure Storage also calculates checksums on all network traffic to detect corruption of data packets when storing or retrieving data.
## See also
Azure Storage regularly verifies the integrity of data stored using cyclic redun
- [Azure Files](https://azure.microsoft.com/pricing/details/storage/files/) - [Table Storage](https://azure.microsoft.com/pricing/details/storage/tables/) - [Queue Storage](https://azure.microsoft.com/pricing/details/storage/queues/)
- - [Azure Disks](https://azure.microsoft.com/pricing/details/managed-disks/)
+ - [Azure Disks](https://azure.microsoft.com/pricing/details/managed-disks/)
storage File Sync Storsimple Cost Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-storsimple-cost-comparison.md
- Title: Comparing the costs of StorSimple to Azure File Sync
-description: Learn how you can save money and modernize your storage infrastructure by migrating from StorSimple to Azure File Sync.
--- Previously updated : 01/12/2023---
-# Comparing the costs of StorSimple to Azure File Sync
-StorSimple is a discontinued physical and virtual appliance product offered by Microsoft to help customers manage their on-premises storage footprint by tiering data to Azure.
-
-> [!NOTE]
-> The StorSimple Service (including the StorSimple Device Manager for 8000 and 1200 series and StorSimple Data Manager) has reached the end of support. The end of support for StorSimple was published in 2019 on the [Microsoft LifeCycle Policy](/lifecycle/products/?terms=storsimple) and [Azure Communications](https://azure.microsoft.com/updates/storsimpleeol/) pages. Additional notifications were sent via email and posted on the Azure portal and in the [StorSimple overview](../../storsimple/storsimple-overview.md). Contact [Microsoft Support](https://azure.microsoft.com/support/create-ticket/) for additional details.
-
-For most use cases of StorSimple, Azure File Sync is the recommended migration target for file shares being used with StorSimple. Azure File Sync supports similar capabilities to StorSimple, such as the ability to tier to the cloud. However, it provides additional features that StorSimple does not have, such as:
--- Storing data in a native file format accessible to administrators and users (Azure file shares) instead of a proprietary format only accessible through the StorSimple device-- Multi-site sync-- Integration with Azure services such as Azure Backup and Microsoft Defender for Storage-
-To learn more about Azure File Sync, see [Introduction to Azure File Sync](file-sync-introduction.md). To learn how to seamlessly migrate to Azure File Sync from StorSimple, see [StorSimple 8100 and 8600 migration to Azure File Sync](../files/storage-files-migration-storsimple-8000.md) or [StorSimple 1200 migration to Azure File Sync](../files/storage-files-migration-storsimple-1200.md).
-
-Although Azure File Sync supports additional functionality not supported by StorSimple, administrators familiar with StorSimple may be concerned about how much Azure File Sync will cost relative to their current solution. This document covers how to compare the costs of StorSimple to Azure File Sync to correctly determine the costs of each. Although the cost situation may vary by customer depending on the customer's usage and configuration of StorSimple, most customers will pay the same or less with Azure File Sync than they currently pay with StorSimple.
-
-## Cost comparison principles
-To ensure a fair comparison of StorSimple to Azure File Sync and other services, you must consider the following principles:
--- **All costs of the solutions are accounted for.** Both StorSimple and Azure File Sync have multiple cost components. To do a fair comparison, all cost components must be considered.--- **Cost comparison doesn't include the cost of features StorSimple doesn't support.** Azure File Sync supports multiple features that StorSimple does not. Some of the features of Azure File Sync, like multi-site sync, might increase the total cost of ownership of an Azure File Sync solution. It is reasonable to take advantage of new features as part of a migration; however, this should be viewed as an upgrade benefit of moving to Azure File Sync. Therefore, you should compare the costs of StorSimple and Azure File Sync *before* considering adopting new capabilities of Azure File Sync that StorSimple doesn't have.--- **Cost comparison considers as-is configuration of StorSimple.** StorSimple supports multiple configurations that might increase or decrease the price of a StorSimple solution. To perform a fair cost comparison to Azure File Sync, you should consider only your current configuration of StorSimple. For example:
- - **Use the same redundancy settings when comparing StorSimple and Azure File Sync.** If your StorSimple solution uses locally redundant storage (LRS) for its storage usage in Azure Blob storage, you should compare it to the cost of locally redundant storage in Azure Files, even if you would like to switch to zonally redundant (ZRS) or geo-redundant (GRS) storage when you adopt Azure File Sync.
-
- - **Use the Azure Blob storage pricing you are currently using.** Azure Blob storage supports a v1 and a v2 pricing model. Most StorSimple customers would save money if they adopted the v2 pricing; however, most StorSimple customers are currently using the v1 pricing. Because StorSimple is going away, to perform a fair comparison, use the pricing for the pricing model you are currently using.
-
-## StorSimple pricing components
-StorSimple has the following pricing components that you should consider in the cost comparison analysis:
--- **Capital and operational costs of servers fronting/running StorSimple.** Capital costs relate to the upfront cost of the physical, on-premises hardware, while operating costs relate to ongoing costs you must bear to run your solution, such as labor, maintenance, and power costs. Capital costs vary slightly depending on whether you have a StorSimple 8000 series appliance or a StorSimple 1200 series appliance:
- - **StorSimple 8000 series.** StorSimple 8000 series appliances are physical appliances that provide an iSCSI target that must be fronted by a file server. Although you may have purchased and configured this file server a long time ago, you should consider the capital and operational costs of running this server, in addition to the operating costs of running the StorSimple appliance. If your file server is hosted as a virtual machine (VM) on an on-premises hypervisor that hosts other workloads, to capture the opportunity cost of running the file server instead of other workloads, you should consider the file server VM as a fractional cost of the capital expenditure and operating costs for the host, in addition to the operating costs of the file server VM. Finally, you should include the cost of any StorSimple 8000 series virtual appliances and other VMs you might have deployed in Azure.
-
- - **StorSimple 1200 series.** StorSimple 1200 series appliances are virtual appliances that you can run on-premises in the hypervisor of your choice. StorSimple 1200 series appliances can be an iSCSI target for a file server or can directly be a file server without the need for an additional server. If you have the StorSimple 1200 series appliance configured as an iSCSI target, you should include both the cost of hosting the virtual appliance and the cost of the file server fronting it. Although your StorSimple 1200 series appliance may be hosted on a hypervisor that hosts other workloads, to capture the opportunity cost of running the StorSimple 1200 series appliance instead of other workloads, you should consider the virtual appliance as a fractional cost of the capital expenditure of the host, in addition to the operating costs of the virtual appliance.
--- **StorSimple service costs.** The StorSimple management service in Azure is a major component of most customers' Azure bill for StorSimple. There are two billing models for the StorSimple management service. Which one you are using likely depends on how and when you purchased your StorSimple appliance (consult your bill for more detail):
- - **StorSimple management fee per GiB of storage.** The StorSimple management fee per GiB of storage is the older billing model, and the one that most customers are using. In this model, you are charged for every logical GiB stored in StorSimple. You can see the price of management fee per GiB of storage on [the StorSimple pricing page](https://azure.microsoft.com/pricing/details/storsimple/), beneath the first table in the text (described as the "old pricing model"). It is important to note that the pricing page commentary is incorrect - customers were not transitioned to the per device billing model in December 2021.
-
- - **StorSimple management fee per device.** The StorSimple management fee per device is the newer model, but fewer customers are using it. In this model, you are charged a daily fee for each day you have your device active. The fee expense depends on whether you have a physical or virtual appliance, and which specific appliance you have. You can see the price of management fee per device on [the StorSimple pricing page](https://azure.microsoft.com/pricing/details/storsimple/) (first table).
--- **Azure Blob storage costs.** StorSimple stores all of the data in its proprietary format in Azure Blob storage. When considering your Azure Blob storage costs, you should consider the storage utilization, which may be less or equal to the logical size of your data due to deduplication and compression done as part of StorSimple's proprietary data format, and also the transaction on storage, which is done whenever files are changed or ranges are recalled to on-premises from the device. Depending on when you deployed your StorSimple appliance, you may be subject to one of two blob storage pricing models:
- - **Blob storage pricing v1, available in general purpose version 1 storage accounts.** Based on the age of most StorSimple deployments, most StorSimple customers are using the v1 Azure Blob storage pricing. This pricing has higher per GiB prices and lower transaction prices than the v2 model, and lacks the storage tiers that the Blob storage v2 pricing has. To see the Blob storage v1 prices, visit the [Azure Blob storage pricing page](https://azure.microsoft.com/pricing/details/storage/blobs/) and select the *Other* tab.
-
- - **Blob storage pricing v2, available in general purpose version 2 storage accounts.** Blob storage v2 has lower GiB prices and higher transaction prices than the v1 model. Although some StorSimple customers could save money by switching to the v2 pricing, most StorSimple customers are currently using the v1 pricing. Since StorSimple is reaching end of life, you should stay with the pricing model that you are currently using, rather than pricing out the cost comparison with the v2 pricing. To see the Blob storage v2 prices, visit the [Azure Blob storage pricing page](https://azure.microsoft.com/pricing/details/storage/blobs/) and select the **Recommended** tab (the default when you load the page).
-
-## Azure File Sync pricing components
-Azure File Sync has the following pricing components you should consider in the cost comparison analysis:
--
-### Translating quantities from StorSimple
-If you are trying to estimate the costs of Azure File Sync based on the expenses you see in StorSimple, be careful with the following items:
--- **Azure Files bills on logical size (standard file shares).** Unlike StorSimple, which encodes your data in the StorSimple proprietary format before storing it to Azure Blob storage, Azure Files stores the data from Azure File Sync in the same form as you see it on your Windows File Server. This means that if you are trying to figure out how much storage you will consume in Azure Files, you should look at the logical size of the data from StorSimple, rather than the amount stored in Azure Blob storage. Although this may look like it will cause you to pay more when using Azure File Sync, you need to do the complete analysis including all aspects of StorSimple costs to see the true comparison. Additionally, Azure Files offers reservations that enable you to buy storage at an up-to 36% discount over the list price. See [Reservations in Azure Files](../files/understanding-billing.md#reservations).--- **Don't assume a 1:1 ratio between transactions on StorSimple and transactions in Azure File Sync.** It might be tempting to look at the number of transactions done by StorSimple in Azure Blob storage and assume that number will be similar to the number of transactions that Azure File Sync will do on Azure Files. This number may overstate or understate the number of transactions Azure File Sync will do, so it's not a good way to estimate transaction costs. The best way to estimate transaction costs is to do a small proof-of-concept in Azure File Sync with a live file share similar to the file shares stored in StorSimple.-
-## See also
-- [Azure Files pricing page](https://azure.microsoft.com/pricing/details/storage/files/)-- [Planning for an Azure File Sync deployment](../file-sync/file-sync-planning.md)-- [Create a file share](../files/storage-how-to-create-file-share.md?toc=/azure/storage/file-sync/toc.json) and [Deploy Azure File Sync](../file-sync/file-sync-deployment-guide.md)
storage Files Data Protection Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-data-protection-overview.md
description: Learn how to protect your data in Azure Files. Understand the conce
Previously updated : 07/26/2023 Last updated : 08/04/2024
Azure Files offers multiple redundancy options, including geo-redundancy, to hel
> Azure Files only supports geo-redundancy (GRS or GZRS) for standard SMB file shares. Premium file shares and NFS file shares must use locally redundant storage (LRS) or zone redundant storage (ZRS). ## Disaster recovery and failover
-In the case of a disaster or unplanned outage, restoring access to file share data is usually critical to keeping the business operational. Depending on the criticality of the data hosted in your file shares, you might need a disaster recovery strategy that includes failing your Azure file shares over to a secondary region.
+In the case of a disaster or unplanned outage, restoring access to file share data is critical to keeping the business operational. Depending on the criticality of the data hosted in your file shares, you might need a disaster recovery strategy that includes failing your Azure file shares over to a secondary region.
-Azure Files offers customer-managed failover for standard storage accounts if the data center in the primary region becomes unavailable. See [Disaster recovery and failover for Azure Files](files-disaster-recovery.md).
+Azure Files offers customer-managed unplanned failover for standard storage accounts if the data center in the primary region becomes unavailable. You can also use customer-managed planned failover in multiple scenarios, including planned disaster recovery testing, preparing proactively for a large-scale disaster, or recovering from an outage unrelated to storage.
+See [Disaster recovery and failover for Azure Files](files-disaster-recovery.md).
## Prevent accidental deletion of storage accounts and file shares
storage Files Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-disaster-recovery.md
description: Learn how to recover your data in Azure Files. Understand the conce
Previously updated : 04/15/2024 Last updated : 08/05/2024
Microsoft strives to ensure that Azure services are always available. However, u
> [!IMPORTANT] > Azure File Sync only supports storage account failover if the Storage Sync Service is also failed over. This is because Azure File Sync requires the storage account and Storage Sync Service to be in the same Azure region. If only the storage account is failed over, sync and cloud tiering operations will fail until the Storage Sync Service is failed over to the secondary region. If you want to fail over a storage account containing Azure file shares that are being used as cloud endpoints in Azure File Sync, see [Azure File Sync disaster recovery best practices](../file-sync/file-sync-disaster-recovery-best-practices.md) and [Azure File Sync server recovery](../file-sync/file-sync-server-recovery.md).
+## Customer-managed planned failover (preview)
+
+You can use customer-managed planned failover in multiple scenarios, including planned disaster recovery testing, preparing proactively for a large-scale disaster, or recovering from an outage unrelated to storage.
+
+During the planned failover process, the primary and secondary regions are swapped. The original primary region is demoted and becomes the new secondary region, while the original secondary region is promoted and becomes the new primary region. After the failover completes, users can access data in the new primary region, and administrators can validate their disaster recovery plan. The storage account must be available in both the primary and secondary regions before a planned failover can be initiated.
+
+Data loss isn't expected during the planned failover and failback process as long as the primary and secondary regions are available throughout the entire process. For more detail, see the [Anticipating data loss and inconsistencies](../common/storage-disaster-recovery-guidance.md?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json&bc=%2Fazure%2Fstorage%2Fblobs%2Fbreadcrumb%2Ftoc.json#anticipate-data-loss-and-inconsistencies) section.
+
+To understand the effect of this type of failover on your users and applications, it's helpful to know what happens during every step of the planned failover and failback processes. For details about how this process works, see [How customer-managed (planned) failover works](../common/storage-failover-customer-managed-planned.md).
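+
+If you script a planned failover, recent Azure CLI versions accept a failover type on the same command that's used for unplanned failover. The following is a sketch; parameter names and accepted values can vary by CLI version:
+
+```azurecli
+# Initiate a planned failover; the primary and secondary regions are swapped.
+az storage account failover \
+    --resource-group <resource-group-name> \
+    --name <storage-account-name> \
+    --failover-type planned
+```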
## Recovery metrics and costs
To formulate an effective DR strategy, an organization must understand:
Write access is restored for geo-redundant accounts once the DNS entry has been
> [!IMPORTANT] > After the failover is complete, the storage account is configured to be locally redundant in the new primary endpoint/region. To resume replication to the new secondary, configure the account for geo-redundancy again. >
-> Keep in mind that converting a locally redundant storage account to use geo-redundancy incurs both cost and time. For more information, see [Important implications of account failover](../common/storage-initiate-account-failover.md#important-implications-of-account-failover).
+> Keep in mind that converting a locally redundant storage account to use geo-redundancy incurs both cost and time. For more information, see [The time and cost of failing over](../common/storage-disaster-recovery-guidance.md#the-time-and-cost-of-failing-over).
### Anticipate data loss
storage Storage Files Migration Storsimple 1200 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-storsimple-1200.md
- Title: StorSimple 1200 migration to Azure File Sync
-description: Learn how to migrate a StorSimple 1200 series virtual appliance to Azure File Sync.
--- Previously updated : 01/12/2023---
-# StorSimple 1200 migration to Azure File Sync
-
-StorSimple 1200 series is a virtual appliance that runs in an on-premises data center. It's possible to migrate the data from this appliance to an Azure File Sync environment. Azure File Sync is the default and strategic long-term Azure service that StorSimple appliances can be migrated to. This article provides the background knowledge and migrations steps for a successful migration to Azure File Sync.
-
-> [!NOTE]
-> The StorSimple Service (including the StorSimple Device Manager for 8000 and 1200 series and StorSimple Data Manager) has reached the end of support. The end of support for StorSimple was published in 2019 on the [Microsoft LifeCycle Policy](/lifecycle/products/?terms=storsimple) and [Azure Communications](https://azure.microsoft.com/updates/storsimpleeol/) pages. Additional notifications were sent via email and posted on the Azure portal and in the [StorSimple overview](../../storsimple/storsimple-overview.md). Contact [Microsoft Support](https://azure.microsoft.com/support/create-ticket/) for additional details.
-
-## Applies to
-| File share type | SMB | NFS |
-|-|:-:|:-:|
-| Standard file shares (GPv2), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
-| Standard file shares (GPv2), GRS/GZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
-| Premium file shares (FileStorage), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
-
-## Azure File Sync
-
-Azure File Sync is a Microsoft cloud service, based on two main components:
-
-* File synchronization and cloud tiering.
-* File shares as native storage in Azure that can be accessed over multiple protocols like SMB and FileREST. An Azure file share is comparable to a file share on a Windows Server that you can natively mount as a network drive. It supports important file fidelity aspects like attributes, permissions, and timestamps. Unlike with StorSimple, no application/service is required to interpret the files and folders stored in the cloud. The ideal and most flexible approach is to store general purpose file server data and some application data in the cloud.
-
-This article focuses on the migration steps. If you'd like to learn more about Azure File Sync, we recommend the following articles:
-
-* [Azure File Sync - overview](../file-sync/file-sync-planning.md "Overview")
-* [Azure File Sync - deployment guide](../file-sync/file-sync-deployment-guide.md)
-
-## Migration goals
-
-The goal is to guarantee the integrity of the production data and guaranteeing availability. The latter requires keeping downtime to a minimum so that it can fit into or only slightly exceed regular maintenance windows.
-
-## StorSimple 1200 migration path to Azure File Sync
-
-A local Windows Server is required to run an Azure File Sync agent. The Windows Server can be at a minimum a 2012R2 server but ideally is a Windows Server 2019.
-
-There are numerous, alternative migration paths, and it would create too long of an article to document all of them and illustrate why they bear risk or disadvantages over the route we recommend as a best practice in this article.
--
-The previous image depicts steps that correspond to sections in this article.
-
-### Step 1: Provision your on-premises Windows Server and storage
-
-1. Create a Windows Server 2019 - at a minimum 2012R2 - as a virtual machine (VM) or physical server. A Windows Server failover cluster is also supported.
-2. Provision or add Direct Attached Storage (DAS as compared to NAS, which isn't supported). The size of the Windows Server storage must be equal to or larger than the size of the available capacity of your virtual StorSimple 1200 appliance.
-
-### Step 2: Configure your Windows Server storage
-
-In this step, you map your StorSimple storage structure (volumes and shares) to your Windows Server storage structure.
-If you plan to make changes to your storage structure, meaning the number of volumes, the association of data folders to volumes, or the subfolder structure above or below your current SMB/NFS shares, then now is the time to take these changes into consideration.
-Changing your file and folder structure after Azure File Sync is configured is cumbersome and should be avoided.
-This article assumes you're mapping 1:1, so you must take your mapping changes into consideration when you follow the steps in this article.
-
-* None of your production data should end up on the Windows Server system volume. Cloud tiering isn't supported on system volumes. However, this feature is required for the migration as well as continuous operations as a StorSimple replacement.
-* Provision the same number of volumes on your Windows Server as you have on your StorSimple 1200 virtual appliance.
-* Configure any Windows Server roles, features, and settings you need. We recommend you opt into Windows Server updates to keep your operating system safe and up to date. Similarly, we recommend opting into Microsoft Update to keep Microsoft applications up to date, including the Azure File Sync agent.
-* Don't configure any folders or shares before reading the following steps.
-
-### Step 3: Deploy the first Azure File Sync cloud resource
--
-### Step 4: Match your local volume and folder structure to Azure File Sync and Azure file share resources
--
-### Step 5: Provision Azure file shares
--
-#### Storage account settings
-
-There are many configurations you can make on a storage account. The following checklist should be used for your storage account configurations. For example, you can change the networking configuration after your migration is complete.
-
-> [!div class="checklist"]
-> * Firewall and virtual networks: Disabled - don't configure any IP restrictions or limit storage account access to a specific virtual network. The public endpoint of the storage account is used during the migration. All IP addresses from Azure VMs must be allowed. It's best to configure any firewall rules on the storage account after the migration.
-> * Private Endpoints: Supported - You can enable private endpoints, but the public endpoint is used for the migration and must remain available.
-
-### Step 6: Configure Windows Server target folders
-
-In previous steps, you considered all aspects that will determine the components of your sync topologies. Now it's time to prepare the server to receive files for upload.
-
-Create **all** folders that will sync each to their own Azure file share.
-It's important that you follow the folder structure you've documented earlier. If for example you decided to sync multiple, local SMB shares together into a single Azure file share, then you must place them under a common root folder on the volume. Create this target root folder on the volume now.
-
-The number of Azure file shares you provision should match the number of folders you've created in this step plus the number of volumes you want to sync at the root level.
-
-### Step 7: Deploy the Azure File Sync agent
--
-### Step 8: Configure sync
--
-> [!WARNING]
-> **Be sure to turn on cloud tiering!** This is required if your local server doesn't have enough space to store the total size of your data in the StorSimple cloud storage. Set your tiering policy temporarily to 99% volume free space, and change it back to a more reasonable level after the migration is complete.
-
-Repeat the steps of sync group creation and addition of the matching server folder as a server endpoint for all Azure file shares/server locations that must be configured for sync.
-
-### Step 9: Copy your files
-
-The basic migration approach is a RoboCopy from your StorSimple virtual appliance to your Windows Server, combined with Azure File Sync to move the files up into Azure file shares.
-
-Run the first local copy to your Windows Server target folder:
-
-* Identify the first location on your virtual StorSimple appliance.
-* Identify the matching folder on the Windows Server that already has Azure File Sync configured on it.
-* Start the copy using RoboCopy.
-
-The following RoboCopy command recalls files from your StorSimple Azure storage to your local StorSimple appliance and then moves them over to the Windows Server target folder. The Windows Server syncs them to the Azure file share(s). As the local Windows Server volume fills up, cloud tiering kicks in and tiers files that have already synced successfully, freeing enough space to continue the copy from the StorSimple virtual appliance. Cloud tiering checks once an hour to see what has synced and to free up disk space to reach the 99% volume free space target.
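-
-A minimal sketch of a RoboCopy invocation that fits this description might look like the following; the source share, target folder, and log path are placeholders, and the flags are common RoboCopy options rather than a verbatim copy of this guide's command:
-
-``` powershell
-# Mirror the StorSimple share into the local Azure File Sync server folder.
-# /MIR mirrors the namespace, /COPYALL preserves data, attributes, timestamps,
-# ACLs, owner, and auditing info, /DCOPY:DAT preserves folder timestamps,
-# /MT:16 uses 16 copy threads, /R:2 /W:1 limits retries on locked files,
-# /B uses backup privilege, /NP keeps per-file progress out of the log.
-robocopy '\\StorSimple1200\HRShare' 'D:\HR' /MIR /COPYALL /DCOPY:DAT /MT:16 /R:2 /W:1 /B /NP /UNILOG:C:\Logs\HR-copy.log
-```
-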
-When you run the RoboCopy command for the first time, your users and applications are still accessing the StorSimple files and folders and can potentially make changes. It's possible that RoboCopy has processed a directory, moved on to the next, and then a user on the source location (StorSimple) adds, changes, or deletes a file that now won't be processed in this current RoboCopy run. That's fine.
-
-The first run is about moving the bulk of the data back to on-premises, over to your Windows Server, and back up into the cloud via Azure File Sync. This can take a long time, depending on:
-
-* your download bandwidth
-* the recall speed of the StorSimple cloud service
-* the upload bandwidth
-* the number of items (files and folders) that must be processed by either service
-
-Once the initial run is complete, run the command again.
-
-The second run finishes faster because it only needs to transport changes that happened since the last run. Those changes are likely still local to the StorSimple appliance because they're recent, which reduces the need for recall from the cloud. Still, new changes can accumulate during this second run.
-
-Repeat this process until you're satisfied that the amount of time it takes to complete is an acceptable amount of downtime.
-
-When you've settled on an acceptable downtime window and you're prepared to take the StorSimple location offline, do so now. For example, remove the SMB share so that no user can access the folder, or take any other appropriate step that prevents content from changing in this folder on StorSimple.
-
-Run one last RoboCopy round. This will pick up any changes that might have been missed.
-How long this final step takes depends on the speed of the RoboCopy scan. You can estimate the time (which is equal to your downtime) by measuring how long the previous run took.
-
-Create a share on the Windows Server folder and possibly adjust your DFS-N deployment to point to it. Be sure to set the same share-level permissions as on your StorSimple SMB share.
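-
-For example, a minimal sketch with the built-in `SmbShare` module; the share, path, and group names are placeholders:
-
-``` powershell
-# Create the SMB share and set share-level permissions to match
-# the permissions on the old StorSimple SMB share.
-New-SmbShare -Name 'HR' -Path 'D:\HR' -FullAccess 'CONTOSO\HR-Admins' -ChangeAccess 'CONTOSO\HR-Users'
-```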
-
-You've now finished migrating a share or group of shares into a common root or volume, depending on what you mapped previously.
-
-You can try to run a few of these copies in parallel. We recommend processing the scope of one Azure file share at a time.
-
-> [!WARNING]
-> Once you've moved all the data from your StorSimple to the Windows Server and your migration is complete: Return to ***all*** sync groups in the Azure portal and adjust the cloud tiering volume free space percent value to something better suited for cache utilization, for example 20%.
-
-The cloud tiering volume free space policy acts on a volume level, with potentially multiple server endpoints syncing from it. If you forget to adjust the free space on even one server endpoint, sync will continue to apply the most restrictive rule and attempt to keep 99% free disk space, and the local cache won't perform as you might expect. Unless your goal is to keep only the namespace of a volume that holds rarely accessed, archival data, adjust the free space policy on every server endpoint.
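-
-If you script this adjustment, a sketch with the Az.StorageSync module might look like the following (resource names are placeholders); repeat it for every server endpoint on the volume:
-
-``` powershell
-# Relax the cloud tiering volume free space policy after the migration.
-Set-AzStorageSyncServerEndpoint -ResourceGroupName 'rg-files-migration' `
-    -StorageSyncServiceName 'mysyncservice' -SyncGroupName 'hr-sync' `
-    -Name 'hr-endpoint' -CloudTiering -VolumeFreeSpacePercent 20
-```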
-
-## Troubleshoot
-
-The most likely issue you can run into is that the RoboCopy command fails with *"Volume full"* on the Windows Server side. If that happens, your download speed is likely outpacing your upload speed. Cloud tiering runs once an hour to evacuate content that has already synced from the local Windows Server disk.
-
-Let sync progress and cloud tiering free up disk space. You can observe that in File Explorer on your Windows Server.
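-
-You can also check free space from PowerShell; a quick sketch, with the drive letter as a placeholder:
-
-``` powershell
-# Watch free space on the data volume while cloud tiering catches up.
-Get-Volume -DriveLetter D | Select-Object DriveLetter, SizeRemaining, Size
-```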
-
-When your Windows Server has sufficient available capacity, rerunning the command will resolve the problem. Nothing breaks when you get into this situation, and you can move forward with confidence. The inconvenience of running the command again is the only consequence.
-
-You might also run into other Azure File Sync issues. If that happens, see [Azure File Sync troubleshooting guide](/troubleshoot/azure/azure-storage/file-sync-troubleshoot?toc=/azure/storage/file-sync/toc.json).
----
-> [!NOTE]
-> Still have questions or encountered any issues?</br>
-> We're here to help: :::image type="content" source="media/storage-files-migration-storsimple-8000/storage-files-migration-storsimple-8000-migration-email.png" alt-text="Email address in one word: Azure Files migration at microsoft dot com":::
-
-## Relevant links
-
-Migration content:
-
-* [StorSimple 8000 series migration guide](storage-files-migration-storsimple-8000.md)
-
-Azure File Sync content:
-
-* [Azure File Sync overview](../file-sync/file-sync-planning.md)
-* [Deploy Azure File Sync](../file-sync/file-sync-deployment-guide.md)
-* [Azure File Sync troubleshooting guide](/troubleshoot/azure/azure-storage/file-sync-troubleshoot?toc=/azure/storage/file-sync/toc.json)
storage Storage Files Migration Storsimple 8000 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-storsimple-8000.md
- Title: StorSimple 8000 series migration to Azure File Sync
-description: Learn how to migrate a StorSimple 8100 or 8600 appliance to Azure File Sync.
- Previously updated: 01/12/2023
-# StorSimple 8100 and 8600 migration to Azure File Sync
-
-The StorSimple 8000 series includes either the 8100 or the 8600 physical, on-premises appliances and their cloud service components. StorSimple 8010 and 8020 virtual appliances are also covered in this migration guide. It's possible to migrate the data from either of these appliances to Azure file shares with optional Azure File Sync. Azure File Sync is the default and strategic long-term Azure service that replaces the StorSimple on-premises functionality. This article provides the necessary background knowledge and migration steps for a successful migration to Azure File Sync.
-
-> [!NOTE]
-> The StorSimple Service (including the StorSimple Device Manager for 8000 and 1200 series and StorSimple Data Manager) has reached the end of support. The end of support for StorSimple was published in 2019 on the [Microsoft LifeCycle Policy](/lifecycle/products/?terms=storsimple) and [Azure Communications](https://azure.microsoft.com/updates/storsimpleeol/) pages. Additional notifications were sent via email and posted on the Azure portal and in the [StorSimple overview](../../storsimple/storsimple-overview.md). Contact [Microsoft Support](https://azure.microsoft.com/support/create-ticket/) for additional details.
-
- :::column:::
- [![Migration overview - click to play!](media/storage-files-migration-storsimple-8000/video-0.png)](https://www.youtube.com/watch?v=tHwuhCi4SjE&list=PLEq-KSMM-P-0tAnJ-9bslX-nrwWfL5hpi)
- <br />
- :::column-end:::
- :::column:::
- This video provides an overview of:
- - Azure Files
- - Azure File Sync
- - Comparison of StorSimple & Azure Files
- - StorSimple Data Manager migration tool and process overview
- :::column-end:::
-
-## Phase 1: Prepare for migration
-
-This section contains the steps you should take at the beginning of your migration from StorSimple volumes to Azure file shares.
-
- :::column:::
- [![Prepare your migration - click to play!](media/storage-files-migration-storsimple-8000/video-1.png)](https://youtu.be/jpNhJrNp7w8?list=PLEq-KSMM-P-0tAnJ-9bslX-nrwWfL5hpi)
- <br />
- :::column-end:::
- :::column:::
- This video covers:
- - Selecting storage tier
- - Selecting storage redundancy options
- - Selecting direct-share-access vs. Azure File Sync
- - StorSimple Service Data Encryption Key and Serial Number
- - StorSimple Volume Backup migration
- - Mapping StorSimple volumes and shares to Azure file shares
- - Grouping shares inside Azure file shares
- - Mapping considerations
- - Migration planning worksheet
- - Namespace mapping spreadsheet
- :::column-end:::
-
-### Inventory
-
-When you begin planning your migration, first identify all the StorSimple appliances and volumes you need to migrate. Afterwards, you can decide on the best migration path.
-
-* StorSimple physical appliances (8000 series): use this migration guide.
-* StorSimple virtual appliances (1200 series): use a [different migration guide](storage-files-migration-storsimple-1200.md).
-
-### Migration cost summary
-
-Migrations to Azure file shares from StorSimple volumes via migration jobs in a StorSimple Data Manager resource are free of charge. Other costs might be incurred during and after a migration:
-
-* **Network egress:** Your StorSimple files live in a storage account within a specific Azure region. If you provision the Azure file shares you migrate into a storage account in the same Azure region, no egress costs occur. However, if you move your files to a storage account in a different region as part of this migration, egress costs will apply.
-* **Azure file share transactions:** When files are copied into an Azure file share (as part of a migration or outside of one), transaction costs apply as files and metadata are being written. As a best practice, start your Azure file share on the transaction optimized tier during the migration. Switch to your desired tier after the migration is finished. The phases described in this article call this out at the appropriate point.
-* **Change an Azure file share tier:** Changing the tier of an Azure file share incurs transaction costs. In most cases, it's more cost efficient to follow the advice from the previous point.
-* **Storage cost:** When this migration starts copying files into an Azure file share, storage is consumed and billed. Migrated backups become [Azure file share snapshots](storage-snapshots-files.md). File share snapshots only consume storage capacity for the differences they contain.
-* **StorSimple:** Until you deprovision the StorSimple devices and storage accounts, StorSimple costs for storage, backups, and appliances continue to accrue.
-
-### Direct-share-access vs. Azure File Sync
-
-Azure file shares open up a new world of opportunities for structuring your file services deployment. An Azure file share is an SMB share in the cloud that you can set up to have users access directly over the SMB protocol with the familiar Kerberos authentication and existing NTFS permissions (file and folder ACLs) working natively. Learn more about [identity-based access to Azure file shares](storage-files-active-directory-overview.md).
-
-An alternative to direct access is [Azure File Sync](../file-sync/file-sync-planning.md). Azure File Sync is a direct analog for StorSimple's ability to cache frequently used files on-premises.
-
-Azure File Sync is a Microsoft cloud service, based on two main components:
-
-* File synchronization and cloud tiering to create a performance access cache on any Windows Server.
-* File shares as native storage in Azure that can be accessed over multiple protocols like SMB and file REST.
-
-Azure file shares retain important file fidelity aspects like attributes, permissions, and timestamps. With Azure file shares, there's no longer a need for an application or service to interpret the files and folders stored in the cloud. You can access them natively over familiar protocols and clients. Azure file shares allow you to store general-purpose file server data and application data in the cloud.
-
-This article focuses on the migration steps. If you want to learn more about Azure File Sync before migrating, see the following articles:
-
-* [Azure File Sync planning guide](../file-sync/file-sync-planning.md)
-* [Azure File Sync deployment guide](../file-sync/file-sync-deployment-guide.md)
-
-### StorSimple service data encryption key
-
-When you first set up your StorSimple appliance, it generated a service data encryption key and instructed you to securely store the key. This key is used to encrypt all data in the associated Azure storage account where the StorSimple appliance stores your files.
-
-The service data encryption key is necessary for a successful migration. Retrieve this key from your records, one for each of the appliances in your inventory.
-
-If you can't find the keys in your records, you can generate a new key from the appliance. Each appliance has a unique encryption key.
-
-#### Change the service data encryption key
--
-> [!CAUTION]
-> When you're deciding how to connect to your StorSimple appliance, consider the following:
->
-> * Connecting through an HTTPS session is the most secure and recommended option.
-> * Connecting directly to the device serial console is secure, but connecting to the serial console over network switches isn't.
-> * HTTP session connections are an option but are *not encrypted*. They're not recommended unless they're used within a closed, trusted network.
-
-### Known limitations
-
-The StorSimple Data Manager and Azure file shares have a few limitations you should consider before you begin, as they can prevent a migration:
-
-* Only NTFS volumes from your StorSimple appliance are supported. ReFS volumes aren't supported.
-* Any volume placed on [Windows Server Dynamic Disks](/troubleshoot/windows-server/backup-and-storage/best-practices-using-dynamic-disks) isn't supported.
-* The service doesn't work with volumes that are BitLocker encrypted or have [Data Deduplication](/windows-server/storage/data-deduplication/understand) enabled.
-* Corrupted StorSimple backups can't be migrated.
-* Special networking options, such as firewalls or private endpoint-only communication, can't be enabled on either the source storage account where StorSimple backups are stored or the target storage account that holds your Azure file shares.
-
-### File fidelity
-
-If none of the limitations in [Known limitations](#known-limitations) prevent a migration, there are still limitations on what can be stored in Azure file shares.
-
-File fidelity refers to the multitude of attributes, timestamps, and data that compose a file. In a migration, file fidelity is a measure of how well the information on the source (StorSimple volume) can be translated (migrated) to the target Azure file share.
-
-[Azure Files supports a subset](/rest/api/storageservices/set-file-properties) of the [NTFS file properties](/windows/win32/fileio/file-attribute-constants). Windows ACLs, common metadata, and some timestamps are migrated.
-
-The following items won't prevent a migration but will cause per-item issues during a migration:
-
-* Timestamps: File change time won't be set. It's currently read-only over the REST protocol. Last access timestamp on a file won't be moved, as it isn't a supported attribute on files stored in an Azure file share.
-* [Alternate Data Streams](/openspecs/windows_protocols/ms-fscc/b134f29a-6278-4f3f-904f-5e58a713d2c5) can't be stored in Azure file shares. Files holding Alternate Data Streams are copied, but the Alternate Data Streams are stripped from the file in the process.
-* Symbolic links, hard links, junctions, and reparse points are skipped during a migration. The migration copy logs list each skipped item and a reason.
-* EFS encrypted files fail to copy. Copy logs show the item failed to copy with "Access is denied".
-* Corrupt files are skipped. The copy logs might list different errors for each item that is corrupt on the StorSimple disk: "The request failed due to a fatal device hardware error" or "The file or directory is corrupted or unreadable" or "The access control list (ACL) structure is invalid".
-* Individual files larger than 4 TiB are skipped.
-* File path lengths must be equal to or fewer than 2048 characters. Files and folders with longer paths are skipped.
-* Reparse points are skipped. Any Microsoft Data Deduplication / SIS reparse points or those of third parties can't be resolved by the migration engine and will prevent a migration of the affected files and folders.
-
-The [troubleshooting section](#troubleshooting) at the end of this article has more details on item-level and migration-job-level error codes and, where possible, their mitigation options.
-
-### StorSimple volume backups
-
-StorSimple offers differential backups on the volume level. Azure file shares also have this ability, called share snapshots.
-
-Your migration jobs can only move backups, never data from the live volume. The most recent backup is closest to the live data, so it should always be part of the list of backups to be moved in a migration.
-
-Decide if you need to move any older backups during your migration. It's a best practice to keep this list as small as possible so your migration jobs complete faster.
-
-To identify critical backups that must be migrated, make a checklist of your backup policies. For example:
-
-* The most recent backup.
-* One backup a month for 12 months.
-* One backup a year for three years.
-
-When you create your migration jobs, you can use this list to identify the exact StorSimple volume backups that must be migrated to satisfy your requirements.
-
-It's best to suspend all StorSimple backup retention policies before you select a backup for migration. Migrating your backups can take several days or weeks. StorSimple offers backup retention policies that delete backups. Backups you've selected for this migration might get deleted before they've had a chance to be migrated.
-
-> [!CAUTION]
-> Selecting more than **50** StorSimple volume backups isn't supported.
-
-### Map your existing StorSimple volumes to Azure file shares
--
-### Number of storage accounts
-
-Your migration will likely benefit from deploying multiple storage accounts that each hold a smaller number of Azure file shares.
-
-If your file shares are highly active (utilized by many users or applications), two Azure file shares might reach the performance limit of your storage account. Because of this, it's often better to migrate to multiple storage accounts, each with their own individual file shares, and typically no more than two or three shares per storage account. A best practice is to deploy storage accounts with one file share each. You can pool multiple Azure file shares into the same storage account if those shares hold archival data.
-
-These considerations apply more to [direct cloud access](#direct-share-access-vs-azure-file-sync) (through an Azure VM or service) than to Azure File Sync. If you plan to exclusively use Azure File Sync on these shares, grouping several into a single Azure storage account is fine. In the future, you might want to lift and shift an app into the cloud that would then directly access a file share, as this scenario would benefit from having higher IOPS and throughput. Or you could start using a service in Azure that would also benefit from having higher IOPS and throughput.
-
-After making a list of your shares, map each share to the storage account where it will reside. Decide on an Azure region, and ensure each storage account and Azure File Sync resource matches the region you selected.
-
-> [!IMPORTANT]
-> Don't configure network and firewall settings for the storage accounts now. Making these configurations at this point would make a migration impossible. Configure these Azure storage settings after the migration is complete.
-
-### Storage account settings
-
-There are many configurations you can make on a storage account. Use the following checklist to confirm your storage account configurations. You can change the networking configuration after your migration is complete.
-
-> [!div class="checklist"]
-> * Firewall and virtual networks: Disabled - don't configure any IP restrictions or limit storage account access to a specific virtual network. The public endpoint of the storage account is used during the migration. All IP addresses from Azure VMs must be allowed. It's best to configure any firewall rules on the storage account after the migration. Configure both your source and target storage accounts this way.
-> * Private Endpoints: Supported - You can enable private endpoints, but the public endpoint is used for the migration and must remain available. This applies to both your source and target storage accounts.
-
-### Phase 1 summary
-
-At the end of Phase 1:
-
-* You have a good overview of your StorSimple devices and volumes.
-* The Data Manager service is ready to access your StorSimple volumes in the cloud because you've retrieved your service data encryption key for each StorSimple device.
-* You have a plan for which volumes and backups (if any beyond the most recent) need to be migrated.
-* You know how to map your volumes to the appropriate number of Azure file shares and storage accounts.
-
-## Phase 2: Deploy Azure storage and migration resources
-
-This section discusses considerations around deploying the different resource types that are needed in Azure. Some will hold your data post migration, and some are needed solely for the migration. Don't start deploying resources until you've finalized your deployment plan. It's difficult, sometimes impossible, to change certain aspects of your Azure resources after they've been deployed.
-
- :::column:::
- [![Deploy required resources - click to play!](media/storage-files-migration-storsimple-8000/video-2.png)](https://youtu.be/1k4jZgcC6jw?list=PLEq-KSMM-P-0tAnJ-9bslX-nrwWfL5hpi)
- <br />
- :::column-end:::
- :::column:::
- This video covers deployment of:
- - Storage accounts
- - Subscription(s) and resource groups
- - Storage accounts
- - Types and name(s)
- - Performance and share size
- - Location and replication types
- - Azure file shares
- - StorSimple Data Manager Service
- :::column-end:::
-
-### Deploy storage accounts
-
-You'll likely need to deploy several Azure storage accounts. Each one will hold a smaller number of Azure file shares, as per your deployment plan. Go to the Azure portal to [deploy your planned storage accounts](../common/storage-account-create.md#create-a-storage-account). Consider adhering to the following basic settings for any new storage account.
-
-> [!IMPORTANT]
-> Don't configure network and firewall settings for your storage accounts before or during your migration. Making those configurations at this point would make a migration impossible. The public endpoint must be accessible on source and target storage accounts. Limiting to specific IP ranges or virtual networks isn't supported. You can change the storage account networking configurations after the migration is complete.
-
-#### Subscription
-
-You can use the same subscription you used for your StorSimple deployment, or you can use a different one. The only limitation is that your subscription must be in the same Microsoft Entra tenant as the StorSimple subscription. Consider moving the StorSimple subscription to the appropriate tenant before you start a migration. You can only move the entire subscription, as individual StorSimple resources can't be moved to a different tenant or subscription.
-
-#### Resource group
-
-Resource groups in Azure assist with organization of resources and admin management permissions. [Find out more](../../azure-resource-manager/management/manage-resource-groups-portal.md#what-is-a-resource-group).
-
-#### Storage account name
-
-The name of your storage account will become part of a URL used to access your file share and has certain character limitations. In your naming convention, consider that storage account names must be globally unique, allow only lowercase letters and numbers, must be between 3 and 24 characters, and don't allow special characters like hyphens or underscores. See [Azure storage resource naming rules](../../azure-resource-manager/management/resource-name-rules.md#microsoftstorage).
-
-#### Location
-
-The Azure region of a storage account is important. If you use Azure File Sync, all your storage accounts must be in the same region as your Storage Sync Service resource. The Azure region you pick should be close or central to your local servers and users. After you deploy your resource, you can't change its region.
-
-You can pick a different region from where your StorSimple data (storage account) currently resides, however, if you do, [egress charges will apply](https://azure.microsoft.com/pricing/details/bandwidth) during the migration. Data will leave the StorSimple region and enter your new storage account region. No bandwidth charges apply if you stay within the same Azure region.
-
-#### Performance
-
-You have the option to pick premium storage (SSD) for Azure file shares or standard storage. Standard storage includes [several tiers for a file share](storage-how-to-create-file-share.md#change-the-tier-of-an-azure-file-share). Standard storage is the right option for most customers migrating from StorSimple.
-
-* Choose premium storage if you need the [performance of a premium Azure file share](understanding-billing.md#provisioned-model).
-* Choose standard storage for general-purpose file server workloads, which includes hot data and archive data. Also choose standard storage if the only workload on the share in the cloud will be Azure File Sync.
-* For premium file shares, choose *File shares* in the create storage account wizard.
-
-#### Replication
-
-There are several replication settings available. Only choose from the following two options:
-
-* *Locally redundant storage (LRS)*.
-* *Zone redundant storage (ZRS)*, which isn't available in all Azure regions.
-
-> [!NOTE]
-> Geo-redundant storage (GRS) and geo-zone-redundant storage (GZRS) aren't supported.
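-
-Putting these settings together, a minimal sketch of deploying one such storage account with the Az.Storage PowerShell module; the resource group, account name, and region are placeholders:
-
-``` powershell
-# Deploy a general-purpose v2 storage account with locally redundant storage.
-New-AzStorageAccount -ResourceGroupName 'rg-files-migration' -Name 'stmigrationhr01' `
-    -Location 'westeurope' -SkuName 'Standard_LRS' -Kind 'StorageV2'
-```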
-
-### Azure file shares
-
-After creating your storage accounts, go to the **File share** section of the storage account(s) and deploy the appropriate number of Azure file shares as per your migration plan from Phase 1. Consider adhering to the following basic settings for your new file shares in Azure.
-
- :::column:::
- :::image type="content" source="media/storage-files-migration-storsimple-8000/storage-files-migration-storsimple-8000-new-share.png" alt-text="An Azure portal screenshot showing the new file share UI.":::
- :::column-end:::
- :::column:::
- </br>**Name**</br>Lowercase letters, numbers, and hyphens are supported.</br></br>**Quota**</br>Quota here is comparable to an SMB hard quota on a Windows Server instance. The best practice is to not set a quota here because your migration and other services will fail when the quota is reached.</br></br>**Tiers**</br>Select **Transaction optimized** for your new file share. During the migration, many transactions will occur. It's more cost efficient to change your tier later to the tier best suited to your workload.
- :::column-end:::
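-
-Following the settings above, a sketch of creating a transaction-optimized share without a quota, using the Az.Storage module; resource names are placeholders:
-
-``` powershell
-# Create the Azure file share on the transaction optimized tier.
-New-AzRmStorageShare -ResourceGroupName 'rg-files-migration' -StorageAccountName 'stmigrationhr01' `
-    -Name 'hr' -AccessTier TransactionOptimized
-```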
-
-### StorSimple Data Manager
-
-The Azure resource that holds your migration jobs is called a **StorSimple Data Manager**. Select **New resource**, search for *StorSimple Data Manager*, and then select **Create**.
-
-This temporary resource is used for orchestration. You deprovision it after your migration completes. Make sure to deploy it in the same subscription, resource group, and region as your StorSimple storage account.
-
-### Azure File Sync
-
-With Azure File Sync, you can add on-premises caching of the most frequently accessed files. Similar to the caching abilities of StorSimple, the Azure File Sync cloud tiering feature offers local-access latency in combination with improved control over the available cache capacity on the Windows Server instance and multi-site sync. If having an on-premises cache is your goal, then in your local network, prepare a Windows Server VM (physical servers and failover clusters are also supported) with sufficient direct-attached storage capacity.
-
-> [!IMPORTANT]
-> Don't set up Azure File Sync yet. Deploying Azure File Sync shouldn't start before Phase 4 of a migration.
-
-### Phase 2 summary
-
-At the end of Phase 2, you'll have deployed your storage accounts and all Azure file shares across them. You'll also have a StorSimple Data Manager resource. You'll use the latter in Phase 3 when you configure your migration jobs.
-
-## Phase 3: Create and run a migration job
-
-This section describes how to set up a migration job and map the directories on a StorSimple volume that should be copied into the target Azure file share you select.
-
- :::column:::
- [![Create and run migration jobs - click to play!](media/storage-files-migration-storsimple-8000/video-3.png)](https://youtu.be/2hICfmrvk5s?list=PLEq-KSMM-P-0tAnJ-9bslX-nrwWfL5hpi)
- <br />
- :::column-end:::
- :::column:::
- This video covers:
- - Creating a migration job
- - Summary
- - Source
- - Selecting volume backups to migrate
- - Target
- - Directory mapping
- - Semantic rules
- - Running a migration Job
- - Run job definition
- - Viewing the state of the job
- - Running jobs in parallel
- - Interpreting the log files
- :::column-end:::
-
-To get started, go to your StorSimple Data Manager, find **Job definitions** on the menu, and select **+ Job definition**. The correct target storage type is the default: **Azure file share**.
-
-![StorSimple 8000 series migration job types.](media/storage-files-migration-storsimple-8000/storage-files-migration-storsimple-8000-new-job-type.png "A screenshot of the Job definitions Azure portal with a new Job definitions dialog box opened that asks for the type of job: Copy to a file share or a blob container.")
-
- :::column:::
- :::image type="content" source="media/storage-files-migration-storsimple-8000/storage-files-migration-storsimple-8000-new-job.png" alt-text="Screenshot of the new job creation form for a migration job.":::
- :::column-end:::
- :::column:::
-    **Job definition name**</br>This name should indicate the set of files you're moving. Giving it a name similar to your Azure file share is a good practice. </br></br>**Location where the job runs**</br>When selecting a region, you must select the same region as your StorSimple storage account or, if that isn't available, then a region close to it. </br></br><h3>Source</h3>**Source subscription**</br>Select the subscription in which you store your StorSimple Device Manager resource. </br></br>**StorSimple resource**</br>Select the StorSimple Device Manager your appliance is registered with. </br></br>**Service data encryption key**</br>Check this [prior section in this article](#storsimple-service-data-encryption-key) in case you can't locate the key in your records. </br></br>**Device**</br>Select the StorSimple device that holds the volume you want to migrate. </br></br>**Volume**</br>Select the source volume. Later you'll decide if you want to migrate the whole volume or subdirectories into the target Azure file share. </br></br> **Volume backups**</br>You can select *Select volume backups* to choose specific backups to move as part of this job. An upcoming, [dedicated section in this article](#selecting-volume-backups-to-migrate) covers the process in detail.</br></br><h3>Target</h3>Select the subscription, storage account, and Azure file share as the target of this migration job.</br></br><h3>Directory mapping</h3>[A dedicated section in this article](#directory-mapping) discusses all relevant details.
- :::column-end:::
-
-### Selecting volume backups to migrate
-
-There are important aspects around choosing backups that need to be migrated:
-
-* Your migration jobs can only move backups, not live volume data. So the most recent backup is closest to the live data and should always be on the list of backups moved in a migration. When you open the Backup selection dialog, it's selected by default.
-* Make sure your latest backup is recent to keep the delta to the live share as small as possible. It could be worth manually triggering and completing another volume backup before creating a migration job. A small delta to the live share improves your migration experience. If this delta can be zero, meaning that no more changes to the StorSimple volume happened after the newest backup was taken in your list, then the user cut-over will be drastically simplified and sped up.
-* Backups must be played back into the Azure file share **from oldest to newest**. An older backup can't be "sorted into" the list of backups on the Azure file share after running a migration job. Therefore you must ensure that your list of backups is complete *before* you create a job.
-* This list of backups in a job can't be modified once the job is created, even if the job never ran.
-* In order to select backups, the StorSimple volume you want to migrate must be online.
-
- :::column:::
- :::image type="content" source="media/storage-files-migration-storsimple-8000/storage-files-migration-storsimple-8000-job-select-backups.png" alt-text="A screenshot of the new job creation form detailing the portion where StorSimple backups are selected for migration." lightbox="media/storage-files-migration-storsimple-8000/storage-files-migration-storsimple-8000-job-select-backups-expanded.png":::
- :::column-end:::
- :::column:::
-    To select backups of your StorSimple volume for your migration job, select *Select volume backups* on the job creation form.
- :::column-end:::
- :::column:::
- :::image type="content" source="media/storage-files-migration-storsimple-8000/storage-files-migration-storsimple-8000-job-select-backups-annotated.png" alt-text="An image showing that the upper half of the blade for selecting backups lists all available backups. A selected backup will be grayed-out in this list and added to a second list on the lower half of the blade. There it can also be deleted again." lightbox="media/storage-files-migration-storsimple-8000/storage-files-migration-storsimple-8000-job-select-backups-annotated.png":::
- :::column-end:::
- :::column:::
-    When the backup selection blade opens, it's separated into two lists. In the first list, all available backups are displayed. You can expand and narrow the result set by filtering for a specific time range (see the next section). </br></br>A selected backup displays as grayed-out and is added to a second list on the lower half of the blade. The second list displays all the backups selected for migration. A backup selected in error can be removed again.
- > [!CAUTION]
- > You must select **all** backups you wish to migrate. You can't add older backups later. You can't modify the job to change your selection once the job is created.
- :::column-end:::
- :::column:::
- :::image type="content" source="media/storage-files-migration-storsimple-8000/storage-files-migration-storsimple-8000-job-select-backups-time.png" alt-text="A screenshot showing the selection of a time range of the backup selection blade." lightbox="media/storage-files-migration-storsimple-8000/storage-files-migration-storsimple-8000-job-select-backups-time-expanded.png":::
- :::column-end:::
- :::column:::
- By default, the list is filtered to show the StorSimple volume backups within the past seven days. The most recent backup is selected by default, even if it didn't occur in the past seven days. For older backups, use the time range filter at the top of the blade. You can either select from an existing filter or set a custom time range to filter for only the backups taken during this period.
- :::column-end:::
-
-> [!CAUTION]
-> Selecting more than 50 StorSimple volume backups isn't supported. Jobs with a large number of backups may fail. Make sure your backup retention policies don't delete a selected backup before it got a chance to be migrated!
-
-### Directory mapping
-
-Directory mapping is optional for your migration job. If you leave the section empty, *all* the files and folders on the root of your StorSimple volume will be moved into the root of your target Azure file share. In most cases, storing an entire volume's content in an Azure file share isn't the best approach. It's often better to split a volume's content across multiple file shares in Azure. If you haven't made a plan already, see [Map your StorSimple volume to Azure file shares](#map-your-existing-storsimple-volumes-to-azure-file-shares) first.
-
-As part of your migration plan, you might have decided that the folders on a StorSimple volume need to be split across multiple Azure file shares. If that's the case, you can accomplish that split by:
-
-1. Defining multiple jobs to migrate the folders on one volume. Each will have the same StorSimple volume source but a different Azure file share as the target.
-1. Specifying precisely which folders from the StorSimple volume need to be migrated into the specified file share by using the **Directory-mapping** section of the job creation form and following the specific [mapping semantics](#semantic-elements).
-
-> [!IMPORTANT]
-> The paths and mapping expressions in this form can't be validated when the form is submitted. If mappings are specified incorrectly, a job might either fail completely or produce an undesirable result. In that case, it's usually best to delete the Azure file share, re-create it, and then fix the mapping statements in a new migration job for the share. Running a new job with fixed mapping statements can fix omitted folders and bring them into the existing share. However, only folders that were omitted because of path misspellings can be addressed this way.
-
-#### Semantic elements
-
-A mapping is expressed from left to right: [\source path] \> [\target path].
-
-|Semantic character | Meaning |
-|:|:|
-| **\\** | Root level indicator. |
-| **\>** | [Source] and [target-mapping] operator. |
-|**\|** or RETURN (new line) | Separator of two folder-mapping instructions. </br>Alternatively, you can omit this character and select **Enter** to get the next mapping expression on its own line. |
-
-### Examples
-
-Moves the content of folder *User data* to the root of the target file share:
-``` console
-\User data > \
-```
-Moves the entire volume content into a new path on the target file share:
-``` console
-\ > \Apps\HR tracker
-```
-Moves the source folder content into a new path on the target file share:
-``` console
-\HR resumes-Backup > \Backups\HR\resumes
-```
-Sorts multiple source locations into a new directory structure:
-``` console
-\HR\Candidate Tracker\v1.0 > \Apps\Candidate tracker
-\HR\Candidates\Resumes > \HR\Candidates\New
-\Archive\HR\Old Resumes > \HR\Candidates\Archived
-```
-
-### Semantic rules
-
-* Always specify folder paths relative to the root level.
-* Begin each folder path with a root level indicator "\\".
-* Don't include drive letters.
-* When specifying multiple paths, source or target paths can't overlap:</br>
- Invalid source path overlap example:</br>
- *\\folder\1 > \\folder*</br>
- *\\folder\\1\\2 > \\folder2*</br>
- Invalid target path overlap example:</br>
- *\\folder > \\*</br>
- *\\folder2 > \\*</br>
-* Source folders that don't exist are ignored.
-* Folder structures that don't exist on the target are created.
-* As in Windows, folder names are case insensitive but case preserving.
-
-> [!NOTE]
-> Contents of the *\System Volume Information* folder and the *$Recycle.Bin* on your StorSimple volume won't be copied by the migration job.
-
-### Run a migration job
-
-Your migration jobs are listed under *Job definitions* in the Data Manager resource you've deployed to a resource group. From the list of job definitions, select the job you want to run.
-
-In the job blade that opens, you can see your job's current status and a list of backups you've selected. The list of backups is sorted from oldest to newest and will be migrated to your Azure file share in this order.
-
- :::column:::
- :::image type="content" source="media/storage-files-migration-storsimple-8000/storage-files-migration-storsimple-8000-job-never-ran-focused.png" alt-text="Screenshot of the migration job blade with a highlight around the command to start the job. It also displays the selected backups scheduled for migration." lightbox="media/storage-files-migration-storsimple-8000/storage-files-migration-storsimple-8000-job-never-ran.png":::
- :::column-end:::
- :::column:::
- Initially, the migration job will have the status: **Never ran**. </br>When you're ready, start the migration job. Select the image for a version with higher resolution. </br> When a backup is successfully migrated, an automatic Azure file share snapshot will be taken. The original backup date of your StorSimple backup is placed in the *Comments* section of the Azure file share snapshot. Utilizing this field allows you to see when the data was originally backed up as compared to the time the file share snapshot was taken.
- :::column-end:::
-
-> [!CAUTION]
-> Backups must be processed from oldest to newest. Once a migration job is created, you can't change the list of selected StorSimple volume backups. Don't start the job if the list of backups is incorrect or incomplete. Delete the job and make a new one with the correct backups selected. For each selected backup, check your retention schedules. Backups might get deleted by one or more of your retention policies before they get a chance to be migrated!
-
-### Per-item errors
-
-The migration jobs have two columns in the list of backups that list any issues that might have occurred during the copy:
-
-* Copy errors </br>This column lists files or folders that should have been copied but weren't. These errors are often recoverable. When a backup lists item issues in this column, review the copy logs. If you need to migrate these files, select **Retry backup**. This option becomes available once the backup finishes processing. The [Managing a migration job](#manage-a-migration-job) section explains your options in more detail.
-* Unsupported files </br>This column lists files or folders that can't be migrated. Azure Storage has limitations in file names, path lengths, and file types that currently or logically can't be stored in an Azure file share. A migration job won't pause for these kinds of errors. Retrying migration of the backup won't change the result. When a backup lists item issues in this column, review the copy logs and take note. If such issues arise in your last backup and you found in the copy log that the failure was due to a file name, path length, or other issue you have influence over, you might want to remedy the issue in the live StorSimple volume, take a StorSimple volume backup, and create a new migration job with just that backup. You can then migrate this remedied namespace and it will become the most recent / live version of the Azure file share. This is a manual and time consuming process. Review the copy logs carefully and evaluate if it's worth it.
-
-These copy logs are *\*.csv* files listing namespace items that succeeded and items that failed to copy. The errors are further split into the previously discussed categories. From the log file location, you can find logs for failed files by searching for "failed". The result should be a set of logs for files that failed to copy. Sort these logs by size. There might be extra logs produced at 17 bytes in size. They're empty and can be ignored. With a sort, you can focus on the logs with content.
-
-The same process applies for log files recording successful copies.
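-
-If you prefer to script this triage, a sketch along these lines works for both failed and succeeded logs; the log folder and file name pattern are placeholders:
-
-``` powershell
-# Surface the non-empty copy logs for failed items, largest first.
-# 17-byte logs are empty, so anything larger has content to review.
-Get-ChildItem 'C:\MigrationLogs' -Filter '*failed*.csv' |
-    Where-Object Length -gt 17 |
-    Sort-Object Length -Descending
-```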
-
-### Manage a migration job
-
-Migration jobs have the following states:
-
-* **Never ran** </br>A new job that has been defined but never run.
-* **Waiting** </br>A job in this state is waiting for resources to be provisioned in the migration service. It will automatically switch to a different state when ready.
-* **Failed** </br>A failed job hit a fatal error that prevents it from processing more backups. A job isn't expected to enter this state. A support request is the best course of action.
-* **Canceled** / **Canceling**</br>Either an entire migration job or individual backups within the job can be canceled. Canceled backups won't be processed; a canceled migration job stops processing backups. Expect that canceling a job can take a long time. This doesn't prevent you from creating a new job. The best course of action is to let a job fully arrive in the **Canceled** state. You can either ignore failed/canceled jobs or delete them later. You don't have to delete jobs before you can delete the Data Manager resource at the end of your StorSimple migration.
--
- :::column:::
- :::image type="content" source="media/storage-files-migration-storsimple-8000/storage-files-migration-storsimple-8000-job-running-focused.png" alt-text="Screenshot of the migration job blade with a large status icon on the top in the running state." lightbox="media/storage-files-migration-storsimple-8000/storage-files-migration-storsimple-8000-job-running.png":::
- :::column-end:::
- :::column:::
- **Running** </br></br>A running job is currently processing a backup. Refer to the table on the bottom half of the blade to see which backup is currently being processed and which ones might have been migrated already. </br>Already migrated backups have a column with a link to a copy log. If a backup reports any errors, you should review its copy log.
- :::column-end:::
- :::column:::
- :::image type="content" source="media/storage-files-migration-storsimple-8000/storage-files-migration-storsimple-8000-job-paused-focused.png" alt-text="Screenshot of the migration job blade with a large status icon on the top in the paused state." lightbox="media/storage-files-migration-storsimple-8000/storage-files-migration-storsimple-8000-job-paused.png":::
- :::column-end:::
- :::column:::
-    **Paused** </br></br>A migration job is paused when a decision is needed. This condition enables two command buttons on the top of the blade: </br>Choose **Retry backup** when the backup shows files that were supposed to move but didn't (*Copy error* column). </br>Choose **Skip backup** when the backup is missing (was deleted by policy since you created the migration job) or when the backup is corrupt. You can find detailed error information in the blade that opens when you select the failed backup. </br></br>When you *skip* or *retry* the current backup, the migration service will create a new snapshot in your target Azure file share. You might want to delete the previous one later, as it's likely incomplete.
- :::column-end:::
- :::column:::
- :::image type="content" source="media/storage-files-migration-storsimple-8000/storage-files-migration-storsimple-8000-job-success-focused.png" alt-text="An image showing the migration job blade with a large status icon on the top in the complete state." lightbox="media/storage-files-migration-storsimple-8000/storage-files-migration-storsimple-8000-job-success.png":::
- :::column-end:::
- :::column:::
- **Complete** and **Complete with warnings**</br></br>A migration job is listed as **Complete** when all backups in the job have been successfully processed. </br>**Complete with warnings** is a state that occurs when: <ul><li>A backup ran into a recoverable issue. This backup is marked as *partial success* or *failed*.</li><li>You decided to continue on the paused job by skipping the backup with said issues. (You chose *Skip backup* instead of *Retry backup*)</li></ul> If the migration job completes with warnings, you should always review the copy log for the relevant backups.
- :::column-end:::
-
-#### Run jobs in parallel
-
-You will likely have multiple StorSimple volumes, each with their own shares that must be migrated to an Azure file share. It's important that you understand how much you can do in parallel. There are limitations that aren't enforced in the user experience and will either degrade or inhibit a complete migration if jobs are executed at the same time.
-
-There are no limits on defining migration jobs. You can define multiple jobs that use the same StorSimple source volume or the same target Azure file share, across the same or different StorSimple appliances. However, running them has limitations:
-
-* Only one migration job with the same StorSimple source volume can run at the same time.
-* Only one migration job with the same target Azure file share can run at the same time.
-* Before starting the next job, ensure that any previously started jobs are in the `copy stage` and have shown progress moving files for at least 30 minutes.
-* You can run up to four migration jobs in parallel per StorSimple device manager, as long as you abide by the previous rules.
-
-When you attempt to start a migration job, the previous rules are checked. If there are jobs running, you might not be able to start a new job. You'll receive an alert that lists the name of currently running job(s) that must finish before you can start the new job.
-
-> [!TIP]
-> It's a good idea to regularly check your migration jobs in the *Job definition* tab of your *Data Manager* resource to see if any of them have paused and need your input to complete.
-
-### Phase 3 summary
-
-At the end of Phase 3, you'll have run at least one of your migration jobs from StorSimple volumes into Azure file share(s). With your run, you will have migrated your specified backups into Azure file share snapshots. You can now focus on either setting up Azure File Sync for the share (once migration jobs for a share have completed) or direct-share-access for your information workers and apps to the Azure file share.
-
-## Phase 4: Access your Azure file shares
-
-There are two main strategies for accessing your Azure file shares:
-
-* **Azure File Sync**: [Deploy Azure File Sync](#deploy-azure-file-sync) to an on-premises Windows Server instance. Azure File Sync has all the advantages of a local cache, just like StorSimple.
-* **Direct-share-access**: [Deploy direct-share-access](#deploy-direct-share-access). Use this strategy if your access scenario for a given Azure file share won't benefit from local caching, or if you no longer have the ability to host an on-premises Windows Server instance. Here, your users and apps will continue to access SMB shares over the SMB protocol. These shares are no longer on an on-premises server but directly in the cloud.
-
-You should have already decided which option is best for you in [Phase 1](#phase-1-prepare-for-migration) of this guide.
-
-The remainder of this section focuses on deployment instructions.
-
- :::column:::
- [![Access options for Azure file shares - click to play!](media/storage-files-migration-storsimple-8000/video-4.png)](https://youtu.be/YSaYeX19fsc?list=PLEq-KSMM-P-0tAnJ-9bslX-nrwWfL5hpi)
- <br />
- :::column-end:::
- :::column:::
- This video covers:
- - Approaches to access Azure file shares
- - Azure File Sync
- - Direct-share-access
- - Deploying Azure File Sync
- - Deploy the Azure File Sync cloud resource
- - Deploy an on-premises Windows Server instance
- - Preparing the Windows Server instance for Azure File Sync
- - Configuring Azure File Sync on the Windows Server instance
- - Monitoring initial sync
- - Testing Azure File Sync
- - Creating the SMB shares
- :::column-end:::
-
-### Deploy Azure File Sync
-
-It's time to deploy a part of Azure File Sync.
-
-1. Create the Azure File Sync cloud resource.
-1. Deploy the Azure File Sync agent on your on-premises server.
-1. Register the server with the cloud resource.
-
-Don't create any sync groups yet. Setting up sync with an Azure file share should only occur after your migration jobs to an Azure file share have completed. If you start using Azure File Sync before your migration completes, it will make your migration unnecessarily difficult because you won't be able to easily tell when it's time to initiate the cut-over.
-
-#### Deploy the Azure File Sync cloud resource
--
-> [!TIP]
-> If you want to change the Azure region your data resides in after the migration is finished, deploy the Storage Sync Service in the same region as the target storage accounts for this migration.
-
-#### Deploy an on-premises Windows Server instance
-
-* Deploy a Windows Server 2019 instance (Windows Server 2012 R2 at a minimum) as a virtual machine or physical server. A Windows Server failover cluster is also supported. Don't reuse the server fronting the StorSimple 8100 or 8600.
-* Provision or add direct-attached storage. Network-attached storage isn't supported.
-
-It's best practice to give your new Windows Server instance an equal or larger amount of storage than your StorSimple 8100 or 8600 appliance has locally available for caching. You'll use the Windows Server instance the same way you used the StorSimple appliance. If it has the same amount of storage as the appliance, the caching experience should be similar, if not the same. You can add or remove storage from your Windows Server instance at will. This capability enables you to scale your local volume size and the amount of local storage available for caching.
-
-#### Prepare the Windows Server instance for file sync
--
-#### Configure Azure File Sync on the Windows Server instance
-
-Your registered on-premises Windows Server instance must be ready and connected to the internet for this process.
-
-> [!IMPORTANT]
-> Your StorSimple migration of files and folders into the Azure file share must be complete before you proceed. Make sure no more changes are made to the file share.
--
-> [!IMPORTANT]
-> Be sure to turn on cloud tiering. Cloud tiering is the Azure File Sync feature that allows the local server to have less storage capacity than is stored in the cloud, yet have the full namespace available. Locally interesting data is also cached locally for fast performance. Another reason to turn on cloud tiering at this step is that we don't want to sync file content at this stage. Only the namespace should be moving at this time.
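-
-When you're ready to configure sync, a minimal sketch of creating a sync group and its cloud endpoint with the Az.StorageSync module follows; resource names are placeholders, and `$storageAccount` is assumed to come from `Get-AzStorageAccount`. Add the server endpoint with cloud tiering turned on afterward:
-
-``` powershell
-# A sketch: create the sync group and attach the migrated Azure file share
-# as its cloud endpoint.
-$storageAccount = Get-AzStorageAccount -ResourceGroupName 'rg-files-migration' `
-    -Name 'stmigrationhr01'
-New-AzStorageSyncGroup -ResourceGroupName 'rg-files-migration' `
-    -StorageSyncServiceName 'mysyncservice' -Name 'hr-sync'
-New-AzStorageSyncCloudEndpoint -ResourceGroupName 'rg-files-migration' `
-    -StorageSyncServiceName 'mysyncservice' -SyncGroupName 'hr-sync' `
-    -Name 'hr-cloud-endpoint' -StorageAccountResourceId $storageAccount.Id `
-    -AzureFileShareName 'hr'
-```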
-
-### Deploy direct-share-access
-
- :::column:::
- > [!VIDEO https://www.youtube-nocookie.com/embed/jd49W33DxkQ]
- :::column-end:::
- :::column:::
- This video is a guide and demo for how to securely expose Azure file shares directly to information workers and apps in five simple steps.</br>
- The video references dedicated documentation for the following topics. Note that Azure Active Directory is now Microsoft Entra ID. For more info, see [New name for Azure AD](https://aka.ms/azureadnewname).
-
-* [Identity overview](storage-files-active-directory-overview.md)
-* [How to domain join a storage account](storage-files-identity-auth-active-directory-enable.md)
-* [Networking overview for Azure file shares](storage-files-networking-overview.md)
-* [How to configure public and private endpoints](storage-files-networking-endpoints.md)
-* [How to configure a S2S VPN](storage-files-configure-s2s-vpn.md)
-* [How to configure a Windows P2S VPN](storage-files-configure-p2s-vpn-windows.md)
-* [How to configure a Linux P2S VPN](storage-files-configure-p2s-vpn-linux.md)
-* [How to configure DNS forwarding](storage-files-networking-dns.md)
-* [Configure DFS-N](/windows-server/storage/dfs-namespaces/dfs-overview)
- :::column-end:::
-
-### Phase 4 summary
-
-At the end of this phase, you've created and run multiple migration jobs in your StorSimple Data Manager. Those jobs have migrated your files and folders and their backups to Azure file shares. You've also deployed Azure File Sync or prepared your network and storage accounts for direct-share-access.
-
-## Phase 5: User cut-over
-
-In this phase, you'll complete your migration:
-
-* Plan your downtime.
-* Catch up with any changes your users and apps produced on the StorSimple side while the migration jobs in Phase 3 were running.
-* Fail over your users to the new Windows Server instance with Azure File Sync or to the Azure file shares via direct-share-access.
-
- :::column:::
- [![Steps to cut over a workload to Azure file shares - click to play!](media/storage-files-migration-storsimple-8000/video-5.png)](https://youtu.be/gyu1vdK-Lj8?list=PLEq-KSMM-P-0tAnJ-9bslX-nrwWfL5hpi)
- <br />
- :::column-end:::
- :::column:::
- This video covers:
- - Steps to take before your workload cut-over
- - Executing your cut-over
- - Post cut-over steps
- :::column-end:::
-
-### Plan your downtime
-
-This migration approach requires some downtime for your users and apps. The goal is to keep downtime to a minimum. The following considerations can help:
-
-* Keep your StorSimple volumes available while running your migration jobs.
-* When you've finished running your data migration jobs for a share, it's time to remove user access (at least write access) from the StorSimple volumes or shares. A final RoboCopy will catch up your Azure file share. Then you can cut over your users. Where you run RoboCopy depends on whether you chose to use Azure File Sync or direct-share-access. The upcoming section covers that subject.
-* After you've completed the RoboCopy catch-up, you're ready to expose the new location to your users by either the Azure file share directly or an SMB share on a Windows Server instance with Azure File Sync. Often a DFS-N deployment will help accomplish a cut-over quickly and efficiently. It will keep your existing share addresses consistent and repoint to a new location that contains your migrated files and folders.
-
-For archival data, it's a fully viable approach to take downtime on your StorSimple volume (or subfolder), take one more StorSimple volume backup, migrate, and then open up the migration destination for access by users and apps. This will spare you the need for a catch-up RoboCopy. However, this approach comes at the cost of a prolonged downtime window that might stretch to several days or longer depending on the number of files and backups you need to migrate. This is likely only an option for archival workloads that can do without write access for prolonged periods of time.
-
-### Determine when your namespace has fully synced to your server
-
-When you use Azure File Sync for an Azure file share, it's important to determine that your entire namespace has finished downloading to the server *before* you start any local RoboCopy. The time it takes to download your namespace depends on the number of items in your Azure file share. There are two methods for determining whether your namespace has fully arrived on the server.
-
-#### Azure portal
-
-You can use the Azure portal to see when your namespace has fully arrived.
-
-* Sign in to the Azure portal, and go to your sync group. Check the sync status of your sync group and server endpoint.
-* The interesting direction is download. If the server endpoint is newly provisioned, it will show **Initial sync**, which indicates the namespace is still coming down. After that state changes to anything but **Initial sync**, your namespace will be fully populated on the server.
-
-You can now proceed with a local RoboCopy.
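
If you'd rather script this check than watch the portal, the same status is exposed through the `Az.StorageSync` PowerShell module. A minimal sketch, with placeholder resource names; the exact status property names can vary by module version:

```azurepowershell
# List the server endpoints in the sync group and inspect their sync status.
$endpoints = Get-AzStorageSyncServerEndpoint `
    -ResourceGroupName 'myResourceGroup' `
    -StorageSyncServiceName 'myStorageSyncService' `
    -SyncGroupName 'mySyncGroup'

# Start the local RoboCopy only after the download activity has progressed
# past the initial sync of the namespace.
$endpoints | Select-Object FriendlyName, ProvisioningState, SyncStatus
```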
-
-#### Windows Server Event Viewer
-
-You can also use the Event Viewer on your Windows Server instance to tell when the namespace has fully arrived.
-
-1. Open the **Event Viewer**, and go to **Applications and Services**.
-1. Go to and open **Microsoft\FileSync\Agent\Telemetry**.
-1. Look for the most recent **event 9102**, which corresponds to a completed sync session.
-1. Select **Details**, and confirm that you're looking at an event where the **SyncDirection** value is **Download**.
-1. When your namespace has finished downloading to the server, there will be a single event with **Scenario** = **FullGhostedSync** and **HResult** = **0**.
-1. If you miss that event, you can also look for other **9102 events** with **SyncDirection** = **Download** and **Scenario** = **"RegularSync"**. Finding one of these events also indicates that the namespace has finished downloading and that sync has progressed to regular sync sessions, whether or not there's anything to sync at this time.
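
To script this check instead of browsing Event Viewer, you can query the telemetry channel with PowerShell. A minimal sketch; the channel name and the event-data field names (`SyncDirection`, `Scenario`, `HResult`) are assumptions to verify against an actual 9102 event on your server:

```powershell
# Scan recent 9102 telemetry events for a completed download sync session.
$channel = 'Microsoft-FileSync-Agent/Telemetry'   # assumed channel name
Get-WinEvent -LogName $channel -MaxEvents 500 |
    Where-Object Id -eq 9102 |
    ForEach-Object {
        # Pull the named fields out of the event's XML payload.
        $xml  = [xml]$_.ToXml()
        $data = @{}
        foreach ($d in $xml.Event.EventData.Data) { $data[$d.Name] = $d.'#text' }
        if ($data['SyncDirection'] -eq 'Download' -and $data['HResult'] -eq '0' -and
            $data['Scenario'] -in @('FullGhostedSync', 'RegularSync')) {
            "$($_.TimeCreated): namespace download is complete ($($data['Scenario']))"
        }
    }
```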
-
-### A final RoboCopy
-
-At this point, there are differences between your on-premises Windows Server instance and the StorSimple 8100 or 8600 appliance.
-
-1. You need to catch up with the changes that users or apps produced on the StorSimple side while the migration was ongoing.
-1. For cases where you use Azure File Sync: The StorSimple appliance has a populated cache, whereas the Windows Server instance has only a namespace with no file content stored locally at this time. The final RoboCopy can help jump-start your local Azure File Sync cache by pulling over as much locally cached file content as is available and can fit on the Azure File Sync server.
-1. Some files might have been left behind by the migration job because of invalid characters. If so, copy them to the Azure File Sync-enabled Windows Server instance. Later, you can adjust them so that they will sync. If you don't use Azure File Sync for a particular share, you're better off renaming the files with invalid characters on the StorSimple volume. Then run the RoboCopy directly against the Azure file share.
-
-> [!WARNING]
-> Robocopy in Windows Server 2019 experienced an issue that caused files tiered by Azure File Sync on the target server to be recopied from the source and re-uploaded to Azure when using the `/MIR` function. We recommend running Robocopy on a version of Windows Server other than 2019, such as Windows Server 2016.
-
-> [!WARNING]
-> You *must not* start the RoboCopy before the server has the namespace for an Azure file share downloaded fully. For more information, see [Determine when your namespace has fully downloaded to your server](#determine-when-your-namespace-has-fully-synced-to-your-server).
-
 You only want to copy files that were changed after the migration job last ran and files that haven't moved through these jobs before. You can investigate why those files didn't move later, on the server, after the migration is complete. For more information, see [Azure File Sync troubleshooting](/troubleshoot/azure/azure-storage/file-sync-troubleshoot-sync-errors#how-do-i-see-if-there-are-specific-files-or-folders-that-are-not-syncing?toc=/azure/storage/file-sync/toc.json).
-
-RoboCopy has several parameters. The following example showcases a finished command and a list of reasons for choosing these parameters.
--
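
A representative catch-up command, assembled from the parameter reasoning in this section. Treat it as a sketch: the paths and log location are placeholders, and you should verify each flag against your own requirements (add `/L` first to simulate the run without copying anything):

```powershell
# Build the argument list so each flag can carry a comment; paths are placeholders.
$robocopyArgs = @(
    '\\StorSimpleServer\Share1'        # source: the share fronting the StorSimple volume
    'F:\Share1'                        # target: the matching root on the Windows Server instance
    '/MT:16'                           # multithreaded copy; tune the thread count to your bandwidth
    '/R:2', '/W:1'                     # limit retries and wait time on locked files
    '/B'                               # backup mode, to traverse ACL-restricted folders
    '/MIR'                             # mirror, including deletions; source and target roots must match
    '/IT'                              # include "tweaked" files that differ only in attributes
    '/COPY:DATSO'                      # copy data, attributes, timestamps, security, and owner
    '/DCOPY:DAT'                       # copy directory data, attributes, and timestamps
    '/NP', '/NFL', '/NDL'              # trim console output on large namespaces
    '/UNILOG:C:\robocopy-catchup.log'  # Unicode log file to review afterward
)
& robocopy $robocopyArgs
```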
-When you configure source and target locations of the RoboCopy command, make sure you review the structure of the source and target to ensure they match. If you used the directory-mapping feature of the migration job, your root-directory structure might be different from the structure of your StorSimple volume. If that's the case, you might need multiple RoboCopy jobs, one for each subdirectory. If you're unsure whether the command will perform as expected, you can use the */L* parameter, which simulates the command without actually making any changes.
-
-This RoboCopy command uses `/MIR`, so it won't move files that are the same (tiered files, for instance). But if you get the source and target path wrong, `/MIR` also purges directory structures on your Windows Server instance or Azure file share that aren't present on the StorSimple source path. They must match exactly for the RoboCopy job to reach its intended goal of updating your migrated content with the latest changes made while the migration is ongoing.
-
-Consult the RoboCopy log file to see if files have been left behind. If issues exist, fix them, and rerun the RoboCopy command. Don't deprovision any StorSimple resources before you fix outstanding issues for files or folders you care about.
-
-If you don't use Azure File Sync to cache the particular Azure file share in question but instead opted for direct-share-access:
-
-1. [Mount your Azure file share](storage-how-to-use-files-windows.md#mount-the-azure-file-share) as a network drive to a local Windows machine.
-1. Perform the RoboCopy between your StorSimple and the mounted Azure file share. If files don't copy, fix up their names on the StorSimple side to remove invalid characters. Then retry RoboCopy. The previously listed RoboCopy command can be run multiple times without causing unnecessary recall to StorSimple.
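
A sketch of that flow, with a placeholder storage account, share, and key, reusing the flag reasoning from the previous example:

```powershell
# Mount the Azure file share as drive Z: (all values are placeholders),
# then run the catch-up RoboCopy directly against the mounted share.
$account = 'mystorageaccount'
$key     = '<storage-account-key>'
net use Z: "\\$account.file.core.windows.net\share1" $key /user:"Azure\$account"

# /COPY:DAT here because owner and ACL handling over a key-mounted share
# differs from a domain-joined server; adjust /COPY to your identity setup.
robocopy '\\StorSimpleServer\Share1' 'Z:\' /MT:16 /R:2 /W:1 /MIR /IT /COPY:DAT /DCOPY:DAT /NP /NFL /NDL /UNILOG:C:\robocopy-das.log
```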
-
-### Troubleshoot and optimize
--
-### User cut-over
-
-If you use Azure File Sync, you likely need to create the SMB shares on that Azure File Sync-enabled Windows Server instance that match the shares you had on the StorSimple volumes. You can front-load this step and do it earlier so you don't lose time here; a scripted sketch follows below. But you must ensure that before this point, nobody has access to cause changes to the Windows Server instance.
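
A minimal sketch for scripting that share creation, with placeholder share names, paths, and security groups:

```powershell
# Recreate the shares that existed on the StorSimple volume. Run this only
# while nobody else has access to the server; all names are placeholders.
$shares = @(
    @{ Name = 'Share1'; Path = 'F:\Share1' },
    @{ Name = 'Share2'; Path = 'F:\Share2' }
)
foreach ($share in $shares) {
    New-SmbShare -Name $share.Name -Path $share.Path `
        -FullAccess 'CONTOSO\FileServerAdmins' `
        -ChangeAccess 'CONTOSO\FileShareUsers'
}
```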
-
-If you have a DFS-N deployment, you can point the DFS-N namespaces to the new server folder locations. If you don't have a DFS-N deployment, and you fronted your 8100 or 8600 appliance locally with a Windows Server instance, you can take that server off the domain. Then domain-join your new Azure File Sync-enabled Windows Server instance. During that process, give the server the same server name and share names as the old server so that the cut-over remains transparent for your users, group policy, and scripts.
-
-Learn more about [DFS-N](/windows-server/storage/dfs-namespaces/dfs-overview).
-
-## Phase 6: Deprovision
-
-When you deprovision a resource, you lose access to the configuration of that resource and its data. Deprovisioning can't be undone. Don't proceed until you've confirmed that:
-
-* Your migration is complete.
-* There are no dependencies whatsoever on the StorSimple files, folders, or volume backups that you're about to deprovision.
-
-Before you begin, it's a best practice to observe your new Azure File Sync deployment in production for a while. That time gives you the opportunity to fix any problems you might encounter. After you've observed your Azure File Sync deployment for at least a few days, you can begin to deprovision resources in this order:
-
-1. Deprovision your StorSimple Data Manager resource via the Azure portal. All of your DTS jobs will be deleted with it. You won't be able to easily retrieve the copy logs. If they're important for your records, retrieve them before you deprovision.
-1. Make sure that your StorSimple physical appliances have been migrated, and then unregister them. If you aren't sure that they've been migrated, don't proceed. If you deprovision these resources while they're still necessary, you won't be able to recover the data or their configuration.<br>Optionally you can first deprovision the StorSimple volume resource, which will clean up the data on the appliance. This process can take several days and won't forensically zero out the data on the appliance. If this is important to you, handle disk zeroing separately from the resource deprovisioning and according to your policies.
-1. If there are no more registered devices left in a StorSimple Device Manager, you can proceed to remove that Device Manager resource itself.
-1. It's now time to delete the StorSimple storage account in Azure. Again, stop and confirm your migration is complete and that nothing and no one depends on this data before you proceed.
-1. Unplug the StorSimple physical appliance from your data center.
-1. If you own the StorSimple appliance, you're free to recycle it. If your device is leased, inform the lessor and return the device as appropriate.
-
-Your migration is complete.
---
-> [!NOTE]
-> Still have questions or encountered any issues?</br>
-> We're here to help: :::image type="content" source="media/storage-files-migration-storsimple-8000/storage-files-migration-storsimple-8000-migration-email.png" alt-text="Email address in one word: Azure Files migration at microsoft dot com":::
-
-## Troubleshooting
-
-When using the StorSimple Data Manager migration service, an entire migration job or individual files can fail for various reasons. The [File fidelity](#file-fidelity) section has more details on supported and unsupported scenarios. The following tables list error codes, error details, and, where possible, mitigation options.
-
-### Job level errors
-
-|Phase |Error |Details / Mitigation |
-|||-|
-|**Backup** |*Could not find a backup for the parameters specified* |The backup selected for the job run is not found at the time of "Estimation" or "Copy". Ensure that the backup is still present in the StorSimple backup catalog. Sometimes automatic backup retention policies delete backups between selecting them for migration and actually running the migration job for this backup. Consider disabling any backup retention schedules before starting a migration. |
-|**Estimation </br> Configure compute** |*Installation of encryption keys failed* |Your *Service Data Encryption Key* is incorrect. Review the [encryption key section in this article](#storsimple-service-data-encryption-key) for more details and help retrieving the correct key. |
-| |*Batch error* |It's possible that starting up all the internal infrastructure required to perform a migration runs into an issue. Multiple other services are involved in this process. These problems generally resolve themselves when you attempt to run the job again. |
-| |*StorSimple Manager encountered an internal error. Wait for a few minutes and then try the operation again. If the issue persists, contact Microsoft Support. (Error code: 1074161829)* |This generic error has multiple causes, but one possibility encountered is that the StorSimple device manager reached the limit of 50 appliances. Check if the most recently run jobs in the device manager have suddenly started to fail with this error, which would suggest this is the problem. The mitigation for this particular issue is to remove any offline StorSimple 8001 appliances created and used by the Data Manager Service. You can file a support ticket or delete them manually in the portal. Make sure to only delete offline 8001 series appliances. |
-|**Estimating Files** |*Clone volume job failed* |This error most likely indicates that you specified a backup that was somehow corrupted. The migration service can't mount or read it. You can try out the backup manually or open a support ticket. |
| |*Cannot proceed as volume is in non-NTFS format* |Only NTFS volumes without data deduplication enabled can be used by the migration service. If you have a differently formatted volume, like ReFS or a third-party format, the migration service won't be able to migrate this volume. See the [Known limitations](#known-limitations) section. |
| |*Contact support. No suitable partition found on the disk* |The StorSimple disk that is supposed to have the volume specified for migration doesn't appear to have a partition for said volume. That's unusual and can indicate corruption or a management misalignment. Your only option to investigate this issue further is to file a support ticket. |
-| |*Timed out* |The estimation phase failing with a timeout is typically an issue with either the StorSimple appliance, or the source Volume Backup being slow and sometimes even corrupt. If re-running the backup doesn't work, then filing a support ticket is your best course of action. |
| |*Could not find file &lt;path&gt; </br>Could not find a part of the path* |The job definition allows you to provide a source sub-path. This error is shown when that path doesn't exist. For instance: *\Share1 > \Share\Share1* </br> In this example you've specified *\Share1* as a sub-path in the source, mapping to another sub-path in the target. However, the source path doesn't exist (was it misspelled?). Note: Windows is case-preserving but not case-sensitive, so specifying *\Share1* and *\share1* is equivalent. Also: Target paths that don't exist are automatically created. |
-| |*This request is not authorized to perform this operation* |This error shows when the source StorSimple storage account or the target storage account with the Azure file share has a firewall setting enabled. You must allow traffic over the public endpoint and not restrict it with further firewall rules. Otherwise the Data Transformation Service will be unable to access either storage account, even if you authorized it. Disable any firewall rules and re-run the job. |
-|**Copying Files** |*The account being accessed does not support HTTP* |Disable internet routing on the target storage account or use the Microsoft routing endpoint. |
-| |*The specified share is full* |If the target is a premium Azure file share, ensure that you've provisioned sufficient capacity for the share. Temporary over-provisioning is a common practice. |
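
For the *This request is not authorized to perform this operation* error, the firewall change can be scripted. A minimal sketch, assuming the `Az.Storage` module; run it against both the source and target storage accounts, and revert the setting once the migration jobs finish:

```azurepowershell
# Temporarily allow traffic over the public endpoint (names are placeholders).
Update-AzStorageAccountNetworkRuleSet `
    -ResourceGroupName '<ResourceGroupName>' `
    -Name '<StorageAccountName>' `
    -DefaultAction Allow
```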
-
-### Item level errors
-
-During the copy phase of a migration job run, individual namespace items (files and folders) can encounter errors. The following table lists the most common errors and suggests mitigation options when possible.
-
-|Phase |Error |Mitigation |
-|-|--||
-|**Copy** |*-2146233088 </br>The server is busy.* |Rerun the job if there are too many failures. If there are only very few errors, you can try running the job again, but often a manual copy of the failed items can be faster. Then resume the migration by skipping to processing the next backup. |
-| |*-2146233088 </br>Operation could not be completed within the specified time.* |Rerun the job if there are too many failures. If there are only very few errors, you can try running the job again, but often a manual copy of the failed items can be faster. Then resume the migration by skipping to processing the next backup. |
-| |*Upload timed out or copy not started* |Rerun the job if there are too many failures. If there are only very few errors, you can try running the job again, but often a manual copy of the failed items can be faster. Then resume the migration by skipping to processing the next backup. |
-| |*-2146233029 </br>The operation was canceled.* |Rerun the job if there are too many failures. If there are only very few errors, you can try running the job again, but often a manual copy of the failed items can be faster. Then resume the migration by skipping to processing the next backup. |
-| |*1920 </br>The file cannot be accessed by the system.* |This is a common error when the migration engine encounters a reparse point, link, or junction. They are not supported. These types of files can't be copied. Review the [Known limitations](#known-limitations) section and the [File fidelity](#file-fidelity) section in this article. |
-| |*-2147024891 </br>Access is denied* |This is an error for files that are encrypted in a way that they can't be accessed on the disk. Files that can be read from disk but simply have encrypted content are not affected and can be copied. Your only option is to copy them manually. You can find such items by mounting the affected volume and running the following command: `get-childitem <path> [-Recurse] -Force -ErrorAction SilentlyContinue | Where-Object {$_.Attributes -ge "Encrypted"} | format-list fullname, attributes` |
| |*Not a valid Win32 FileTime. Parameter name: fileTime* |In this case, the file can be accessed but can't be evaluated for copy because a timestamp the migration engine depends on is either corrupted or was written by an application in an incorrect format. There's not much you can do, because you can't change the timestamp in the backup. If retaining this file is important, you can manually copy the latest version (from the last backup containing this file), fix the timestamp, and then move it to the target Azure file share. This option doesn't scale very well, but it's an option for high-value files where you want at least one version retained in your target. |
-| |*-2146232798 </br>Safe handle has been closed* |Often a transient error. Rerun the job if there are too many failures. If there are only very few errors, you can try running the job again, but often a manual copy of the failed items can be faster. Then resume the migration by skipping to processing the next backup. |
-| |*-2147024413 </br>Fatal device hardware error* |This is a rare error and not actually reported for a physical device, but rather the 8001 series virtualized appliances used by the migration service. The appliance ran into an issue. Files with this error won't stop the migration from proceeding to the next backup. That makes it hard for you to perform a manual copy or retry the backup that contains files with this error. If the files left behind are very important or there is a large number of files, you may need to start the migration of all backups again. Open a support ticket for further investigation. |
-|**Delete </br>(Mirror purging)** |*The specified directory is not empty.* |This error occurs when the migration mode is set to *mirror* and the process that removes items from the Azure file share ran into an issue that prevented it from deleting items. Deletion happens only in the live share, not from previous snapshots. The deletion is necessary because the affected files are not in the current backup and thus must be removed from the live share before the next snapshot. There are two options: Option 1: mount the target Azure file share and delete the files with this error manually. Option 2: you can ignore these errors and continue processing the next backup with an expectation that the target isn't identical to source and has some extra items that weren't in the original StorSimple backup. |
-| |*Bad request* |This error indicates that the source file has certain characteristics that couldn't be copied to the Azure file share. Most notably there could be invisible control characters in a file name or 1 byte of a double byte character in the file name or file path. You can use the copy logs to get path names, copy the files to a temporary location, rename the paths to remove the unsupported characters, and then robocopy again to the Azure file share. You can then resume the migration by skipping to the next backup to be processed. |
-
-## Next steps
-
-* Understand the flexibility of [cloud tiering](../file-sync/file-sync-cloud-tiering-overview.md) policies.
-* [Enable Azure Backup](../../backup/backup-afs.md#configure-backup-from-the-file-share-pane) on your Azure file shares to schedule snapshots and define backup retention schedules.
-* If you see in the Azure portal that some files are permanently not syncing, review the [Troubleshooting guide](/troubleshoot/azure/azure-storage/file-sync-troubleshoot?toc=/azure/storage/file-sync/toc.json) for steps to resolve these issues.
storage Storage How To Use Files Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-linux.md
Previously updated : 07/26/2024 Last updated : 08/05/2024
HTTP_ENDPOINT=$(az storage account show \
SMB_PATH=$(echo $HTTP_ENDPOINT | cut -c7-${#HTTP_ENDPOINT})$FILE_SHARE_NAME

if [ -z "$(grep $SMB_PATH\ $MNT_PATH /etc/fstab)" ]; then
- echo "$SMB_PATH $MNT_PATH cifs _netdev,nofail,credentials=$SMB_CREDENTIAL_FILE,serverino,nosharesock,actimeo=30" | sudo tee -a /etc/fstab >
+ echo "$SMB_PATH $MNT_PATH cifs _netdev,nofail,credentials=$SMB_CREDENTIAL_FILE,serverino,nosharesock,actimeo=30,mfsymlinks" | sudo tee -a /etc/fstab >
else
- echo "/etc/fstab was not modified to avoid conflicting entries as this Azure file share was already present. You may want to double check /etc/fstab to ensure the configuration is as desired."
+ echo "/etc/fstab was not modified to avoid conflicting entries as this Azure file share was already present. You might want to double check /etc/fstab to ensure the configuration is as desired."
fi

sudo mount -a
HTTP_ENDPOINT=$(az storage account show \
    --query "primaryEndpoints.file" --output tsv | tr -d '"')

SMB_PATH=$(echo $HTTP_ENDPOINT | cut -c7-$(expr length $HTTP_ENDPOINT))$FILE_SHARE_NAME
-echo "$FILE_SHARE_NAME -fstype=cifs,credentials=$SMB_CREDENTIAL_FILE :$SMB_PATH" > /etc/auto.fileshares
+echo "$FILE_SHARE_NAME -fstype=cifs,credentials=$SMB_CREDENTIAL_FILE,serverino,nosharesock,actimeo=30,mfsymlinks :$SMB_PATH" > /etc/auto.fileshares
echo "/fileshares /etc/auto.fileshares --timeout=60" > /etc/auto.master ```
update-manager Manage Pre Post Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/manage-pre-post-events.md
Title: Manage the pre and post (preview) maintenance configuration events in Azure Update Manager
+ Title: Manage the pre and post maintenance configuration events in Azure Update Manager
description: The article provides the steps to manage the pre and post maintenance events in Azure Update Manager. Last updated 07/24/2024
-# Manage pre and post events (preview) maintenance configuration events
+# Manage pre and post maintenance configuration events
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers :heavy_check_mark: Azure VMs.
-This article describes on how to register your subscription and manage pre and post events in Azure Update Manager.
-
-## Register your subscription for public preview
-
-To self-register your subscription for public preview, follow these steps:
-
-#### [Azure portal](#tab/portal)
-
-1. Sign in to the [Azure portal](https://portal.azure.com) and select **More services**.
-1. On the **All services** page, search for **Preview features**.
-1. On the **Preview Features** page, search and select **Pre and Post Events**.
-1. Select the feature and then select **Register** to register the subscription.
-
- :::image type="content" source="./media/tutorial-using-functions/register-feature.png" alt-text="Screenshot that shows how to register the preview feature." lightbox="./media/tutorial-using-functions/register-feature.png":::
-
-#### [Azure CLI](#tab/cli)
-
-```azurecli-interactive
-az feature register --name InGuestPatchPrePostMaintenanceActivity --namespace Microsoft.Maintenance
-```
-
-#### [PowerShell](#tab/ps)
-
-```azurepowershell-interactive
-Register-AzProviderFeature -FeatureName "InGuestPatchPrePostMaintenanceActivity" -ProviderNamespace "Microsoft.Maintenance"
-```
-
+This article describes how to manage pre and post events in Azure Update Manager.
## Manage pre and post events
update-manager Pre Post Events Schedule Maintenance Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/pre-post-events-schedule-maintenance-configuration.md
Title: Create the pre and post (preview) maintenance configuration events in Azure Update Manager
+ Title: Create the pre and post maintenance configuration events in Azure Update Manager
description: The article provides the steps to create the pre and post maintenance events in Azure Update Manager. Last updated 07/24/2024
zone_pivot_groups: create-pre-post-events-maintenance-configuration
-# Create pre and post events (preview)
+# Create pre and post events
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers :heavy_check_mark: Azure VMs.
update-manager Pre Post Scripts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/pre-post-scripts-overview.md
Title: An overview of pre and post events (preview) in your Azure Update Manager
+ Title: An overview of pre and post events in your Azure Update Manager
description: This article provides an overview on pre and post events and its requirements. Last updated 07/24/2024
-# About pre and post events (preview)
+# About pre and post events
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers.
update-manager Tutorial Using Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/tutorial-using-functions.md
Title: Create pre and post events (preview) using Azure Functions.
+ Title: Create pre and post events using Azure Functions.
description: In this tutorial, you learn how to create the pre and post events using Azure Functions. Last updated 07/24/2024
#Customer intent: As an IT admin, I want create pre and post events using Azure Functions.
-# Tutorial: Create pre and post events (preview) using Azure Functions
+# Tutorial: Create pre and post events using Azure Functions
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure VMs :heavy_check_mark: Azure Arc-enabled servers.
update-manager Tutorial Webhooks Using Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/tutorial-webhooks-using-runbooks.md
Title: Create pre and post events (preview) using a webhook with Automation runbooks.
+ Title: Create pre and post events using a webhook with Automation runbooks.
description: In this tutorial, you learn how to create the pre and post events using webhook with Automation runbooks. Last updated 07/24/2024
#Customer intent: As an IT admin, I want create pre and post events using a webhook with Automation runbooks.
-# Tutorial: Create pre and post events (preview) using a webhook with Automation
+# Tutorial: Create pre and post events using a webhook with Automation
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure VMs :heavy_check_mark: Azure Arc-enabled servers.
update-manager Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/whats-new.md
Last updated 07/24/2024
[Azure Update Manager](overview.md) helps you manage and govern updates for all your machines. You can monitor Windows and Linux update compliance across your deployments in Azure, on-premises, and on the other cloud platforms from a single dashboard. This article summarizes new releases and features in Azure Update Manager.
+## August 2024
+
+### Pre and Post events
+
+General Availability: Azure Update Manager now supports creating and managing pre and post events on scheduled maintenance configurations. [Learn more](pre-post-scripts-overview.md).
+ ## July 2024 ### Support for Windows IoT Enterprise on Arc enabled IaaS VMs
virtual-desktop Configure Rdp Shortpath https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/configure-rdp-shortpath.md
Last updated 06/18/2024
# Configure RDP Shortpath for Azure Virtual Desktop > [!IMPORTANT]
-> - Using RDP Shortpath for public networks via TURN for Azure Virtual Desktop is currently in PREVIEW. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
->
-> - RDP Shortpath is only available in the Azure public cloud.
+> RDP Shortpath for public networks via TURN for Azure Virtual Desktop is only available in the Azure public cloud.
Users can connect to a remote session from Azure Virtual Desktop using the Remote Desktop Protocol (RDP) with a UDP or TCP-based transport. RDP Shortpath establishes a UDP-based transport between Windows App or the Remote Desktop app on a supported local device and the session host.
There are four options for RDP Shortpath that provide flexibility for how you wa
- **RDP Shortpath for public networks with ICE/STUN**: A *direct* UDP connection between a client device and session host using a public connection. ICE/STUN is used to discover available IP addresses and a dynamic port that can be used for a connection. The RDP Shortpath listener and an inbound port aren't required. The port range is configurable. -- **RDP Shortpath for public networks via TURN** (preview): An *indirect* UDP connection between a client device and session host using a public connection where TURN relays traffic through an intermediate server between a client and session host. An example of when you use this option is if a connection uses Symmetric NAT. A dynamic port is used for a connection; the port range is configurable. For a list of Azure regions that TURN is available, see [supported Azure regions with TURN availability](rdp-shortpath.md#turn-availability-preview). The connection from the client device must also be within a supported location. The RDP Shortpath listener and an inbound port aren't required.
+- **RDP Shortpath for public networks via TURN**: An *indirect* UDP connection between a client device and session host using a public connection where TURN relays traffic through an intermediate server between a client and session host. An example of when you use this option is if a connection uses Symmetric NAT. A dynamic port is used for a connection; the port range is configurable. For a list of Azure regions that TURN is available, see [supported Azure regions with TURN availability](rdp-shortpath.md#turn-availability). The connection from the client device must also be within a supported location. The RDP Shortpath listener and an inbound port aren't required.
Which of the four options your client devices can use is also dependent on their network configuration. To learn more about how RDP Shortpath works, together with some example scenarios, see [RDP Shortpath](rdp-shortpath.md).
Here are the default behaviors for each option and what you need to configure:
| RDP Shortpath for managed networks | UDP and TCP are enabled in Windows by default.<br /><br />You need to enable the RDP Shortpath listener on session hosts using Microsoft Intune or Group Policy, and allow an inbound port to accept connections. | Default (enabled) | UDP and TCP are enabled in Windows by default. |
| RDP Shortpath for managed networks with ICE/STUN | UDP and TCP are enabled in Windows by default.<br /><br />You don't need any extra configuration, but you can limit the port range used. | Default (enabled) | UDP and TCP are enabled in Windows by default. |
| RDP Shortpath for public networks with ICE/STUN | UDP and TCP are enabled in Windows by default.<br /><br />You don't need any extra configuration, but you can limit the port range used. | Default (enabled) | UDP and TCP are enabled in Windows by default. |
-| RDP Shortpath for public networks via TURN | UDP and TCP are enabled in Windows by default.<br /><br />You don't need any extra configuration, but you can limit the port range used. | Default (disabled) | UDP and TCP are enabled in Windows by default. |
+| RDP Shortpath for public networks via TURN | UDP and TCP are enabled in Windows by default.<br /><br />You don't need any extra configuration, but you can limit the port range used. | Default (enabled) | UDP and TCP are enabled in Windows by default. |
## Prerequisites
Here's how to configure RDP Shortpath in the host pool networking settings using
:::image type="content" source="media/configure-rdp-shortpath/rdp-shortpath-host-pool-configuration.png" alt-text="A screenshot showing the RDP Shortpath tab of a host pool's networking properties." lightbox="media/configure-rdp-shortpath/rdp-shortpath-host-pool-configuration.png":::
-1. For each option, select a value from the drop-down each based on your requirements. **Default** corresponds to **Enabled** for each option, except **RDP Shortpath for public networks via TURN**, which is **Disabled** during its preview.
+1. For each option, select a value from the drop-down each based on your requirements. **Default** corresponds to **Enabled** for each option.
1. Select **Save**.
Here's how to configure RDP Shortpath in the host pool networking settings using
RelayUdp : Default ```
- The available PowerShell parameters for RDP Shortpath map to the options as follows. Valid values for each of these parameters are **Default**, **Enabled**, or **Disabled**. Default corresponds to what Microsoft sets it to, in this case **Enabled** for each option, except **RDP Shortpath for public networks via TURN**, which is **Disabled** during its preview.
+ The available PowerShell parameters for RDP Shortpath map to the options as follows. Valid values for each of these parameters are **Default**, **Enabled**, or **Disabled**. Default corresponds to what Microsoft sets it to, in this case **Enabled** for each option.
| PowerShell Parameter | RDP Shortpath option | 'Default' meaning |
|--|--|--|
| ManagedPrivateUdp | RDP Shortpath for managed networks | Enabled |
| DirectUdp | RDP Shortpath for managed networks with ICE/STUN | Enabled |
| PublicUdp | RDP Shortpath for public networks with ICE/STUN | Enabled |
- | RelayUdp | RDP Shortpath for public networks via TURN | Disabled (during preview) |
+ | RelayUdp | RDP Shortpath for public networks via TURN | Enabled |
3. Use the `Update-AzWvdHostPool` cmdlet with the following examples to configure RDP Shortpath.
Here's how to configure RDP Shortpath in the host pool networking settings using
Update-AzWvdHostPool @parameters ```
- - To only use RDP Shortpath for public networks via TURN and disable the other options, run the following commands. During the preview, the default value for TURN corresponds to **Disabled**, so it explicitly needs to be set to **Enabled**.
+ - To only use RDP Shortpath for public networks via TURN and disable the other options, run the following commands.
```azurepowershell $parameters = @{
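    # Assumption: the collapsed block completes with the four UDP options from
    # the table above; the host pool name and resource group are placeholders.
    Name              = '<HostPoolName>'
    ResourceGroupName = '<ResourceGroupName>'
    ManagedPrivateUdp = 'Disabled'
    DirectUdp         = 'Disabled'
    PublicUdp         = 'Disabled'
    RelayUdp          = 'Enabled'
}

Update-AzWvdHostPool @parameters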
Here's how to configure RDP Shortpath in the host pool networking settings using
-> [!IMPORTANT]
-> For connections using TURN, during the preview TURN is only available for connections to session hosts in a validation host pool. To configure your host pool as a validation environment, see [Define your host pool as a validation environment](create-validation-host-pool.md#define-your-host-pool-as-a-validation-host-pool).
## Check that UDP is enabled on Windows client devices
virtual-desktop Rdp Shortpath https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/rdp-shortpath.md
Last updated 06/17/2024
# RDP Shortpath for Azure Virtual Desktop
-> [!IMPORTANT]
-> Using RDP Shortpath for public networks with TURN for Azure Virtual Desktop is currently in PREVIEW. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- RDP Shortpath establishes a direct UDP-based transport between Windows App or the Remote Desktop app on a supported local device and the session host in Azure Virtual Desktop. By default, the Remote Desktop Protocol (RDP) tries to establish a remote session using UDP and uses a TCP-based reverse connect transport as a fallback connection mechanism. UDP-based transport offers better connection reliability and more consistent latency. TCP-based reverse connect transport provides the best compatibility with various networking configurations and has a high success rate for establishing RDP connections.
RDP Shortpath can be used in two ways:
1. A *direct* UDP connection using the Simple Traversal Underneath NAT (STUN) protocol between a client and session host.
- 1. An *indirect* UDP connection using the Traversal Using Relay NAT (TURN) protocol with a relay between a client and session host. This is in preview.
+ 1. An *indirect* UDP connection using the Traversal Using Relay NAT (TURN) protocol with a relay between a client and session host.
The transport used for RDP Shortpath is based on the [Universal Rate Control Protocol (URCP)](https://www.microsoft.com/research/publication/urcp-universal-rate-control-protocol-for-real-time-communication-applications/). URCP enhances UDP with active monitoring of the network conditions and provides fair and full link utilization. URCP operates at low delay and loss levels as needed. > [!IMPORTANT]
-> - During the preview, TURN is only available for connections to session hosts in a validation host pool. To configure your host pool as a validation environment, see [Define your host pool as a validation environment](create-validation-host-pool.md#define-your-host-pool-as-a-validation-host-pool).
->
-> - RDP Shortpath for public networks with TURN is only available in the Azure public cloud.
+> RDP Shortpath for public networks with TURN is only available in the Azure public cloud.
## Key benefits
If your environment uses Symmetric NAT, which is the mapping of a single private
Where RDP Shortpath for both managed networks and public networks is available to users, the first algorithm found will be used. The user will use whichever connection gets established first for that session. For more information, see [Example scenarios](#example-scenarios).
-#### TURN availability (preview)
+#### TURN availability
TURN is available in the following Azure regions:
virtual-desktop Whats New Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-insights.md
When one of the numbers is increased, all numbers after it must change, too. One
In this update, we made the following change: -- Connection reliability tab now generally available.
+- Connection reliability is generally available.
## Version 3.4.0
In this update, we made the following change:
In this update, we made the following changes: -- Added HCI core count.-- Updated reliability of users per core calculation.
+- Added Azure Stack HCI core count.
+- Updated the reliability of the calculation for users per core.
## Version 3.3.1
In this update, we made the following changes:
In this update, we made the following change: -- Introduced previews for Connection Reliability and Autoscale Reporting.
+- Introduced previews for connection reliability and autoscale reporting.
## Version 3.2.2
virtual-desktop Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new.md
Previously updated : 07/12/2024 Last updated : 07/31/2024 # What's new in Azure Virtual Desktop?
Make sure to check back here often to keep up with new updates.
> [!TIP] > See [What's new in documentation](whats-new-documentation.md), where we highlight new and updated articles for Azure Virtual Desktop.
+## July 2024
+
+Here's what changed in July 2024:
+
+### New Teams available on Windows Enterprise multi-session images with M365 apps pre-installed
+
+Our Windows Enterprise multi-session images with Microsoft 365 apps are updated with the new Teams app preinstalled. Users accessing newly provisioned session hosts with the latest images, updated in late July, get the new experience. Learn more about [What's changing in the new Microsoft Teams](/microsoftteams/new-teams-whats-changing).
+
+Learn more about Windows Enterprise multi-session in our [FAQ](windows-multisession-faq.yml).
+ ## June 2024 Here's what changed in June 2024:
virtual-machine-scale-sets Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/quick-create-portal.md
# Quickstart: Create a Virtual Machine Scale Set in the Azure portal
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
- **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Uniform scale sets > [!NOTE]
First, create a public Standard Load Balancer by using the portal. The name and
![Create a load balancer](./media/virtual-machine-scale-sets-create-portal/load-balancer.png) ## Create Virtual Machine Scale Set
-You can deploy a scale set with a Windows Server image or Linux image such as RHEL, CentOS, Ubuntu, or SLES.
+You can deploy a scale set with a Windows Server image or Linux image such as RHEL, Ubuntu, or SLES.
1. Type **Scale set** in the search box. In the results, under **Marketplace**, select **Virtual Machine Scale Sets**. Select **Create** on the **Virtual Machine Scale Sets** page, which opens the **Create a Virtual Machine Scale Set** page. 1. In the **Basics** tab, under **Project details**, make sure the correct subscription is selected and select *myVMSSResourceGroup* from resource group list.
virtual-machine-scale-sets Spot Priority Mix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/spot-priority-mix.md
# Spot Priority Mix for high availability and cost savings
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
- **Applies to:** :heavy_check_mark: Flexible scale sets Azure allows you to have the flexibility of running a mix of uninterruptible standard VMs and interruptible Spot VMs for Virtual Machine Scale Set deployments. You're able to deploy this Spot Priority Mix using Flexible orchestration to easily balance between high-capacity availability and lower infrastructure costs according to your workload requirements. This feature allows you to easily manage your scale set capability to achieve the following goals:
az vmss create -n myScaleSet \
--regular-priority-percentage 50 \ --orchestration-mode flexible \ --instance-count 4 \
- --image CentOS85Gen2 \
+ --image Ubuntu2204 \
--priority Spot \ --eviction-policy Deallocate \ --single-placement-group False \
virtual-machine-scale-sets Tutorial Autoscale Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-autoscale-cli.md
# Tutorial: Automatically scale a Virtual Machine Scale Set with the Azure CLI
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
- When you create a scale set, you define the number of VM instances that you wish to run. As your application demand changes, you can automatically increase or decrease the number of VM instances. The ability to autoscale lets you keep up with customer demand or respond to application performance changes throughout the lifecycle of your app. In this tutorial you learn how to: > [!div class="checklist"]
sudo apt-get update
sudo apt-get -y install stress
sudo stress --cpu 10 --timeout 420 &
```
-# [RHEL, CentOS](#tab/redhat)
+# [RHEL](#tab/redhat)
```bash
sudo dnf install stress-ng
sudo apt-get -y install stress
sudo stress --cpu 10 --timeout 420 &
```
-# [RHEL, CentOS](#tab/redhat)
+# [RHEL](#tab/redhat)
```bash
sudo dnf install stress-ng
virtual-machines Enable Nvme Interface https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/enable-nvme-interface.md
# Supported OS images for remote NVMe
-> [!NOTE]
-> This article references CentOS, a Linux distribution that reached the end of support. Consider your use and plan accordingly. For more information, see the [guidance for CentOS end of support](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
- The following lists provide up-to-date information on which OS images are tagged as supported for remote NVM Express (NVMe). > [!IMPORTANT]
For more information about enabling the NVMe interface on virtual machines creat
| Almalinux 8.x (currently 8.7) | almalinux: almalinux:8-gen2: latest | | Almalinux 9.x (currently 9.1) | almalinux: almalinux:9-gen2: latest | | Debian 11 | Debian: debian-11:11-gen2: latest |
-| CentOS 7.9 | openlogic: centos:7_9-gen2: latest |
| RHEL 7.9 | RedHat: RHEL:79-gen2: latest | | RHEL 8.6 | RedHat: RHEL:86-gen2: latest | | RHEL 8.7 | RedHat: RHEL:87-gen2: latest |
virtual-machines Enable Infiniband https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/enable-infiniband.md
# Enable InfiniBand
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
- **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets [RDMA capable](../sizes-hpc.md#rdma-capable-instances) [HB-series](../sizes-hpc.md) and [N-series](../sizes-gpu.md) VMs communicate over the low latency and high bandwidth InfiniBand network. The RDMA capability over such an interconnect is critical to boost the scalability and performance of distributed-node HPC and AI workloads. The InfiniBand enabled HB-series and N-series VMs are connected in a non-blocking fat tree with a low-diameter design for optimized and consistent RDMA performance.
To add the VM extension to a VM, you can use [Azure PowerShell](/powershell/azur
### Linux
-The [OFED drivers for Linux](https://www.mellanox.com/products/infiniband-drivers/linux/mlnx_ofed) can be installed with the example below. Though the example here is for RHEL/CentOS, but the steps are general and can be used for any compatible Linux operating system such as Ubuntu (18.04, 19.04, 20.04) and SLES (12 SP4+ and 15). More examples for other distros are on the [azhpc-images repo](https://github.com/Azure/azhpc-images/blob/master/ubuntu/ubuntu-20.x/ubuntu-20.04-hpc/install_mellanoxofed.sh). The inbox drivers also work as well, but the Mellanox OFED drivers provide more features.
+The [OFED drivers for Linux](https://www.mellanox.com/products/infiniband-drivers/linux/mlnx_ofed) can be installed with the example below. Though the example here is for RHEL, the steps are general and can be used for any compatible Linux operating system, such as Ubuntu (18.04, 19.04, 20.04) and SLES (12 SP4+ and 15). More examples for other distros are on the [azhpc-images repo](https://github.com/Azure/azhpc-images/blob/master/ubuntu/ubuntu-20.x/ubuntu-20.04-hpc/install_mellanoxofed.sh). The inbox drivers also work, but the Mellanox OFED drivers provide more features.
```bash
MLNX_OFED_DOWNLOAD_URL=http://content.mellanox.com/ofed/MLNX_OFED-5.0-2.1.8.0/MLNX_OFED_LINUX-5.0-2.1.8.0-rhel7.7-x86_64.tgz
tar zxvf MLNX_OFED_LINUX-5.0-2.1.8.0-rhel7.7-x86_64.tgz
KERNEL=( $(rpm -q kernel | sed 's/kernel\-//g') )
KERNEL=${KERNEL[-1]}
# Uncomment the lines below if you are running this on a VM
-#RELEASE=( $(cat /etc/centos-release | awk '{print $4}') )
-#yum -y install http://olcentgbl.trafficmanager.net/centos/${RELEASE}/updates/x86_64/kernel-devel-${KERNEL}.rpm
+#RELEASE=( $(cat /etc/redhat-release | awk '{print $4}') )
+#yum -y install http://olcentgbl.trafficmanager.net/redhat/${RELEASE}/updates/x86_64/kernel-devel-${KERNEL}.rpm
sudo yum install -y kernel-devel-${KERNEL}
sudo ./MLNX_OFED_LINUX-5.0-2.1.8.0-rhel7.7-x86_64/mlnxofedinstall --kernel $KERNEL --kernel-sources /usr/src/kernels/${KERNEL} --add-kernel-support --skip-repo
```
sudo ./MLNX_OFED_LINUX-5.0-2.1.8.0-rhel7.7-x86_64/mlnxofedinstall --kernel $KERN
For Windows, download and install the [Mellanox OFED for Windows drivers](https://www.mellanox.com/products/adapter-software/ethernet/windows/winof-2). ## Enable IP over InfiniBand (IB)
-If you plan to run MPI jobs, you typically don't need IPoIB. The MPI library will use the verbs interface for IB communication (unless you explicitly use the TCP/IP channel of MPI library). But if you have an app that uses TCP/IP for communication and you want to run over IB, you can use IPoIB over the IB interface. Use the following commands (for RHEL/CentOS) to enable IP over InfiniBand.
+If you plan to run MPI jobs, you typically don't need IPoIB. The MPI library will use the verbs interface for IB communication (unless you explicitly use the TCP/IP channel of the MPI library). But if you have an app that uses TCP/IP for communication and you want to run over IB, you can use IPoIB over the IB interface. Use the following commands (for RHEL) to enable IP over InfiniBand.
> [!IMPORTANT] > To avoid issues, ensure you aren't running older versions of Microsoft Azure Linux Agent (waagent). We recommend using at least [version 2.4.0.2](https://github.com/Azure/WALinuxAgent/releases/tag/v2.4.0.2) before enabling IP over IB.
virtual-machines Hpc Compute Infiniband Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/hpc-compute-infiniband-linux.md
# InfiniBand Driver Extension for Linux
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
- This extension installs InfiniBand OFED drivers on InfiniBand and SR-IOV-enabled ('r' sizes) [HB-series](../sizes-hpc.md) and [N-series](../sizes-gpu.md) VMs running Linux. Depending on the VM family, the extension installs the appropriate drivers for the Connect-X NIC. It does not install the InfiniBand ND drivers on the non-SR-IOV enabled [HB-series](../sizes-hpc.md) and [N-series](../sizes-gpu.md) VMs. Instructions on manual installation of the OFED drivers are available in [Enable InfiniBand on HPC VMs](enable-infiniband.md#manual-installation).
This extension supports the following OS distros, depending on driver support fo
| Distribution | Version | InfiniBand NIC drivers | |||| | Ubuntu | 18.04 LTS, 20.04 LTS, 22.04 LTS | CX3-Pro, CX5, CX6 |
-| CentOS | 7.4, 7.5, 7.6, 7.7, 7.8, 7.9, 8.1, 8,2 | CX3-Pro, CX5, CX6 |
| Red Hat Enterprise Linux | 7.4, 7.5, 7.6, 7.7, 7.8, 7.9, 8.1, 8,2 | CX3-Pro, CX5, CX6 | > [!IMPORTANT]
virtual-machines Hb Hc Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hb-hc-known-issues.md
Title: Troubleshooting known issues with HPC and GPU VMs - Azure Virtual Machines | Microsoft Docs
-description: Learn about troubleshooting known issues with HPC and GPU VM sizes in Azure.
+description: Learn about troubleshooting known issues with HPC and GPU virtual machine (VM) sizes in Azure.
# Known issues with HB-series and N-series VMs
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
- **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets This article attempts to list recent common issues and their solutions when using the [HB-series](sizes-hpc.md) and [N-series](sizes-gpu.md) HPC and GPU VMs. ## Cache topology on Standard_HB120rs_v3
-`lstopo` displays incorrect cache topology on the Standard_HB120rs_v3 VM size. It may display that there's only 32 MB L3 per NUMA. However in practice, there is indeed 120 MB L3 per NUMA as expected since the same 480 MB of L3 to the entire VM is available as with the other constrained-core HBv3 VM sizes. This is a cosmetic error in displaying the correct value, which should not impact workloads.
+`lstopo` displays incorrect cache topology on the Standard_HB120rs_v3 VM size. It may display that there's only 32 MB L3 per nonuniform memory access (NUMA) node. However, in practice, there's indeed 120 MB L3 per NUMA as expected since the same 480 MB of L3 to the entire VM is available as with the other constrained-core HBv3 VM sizes. This incorrect display is a cosmetic error and shouldn't affect workloads.
## qp0 Access Restriction
-To prevent low-level hardware access that can result in security vulnerabilities, Queue Pair 0 is not accessible to guest VMs. This should only affect actions typically associated with administration of the ConnectX InfiniBand NIC, and running some InfiniBand diagnostics like ibdiagnet, but not end-user applications.
+To prevent low-level hardware access that can result in security vulnerabilities, Queue Pair 0 isn't accessible to guest VMs. This restriction should only affect actions typically associated with administration of the ConnectX InfiniBand network interface card (NIC) and running some InfiniBand diagnostics like ibdiagnet, but not end-user applications.
## MOFED installation on Ubuntu
-On Ubuntu-18.04 based marketplace VM images with kernels version `5.4.0-1039-azure #42` and newer, some older Mellanox OFED are incompatible causing an increase in VM boot time up to 30 minutes in some cases. This has been reported for both Mellanox OFED versions 5.2-1.0.4.0 and 5.2-2.2.0.0. The issue is resolved with Mellanox OFED 5.3-1.0.0.1.
-If it is necessary to use the incompatible OFED, a solution is to use the **Canonical:UbuntuServer:18_04-lts-gen2:18.04.202101290** marketplace VM image, or older and not to update the kernel.
+On Ubuntu-18.04 based marketplace VM images with kernels version `5.4.0-1039-azure #42` and newer, some older Mellanox OFED are incompatible causing an increase in VM boot time up to 30 minutes in some cases. This issue is reported in both Mellanox OFED versions 5.2-1.0.4.0 and 5.2-2.2.0.0. The issue is resolved with Mellanox OFED 5.3-1.0.0.1.
+If it's necessary to use the incompatible OFED, a solution is to use the **Canonical:UbuntuServer:18_04-lts-gen2:18.04.202101290** marketplace VM image, or older and not to update the kernel.
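As an illustrative sketch, pinning that image at deployment could look like the following; the resource group, VM name, and size are placeholder values:

```bash
# Hypothetical example: pin the known-compatible Ubuntu 18.04 image
# (URN from this article) instead of taking the latest marketplace version.
az vm create \
  --resource-group myResourceGroup \
  --name myHpcVM \
  --size Standard_HB120rs_v3 \
  --image Canonical:UbuntuServer:18_04-lts-gen2:18.04.202101290 \
  --generate-ssh-keys
```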
## Accelerated Networking on HB, HC, HBv2, HBv3, HBv4, HX, NDv2 and NDv4
-[Azure Accelerated Networking](https://azure.microsoft.com/blog/maximize-your-vm-s-performance-with-accelerated-networking-now-generally-available-for-both-windows-and-linux/) is now available on the RDMA and InfiniBand capable and SR-IOV enabled VM sizes [HB](hb-series.md), [HC](hc-series.md), [HBv2](hbv2-series.md), [HBv3](hbv3-series.md), [HBv4](hbv4-series.md), [HX](hx-series.md), [NDv2](ndv2-series.md) and [NDv4](nda100-v4-series.md). This capability now allows enhanced throughout (up to 30 Gbps) and latencies over the Azure Ethernet network. Though this is separate from the RDMA capabilities over the InfiniBand network, some platform changes for this capability may impact behavior of certain MPI implementations when running jobs over InfiniBand. Specifically the InfiniBand interface on some VMs may have a slightly different name (mlx5_1 as opposed to earlier mlx5_0). This may require tweaking of the MPI command lines especially when using the UCX interface (commonly with OpenMPI and HPC-X).
-
-The simplest solution currently is to use the latest HPC-X on the CentOS-HPC VM images where we rename the InfiniBand and Accelerated Networking interfaces accordingly or to run the [script](https://github.com/Azure/azhpc-images/blob/master/common/install_azure_persistent_rdma_naming.sh) to rename the InfiniBand interface.
+[Azure Accelerated Networking](https://azure.microsoft.com/blog/maximize-your-vm-s-performance-with-accelerated-networking-now-generally-available-for-both-windows-and-linux/) is now available on the RDMA and InfiniBand capable, SR-IOV enabled VM sizes [HB](hb-series.md), [HC](hc-series.md), [HBv2](hbv2-series.md), [HBv3](hbv3-series.md), [HBv4](hbv4-series.md), [HX](hx-series.md), [NDv2](ndv2-series.md), and [NDv4](nda100-v4-series.md). This capability allows enhanced throughput (up to 30 Gbps) and latencies over the Azure Ethernet network. Though this enhanced throughput is separate from the RDMA capabilities over the InfiniBand network, some platform changes for this capability could affect behavior of certain MPI implementations when running jobs over InfiniBand. Specifically, the InfiniBand interface on some VMs may have a slightly different name (mlx5_1 as opposed to earlier mlx5_0). This issue may require tweaking of the MPI command lines, especially when using the UCX interface (commonly with OpenMPI and HPC-X).
-More details on this are available on this [TechCommunity article](https://techcommunity.microsoft.com/t5/azure-compute/accelerated-networking-on-hb-hc-and-hbv2/ba-p/2067965) with instructions on how to address any observed issues.
+For more information on this issue, see the [TechCommunity article with instructions on how to address any observed issues](https://techcommunity.microsoft.com/t5/azure-compute/accelerated-networking-on-hb-hc-and-hbv2/ba-p/2067965).
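As a hedged sketch of the kind of tweaking involved, you can first list the InfiniBand devices and then point UCX at the right one; the device name and benchmark binary here are illustrative:

```bash
# List InfiniBand devices visible in the VM; on Accelerated Networking
# enabled sizes the IB device may appear as mlx5_1 instead of mlx5_0.
ibv_devinfo -l

# Example: tell UCX explicitly which device and port to use when launching
# an MPI job (OpenMPI/HPC-X style; ./osu_latency is a placeholder binary).
mpirun -np 2 -x UCX_NET_DEVICES=mlx5_1:1 ./osu_latency
```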
## InfiniBand driver installation on non-SR-IOV VMs
-Currently H16r, H16mr, and NC24r are not SR-IOV enabled. For more information on the InfiniBand stack bifurcation, see [Azure VM sizes - HPC](sizes-hpc.md#rdma-capable-instances).
-InfiniBand can be configured on the SR-IOV enabled VM sizes with the OFED drivers while the non-SR-IOV VM sizes require ND drivers. This IB support is available appropriately for [CentOS, RHEL, and Ubuntu](configure.md).
+Currently H16r, H16mr, and NC24r aren't SR-IOV enabled. For more information on the InfiniBand stack bifurcation, see [Azure VM sizes - HPC](sizes-hpc.md#rdma-capable-instances).
+InfiniBand can be configured on the SR-IOV enabled VM sizes with the OFED drivers while the non-SR-IOV VM sizes require ND drivers. This IB support is available appropriately for [RHEL and Ubuntu](configure.md).
## Duplicate MAC with cloud-init with Ubuntu on H-series and N-series VMs
-There's a known issue with cloud-init on Ubuntu VM images as it tries to bring up the IB interface. This can happen either on VM reboot or when trying to create a VM image after generalization. The VM boot logs may show an error like so:
+There's a known issue with cloud-init on Ubuntu VM images as it tries to bring up the IB interface. This issue can happen either on VM reboot or when trying to create a VM image after generalization. The VM boot logs may show an error like so:
```output
"Starting Network Service...RuntimeError: duplicate mac found! both 'eth1' and 'ib0' have mac".
```
-This 'duplicate MAC with cloud-init on Ubuntu' is a known issue. This will be resolved in newer kernels. If this issue is encountered, the workaround is:
+This 'duplicate MAC with cloud-init on Ubuntu' is a known issue that will be resolved in newer kernels. If this issue is encountered, the workaround is:
1) Deploy the (Ubuntu 18.04) marketplace VM image
2) Install the necessary software packages to enable IB ([instruction here](https://techcommunity.microsoft.com/t5/azure-compute/configuring-infiniband-for-ubuntu-hpc-and-gpu-vms/ba-p/1221351))
-3) Edit waagent.conf to change EnableRDMA=y
-4) Disable networking in cloud-init
+3) Edit waagent.conf and set EnableRDMA=y
+4) Disable networking in cloud-init:
```bash
echo network: {config: disabled} | sudo tee /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg
```
-5) Edit netplan's networking configuration file generated by cloud-init to remove the MAC
+5) To remove the MAC, edit netplan's networking configuration file generated by cloud-init:
```bash sudo bash -c "cat > /etc/netplan/50-cloud-init.yaml" <<'EOF' network:
This 'duplicate MAC with cloud-init on Ubuntu" is a known issue. This will be re
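A minimal sketch of such a netplan configuration, assuming `eth0` is the primary interface, might look like the following; verify it against the file cloud-init generated on your VM and keep everything except the MAC match:

```bash
# Minimal sketch only; confirm against your generated 50-cloud-init.yaml.
sudo bash -c "cat > /etc/netplan/50-cloud-init.yaml" <<'EOF'
network:
    version: 2
    ethernets:
        eth0:
            dhcp4: true
EOF
```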
## DRAM on HB-series VMs
-HB-series VMs can only expose 228 GB of RAM to guest VMs at this time. Similarly, 458 GB on HBv2 and 448 GB on HBv3 VMs. This is due to a known limitation of Azure hypervisor to prevent pages from being assigned to the local DRAM of AMD CCX's (NUMA domains) reserved for the guest VM.
+HB-series VMs can only expose 228 GB of RAM to guest VMs at this time. Similarly, HBv2 VMs expose 458 GB and HBv3 VMs expose 448 GB. This behavior is due to a known limitation of the Azure hypervisor that prevents pages from being assigned to the local DRAM of AMD CCXs (NUMA domains) reserved for the guest VM.
## GSS Proxy
-GSS Proxy has a known bug in CentOS/RHEL 7.5 that can manifest as a significant performance and responsiveness penalty when used with NFS. This can be mitigated with:
+GSS Proxy has a known bug in RHEL 7.5 that can manifest as a significant performance and responsiveness penalty when used with NFS. This bug can be mitigated with:
```bash
sudo sed -i 's/GSS_USE_PROXY="yes"/GSS_USE_PROXY="no"/g' /etc/sysconfig/nfs
```
## Cache Cleaning
-On HPC systems, it is often useful to clean up the memory after a job has finished before the next user is assigned the same node. After running applications in Linux you may find that your available memory reduces while your buffer memory increases, despite not running any applications.
+On HPC systems, it's often useful to clean up the memory after a job finishes before the next user is assigned the same node. After running applications in Linux, you may find that your available memory reduces while your buffer memory increases, despite not running any applications.
![Screenshot of command prompt before cleaning](./media/hpc/cache-cleaning-1.png)
-Using `numactl -H` will show which NUMA node(s) the memory is buffered with (possibly all). In Linux, users can clean the caches in three ways to return buffered or cached memory to 'free'. You need to be root or have sudo permissions.
+Using `numactl -H` shows which NUMA nodes the memory is buffered with (possibly all). In Linux, users can clean the caches in three ways to return buffered or cached memory to 'free'. You need to be root or have sudo permissions.
```bash
sudo echo 1 > /proc/sys/vm/drop_caches [frees page-cache]
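# The remaining two ways follow the standard Linux drop_caches interface;
# these values come from the kernel documentation, not from this article:
sudo echo 2 > /proc/sys/vm/drop_caches [frees slab objects, for example dentries and inodes]
sudo echo 3 > /proc/sys/vm/drop_caches [cleans page-cache and slab objects]
```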
virtual-machines Hbv2 Series Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hbv2-series-overview.md
Title: HBv2-series VM overview - Azure Virtual Machines | Microsoft Docs
+ Title: HBv2-series virtual machine (VM) overview - Azure Virtual Machines | Microsoft Docs
description: Learn about the HBv2-series VM size in Azure.
# HBv2 series virtual machine overview
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
- **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets. Maximizing high performance compute (HPC) application performance on AMD EPYC requires a thoughtful approach to memory locality and process placement. Below we outline the AMD EPYC architecture and our implementation of it on Azure for HPC applications. We use the term **pNUMA** to refer to a physical NUMA domain, and **vNUMA** to refer to a virtualized NUMA domain.
NUMA domains within VM OS = 4
C-states = Enabled ```
-As a result, the server boots with 4 NUMA domains (2 per socket) each 32 cores in size. Each NUMA has direct access to 4 channels of physical DRAM operating at 3200 MT/s.
+As a result, the server boots with 4 NUMA domains (2 per socket). Each domain is 32 cores in size. Each NUMA domain has direct access to 4 channels of physical DRAM operating at 3,200 MT/s.
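To inspect this layout from inside the VM, `numactl` (if installed) reports the node count, CPU assignments, and per-node memory:

```bash
# Show NUMA nodes, their CPU lists, and free/total memory per node.
numactl -H
```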
To provide room for the Azure hypervisor to operate without interfering with the VM, we reserve 8 physical cores per server.
Process pinning works on HBv2-series VMs because we expose the underlying silico
| MPI Support | HPC-X, Intel MPI, OpenMPI, MVAPICH2, MPICH, Platform MPI |
| Additional Frameworks | UCX, libfabric, PGAS |
| Azure Storage Support | Standard and Premium Disks (maximum 8 disks) |
-| OS Support for SRIOV RDMA | CentOS/RHEL 7.9+, Ubuntu 18.04+, SLES 12 SP5+, WinServer 2016+ |
+| OS Support for SRIOV RDMA | RHEL 7.9+, Ubuntu 18.04+, SLES 12 SP5+, WinServer 2016+ |
| Orchestrator Support | CycleCloud, Batch, AKS; [cluster configuration options](sizes-hpc.md#cluster-configuration-options) | > [!NOTE]
virtual-machines Hbv3 Series Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hbv3-series-overview.md
Title: HBv3-series VM overview, architecture, topology - Azure Virtual Machines | Microsoft Docs
+ Title: HBv3-series virtual machine (VM) overview, architecture, topology - Azure Virtual Machines | Microsoft Docs
description: Learn about the HBv3-series VM size in Azure.
# HBv3-series virtual machine overview
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
- **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets An [HBv3-series](hbv3-series.md) server features 2 * 64-core EPYC 7V73X CPUs for a total of 128 physical "Zen3" cores with AMD 3D V-Cache. Simultaneous Multithreading (SMT) is disabled on HBv3. These 128 cores are divided into 16 sections (8 per socket), each section containing 8 processor cores with uniform access to a 96 MB L3 cache. Azure HBv3 servers also run the following AMD BIOS settings:
NUMA domains within VM OS = 4
C-states = Enabled ```
-As a result, the server boots with 4 NUMA domains (2 per socket) each 32 cores in size. Each NUMA has direct access to 4 channels of physical DRAM operating at 3200 MT/s.
+As a result, the server boots with 4 NUMA domains (2 per socket). Each domain is 32 cores in size. Each NUMA domain has direct access to 4 channels of physical DRAM operating at 3,200 MT/s.
To provide room for the Azure hypervisor to operate without interfering with the VM, we reserve 8 physical cores per server.
When paired in a striped array, the NVMe SSD provides up to 7 GB/s reads and 3 G
| MPI Support | HPC-X, Intel MPI, OpenMPI, MVAPICH2, MPICH |
| Additional Frameworks | UCX, libfabric, PGAS |
| Azure Storage Support | Standard and Premium Disks (maximum 32 disks) |
-| OS Support for SRIOV RDMA | CentOS/RHEL 7.9+, Ubuntu 18.04+, SLES 15.4, WinServer 2016+ |
-| Recommended OS for Performance | CentOS 8.1, Windows Server 2019+
+| OS Support for SRIOV RDMA | RHEL 7.9+, Ubuntu 18.04+, SLES 15.4, WinServer 2016+ |
+| Recommended OS for Performance | Windows Server 2019+
| Orchestrator Support | Azure CycleCloud, Azure Batch, AKS; [cluster configuration options](sizes-hpc.md#cluster-configuration-options) | > [!NOTE]
virtual-machines Imaging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/imaging.md
# Bringing and creating Linux images in Azure
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
- **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets This overview covers the basic concepts around imaging and how to successfully build and use Linux images in Azure. Before you bring a custom image to Azure, you need to be aware of the types and options available to you.
-This article will talk through the image decision points and requirements as well as explain key concepts so that you can follow this and be able to create your own custom images to your specification.
+This article talks through the image decision points and requirements, and explains key concepts, so that you can create your own custom images to your specification.
## Difference between managed disks and images
Azure images can be made up of multiple OS disks and data disks. When you use a
## Generalized and specialized
-Azure offers two main image types, generalized and specialized. The terms generalized and specialized are originally Windows terms which migrated in to Azure. These types define how the platform will handle the VM when it turns it on. Both types have advantages, disadvantages, and prerequisites. Before you get started, you need to know what image type you will need. Below summarizes the scenarios and type you would need to choose:
+Azure offers two main image types: generalized and specialized. The terms generalized and specialized are originally Windows terms that migrated into Azure. These types define how the platform handles the VM when it turns it on. Both types have advantages, disadvantages, and prerequisites. Before you get started, you need to know what image type you need. The following table summarizes the scenarios and the type you would need to choose:
| Scenario | Image type | Storage options |
| - |:-:| :-:|
| Create an image that can be configured for use by multiple VMs. You can set the hostname, add an admin user, and perform other tasks during first boot. | Generalized | Azure Compute Gallery or stand-alone managed images |
| Create an image from a VM snapshot or a backup. | Specialized | Azure Compute Gallery or a managed disk |
-| Quickly create an image that does not need any configuration for creating multiple VMs. |Specialized |Azure Compute Gallery |
+| Quickly create an image that doesn't need any configuration for creating multiple VMs. |Specialized |Azure Compute Gallery |
### Generalized images
-A generalized image is an image that requires setup to be completed on first boot. For example, on first boot you set the hostname, admin user, and other VM-specific configurations. This is useful when you want the image to be reused multiple times and when you want to pass in parameters during creation. If the generalized image contains the Azure agent, the agent will process the parameters and signal back to the platform that the initial configuration has completed. This process is called [provisioning](./provisioning.md).
+A generalized image is an image that requires setup to be completed on first boot. For example, on first boot you set the hostname, admin user, and other VM-specific configurations. This approach is useful when you want the image to be reused multiple times and when you want to pass in parameters during creation. If the generalized image contains the Azure agent, the agent processes the parameters and signals back to the platform that the initial configuration has completed. This process is called [provisioning](./provisioning.md).
Provisioning requires that a provisioner is included in the image. There are two provisioners: - [Azure Linux Agent](../extensions/agent-linux.md)
These are [prerequisites](./create-upload-generic.md) for creating an image.
### Specialized images
-These are images that are completely configured and don't require VM or special parameters. The platform will just turn the VM on and you will need to handle uniqueness within the VM, like setting a hostname, to avoid DNS conflicts on the same VNET.
+Specialized images are completely configured and don't require provisioning or special parameters. The platform just turns on the VM, and you need to handle uniqueness within the VM, like setting a hostname, to avoid DNS conflicts on the same VNET.
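As one hedged example of handling that uniqueness on first boot, you can give each VM its own hostname; the name here is a placeholder:

```bash
# Set a unique hostname to avoid DNS conflicts on the shared virtual network.
sudo hostnamectl set-hostname myvm-unique-01
```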
Provisioning agents aren't required for these images, however you may want to have extension handling capabilities. You can install the Linux Agent but disable the provisioning option. Even though you don't need a provisioning agent, the image must fulfill [prerequisites](./create-upload-generic.md) for Azure Images.
Azure supports Hyper-V Generation 1 (Gen1) and Generation 2 (Gen2). Gen2 is the
If you still need to create your own image, ensure it meets the [image prerequisites](./create-upload-generic.md) and upload to Azure. Distribution specific requirements: -- [CentOS-based Distributions](create-upload-centos.md) - [Debian Linux](debian-create-upload-vhd.md) - [Flatcar Container Linux](flatcar-create-upload-vhd.md) - [FreeBSD](freebsd-intro-on-azure.md)
virtual-machines Tutorial Manage Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/tutorial-manage-vm.md
# Tutorial: Create and Manage Linux VMs with the Azure CLI
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
- **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets Azure virtual machines provide a fully configurable and flexible computing environment. This tutorial covers basic Azure virtual machine deployment items such as selecting a VM size, selecting a VM image, and deploying a VM. You learn how to:
exit
## Understand VM images
-The Azure Marketplace includes many images that can be used to create VMs. In the previous steps, a virtual machine was created using an Ubuntu image. In this step, the Azure CLI is used to search the marketplace for a CentOS image, which is then used to deploy a second virtual machine.
+The Azure Marketplace includes many images that can be used to create VMs. In the previous steps, a virtual machine was created using an Ubuntu image. In this step, the Azure CLI is used to search the marketplace for an Ubuntu image, which is then used to deploy a second virtual machine.
To see a list of the most commonly used images, use the [az vm image list](/cli/azure/vm/image) command.
The command output returns the most popular VM images on Azure.
```output
Architecture Offer Publisher Sku Urn UrnAlias Version
-- - - - --
-x64 CentOS OpenLogic 7.5 OpenLogic:CentOS:7.5:latest CentOS latest
x64 debian-10 Debian 10 Debian:debian-10:10:latest Debian latest x64 flatcar-container-linux-free kinvolk stable kinvolk:flatcar-container-linux-free:stable:latest Flatcar latest x64 opensuse-leap-15-3 SUSE gen2 SUSE:opensuse-leap-15-3:gen2:latest openSUSE-Leap latest x64 RHEL RedHat 7-LVM RedHat:RHEL:7-LVM:latest RHEL latest x64 sles-15-sp3 SUSE gen2 SUSE:sles-15-sp3:gen2:latest SLES latest
-x64 UbuntuServer Canonical 18.04-LTS Canonical:UbuntuServer:18.04-LTS:latest UbuntuLTS latest
+x64 0001-com-ubuntu-server-jammy Canonical 22_04-lts-gen2 Canonical:0001-com-ubuntu-server-jammy:22_04-lts-gen2:latest Ubuntu2204 latest
x64 WindowsServer MicrosoftWindowsServer 2022-Datacenter MicrosoftWindowsServer:WindowsServer:2022-Datacenter:latest Win2022Datacenter latest x64 WindowsServer MicrosoftWindowsServer 2022-datacenter-azure-edition-core MicrosoftWindowsServer:WindowsServer:2022-datacenter-azure-edition-core:latest Win2022AzureEditionCore latest x64 WindowsServer MicrosoftWindowsServer 2019-Datacenter MicrosoftWindowsServer:WindowsServer:2019-Datacenter:latest Win2019Datacenter latest
x64 WindowsServer MicrosoftWindowsServer 2012-Datac
x64 WindowsServer MicrosoftWindowsServer 2008-R2-SP1 MicrosoftWindowsServer:WindowsServer:2008-R2-SP1:latest Win2008R2SP1 latest ```
-A full list can be seen by adding the `--all` parameter. The image list can also be filtered by `--publisher` or `--offer`. In this example, the list is filtered for all images, published by OpenLogic, with an offer that matches *CentOS*.
+A full list can be seen by adding the `--all` parameter. The image list can also be filtered by `--publisher` or `--offer`. In this example, the list is filtered for all images, published by Canonical, with an offer that matches *0001-com-ubuntu-server-jammy*.
```azurecli-interactive
-az vm image list --offer CentOS --publisher OpenLogic --all --output table
+az vm image list --offer 0001-com-ubuntu-server-jammy --publisher Canonical --all --output table
``` Example partial output: ```output
-Architecture Offer Publisher Sku Urn Version
- -- --
-x64 CentOS OpenLogic 8_2 OpenLogic:CentOS:8_2:8.2.2020111800 8.2.2020111800
-x64 CentOS OpenLogic 8_2-gen2 OpenLogic:CentOS:8_2-gen2:8.2.2020062401 8.2.2020062401
-x64 CentOS OpenLogic 8_2-gen2 OpenLogic:CentOS:8_2-gen2:8.2.2020100601 8.2.2020100601
-x64 CentOS OpenLogic 8_2-gen2 OpenLogic:CentOS:8_2-gen2:8.2.2020111801 8.2.2020111801
-x64 CentOS OpenLogic 8_3 OpenLogic:CentOS:8_3:8.3.2020120900 8.3.2020120900
-x64 CentOS OpenLogic 8_3 OpenLogic:CentOS:8_3:8.3.2021020400 8.3.2021020400
-x64 CentOS OpenLogic 8_3-gen2 OpenLogic:CentOS:8_3-gen2:8.3.2020120901 8.3.2020120901
-x64 CentOS OpenLogic 8_3-gen2 OpenLogic:CentOS:8_3-gen2:8.3.2021020401 8.3.2021020401
-x64 CentOS OpenLogic 8_4 OpenLogic:CentOS:8_4:8.4.2021071900 8.4.2021071900
-x64 CentOS OpenLogic 8_4-gen2 OpenLogic:CentOS:8_4-gen2:8.4.2021071901 8.4.2021071901
-x64 CentOS OpenLogic 8_5 OpenLogic:CentOS:8_5:8.5.2022012100 8.5.2022012100
-x64 CentOS OpenLogic 8_5 OpenLogic:CentOS:8_5:8.5.2022101800 8.5.2022101800
-x64 CentOS OpenLogic 8_5-gen2 OpenLogic:CentOS:8_5-gen2:8.5.2022012101 8.5.2022012101
+Architecture Offer Publisher Sku Urn Version
+-- --
+x64 0001-com-ubuntu-server-jammy Canonical 22_04-lts Canonical:0001-com-ubuntu-server-jammy:22_04-lts:22.04.202204200 22.04.202204200
+x64 0001-com-ubuntu-server-jammy Canonical 22_04-lts Canonical:0001-com-ubuntu-server-jammy:22_04-lts:22.04.202205060 22.04.202205060
+x64 0001-com-ubuntu-server-jammy Canonical 22_04-lts Canonical:0001-com-ubuntu-server-jammy:22_04-lts:22.04.202205280 22.04.202205280
+x64 0001-com-ubuntu-server-jammy Canonical 22_04-lts Canonical:0001-com-ubuntu-server-jammy:22_04-lts:22.04.202206040 22.04.202206040
+x64 0001-com-ubuntu-server-jammy Canonical 22_04-lts Canonical:0001-com-ubuntu-server-jammy:22_04-lts:22.04.202206090 22.04.202206090
+x64 0001-com-ubuntu-server-jammy Canonical 22_04-lts Canonical:0001-com-ubuntu-server-jammy:22_04-lts:22.04.202206160 22.04.202206160
+x64 0001-com-ubuntu-server-jammy Canonical 22_04-lts Canonical:0001-com-ubuntu-server-jammy:22_04-lts:22.04.202206220 22.04.202206220
+x64 0001-com-ubuntu-server-jammy Canonical 22_04-lts Canonical:0001-com-ubuntu-server-jammy:22_04-lts:22.04.202207060 22.04.202207060
```
x64 CentOS OpenLogic 8_5-gen2 OpenLog
> [!NOTE] > Canonical has changed the **Offer** names they use for the most recent versions. Before Ubuntu 20.04, the **Offer** name is UbuntuServer. For Ubuntu 20.04 the **Offer** name is `0001-com-ubuntu-server-focal` and for Ubuntu 22.04 it's `0001-com-ubuntu-server-jammy`.
-To deploy a VM using a specific image, take note of the value in the *Urn* column, which consists of the publisher, offer, SKU, and optionally a version number to [identify](cli-ps-findimage.md#terminology) the image. When specifying the image, the image version number can be replaced with `latest`, which selects the latest version of the distribution. In this example, the `--image` parameter is used to specify the latest version of a CentOS 8.5.
+To deploy a VM using a specific image, take note of the value in the *Urn* column, which consists of the publisher, offer, SKU, and optionally a version number to [identify](cli-ps-findimage.md#terminology) the image. When specifying the image, the image version number can be replaced with `latest`, which selects the latest version of the distribution. In this example, the `--image` parameter is used to specify the latest version of an Ubuntu 22.04 image.
```azurecli-interactive
-az vm create --resource-group myResourceGroupVM --name myVM2 --image OpenLogic:CentOS:8_5:latest --generate-ssh-keys
+az vm create --resource-group myResourceGroupVM --name myVM2 --image Canonical:0001-com-ubuntu-server-jammy:22_04-lts:latest --generate-ssh-keys
``` ## Understand VM sizes
In this tutorial, you learned about basic VM creation and management such as how
Advance to the next tutorial to learn about VM disks. > [!div class="nextstepaction"]
-> [Create and Manage VM disks](./tutorial-manage-disks.md)
+> [Create and Manage VM disks](./tutorial-manage-disks.md)
virtual-machines Using Cloud Init https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/using-cloud-init.md
# cloud-init support for virtual machines in Azure
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and plan accordingly.
->
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets This article explains the support that exists for [cloud-init](https://cloudinit.readthedocs.io) to configure a virtual machine (VM) or Virtual Machine Scale Sets at provisioning time in Azure. These cloud-init configurations are run on first boot once the resources have been provisioned by Azure.
-VM Provisioning is the process where the Azure will pass down your VM Create parameter values, such as hostname, username, and password, and make them available to the VM as it boots up. A 'provisioning agent' will consume those values, configure the VM, and report back when completed.
+VM Provisioning is the process where Azure passes down your VM Create parameter values, such as hostname, username, and password, and makes them available to the VM as it boots up. A 'provisioning agent' consumes those values, configures the VM, and reports back when completed.
Azure supports two provisioning agents [cloud-init](https://cloudinit.readthedocs.io), and the [Azure Linux Agent (WALA)](../extensions/agent-linux.md).
There are two stages to making cloud-init available to the supported Linux distr
|RedHat 8 |RHEL |8.1, 8.2, 8_3, 8_4 |latest |yes | yes |
|RedHat 9 |RHEL |9_0, 9_1 |latest |yes | yes |
-* All other RedHat SKUs starting from RHEL 7 (version 7.7) and RHEL 8 (version 8.1) including both Gen1 and Gen2 images are provisioned using cloud-init. Cloud-init is not supported on RHEL 6.
--
-### CentOS
- Publisher / Version| Offer | SKU | Version | image cloud-init ready | cloud-init package support on Azure|
-|: |: |: |: |: |: |
-|OpenLogic 7 |CentOS |7.7, 7.8, 7.9 |latest |yes | yes |
-|OpenLogic 8 |CentOS |8.1, 8.2, 8.3 |latest |yes | yes |
-
-* All other CentOS SKUs starting from CentOS 7 (version 7.7) and CentOS 8 (version 8.1) including both Gen1 and Gen2 images are provisioned using cloud-init. CentOS 6.10, 7.4, 7.5, and 7.6 images don't support cloud-init.
+* All other RedHat SKUs starting from RHEL 7 (version 7.7) and RHEL 8 (version 8.1) including both Gen1 and Gen2 images are provisioned using cloud-init. Cloud-init isn't supported on RHEL 6.
### Oracle
There are two stages to making cloud-init available to the supported Linux distr
* All other Oracle SKUs starting from Oracle 7 (version 7.7) and Oracle 8 (version 8.1) including both Gen1 and Gen2 images are provisioned using cloud-init. - ### SUSE SLES Publisher / Version| Offer | SKU | Version | image cloud-init ready | cloud-init package support on Azure|
There are two stages to making cloud-init available to the supported Linux distr
|SUSE 15 |SLES (SUSE Linux Enterprise Server) |sp1, sp2, sp3 |latest |yes | yes |
|SUSE 12 |SLES (SUSE Linux Enterprise Server) |sp5 |latest |yes | yes |

### Debian
| Publisher / Version | Offer | SKU | Version | image cloud-init ready | cloud-init package support on Azure|
|: |: |: |: |: |: |
There are two stages to making cloud-init available to the supported Linux distr
| debian-10 |Debian |10-cloudinit-gen2 |Debian:debian-10:10-cloudinit-gen2:0.0.991 |yes |yes |
| debian-10 |Debian |10-cloudinit-gen2 |Debian:debian-10:10-cloudinit-gen2:0.0.999 |yes |yes |
-Currently Azure Stack will support the provisioning of cloud-init enabled images.
+Currently, Azure Stack supports the provisioning of cloud-init enabled images.
## What is the difference between cloud-init and the Linux Agent (WALA)? WALA is an Azure platform-specific agent used to provision and configure VMs, and handle [Azure extensions](../extensions/features-linux.md). We're enhancing the task of configuring VMs to use cloud-init instead of the Linux Agent in order to allow existing cloud-init customers to use their current cloud-init scripts, or new customers to take advantage of the rich cloud-init configuration functionality. If you have existing investments in cloud-init scripts for configuring Linux systems, there are **no additional settings required** to enable cloud-init process them.
-cloud-init cannot process Azure extensions, so WALA is still required in the image to process extensions, but will need to have its provisioning code disabled, for endorsed Linux distros images that are being converted to provision by cloud-init, they will have WALA installed, and setup correctly.
+cloud-init can't process Azure extensions, so WALA is still required in the image to process extensions, but its provisioning code must be disabled. Endorsed Linux distro images that are being converted to provision by cloud-init have WALA installed and set up correctly.
When creating a VM, if you don't include the Azure CLI `--custom-data` switch at provisioning time, cloud-init or WALA takes the minimal VM provisioning parameters required to provision the VM and complete the deployment with the defaults. If you reference the cloud-init configuration with the `--custom-data` switch, whatever is contained in your custom data will be available to cloud-init when the VM boots.
-cloud-init configurations applied to VMs do not have time constraints and will not cause a deployment to fail by timing out. This isn't true for WALA, if you change the WALA defaults to process custom-data, it can't exceed the total VM provisioning time allowance of 40 minutes, if so, the VM Create will fail.
+cloud-init configurations applied to VMs don't have time constraints and won't cause a deployment to fail by timing out. This isn't true for WALA: if you change the WALA defaults to process custom-data, it can't exceed the total VM provisioning time allowance of 40 minutes. If it does, the VM Create fails.
## cloud-init VM provisioning without a UDF driver Beginning with cloud-init 21.2, you can use cloud-init to provision a VM in Azure without a UDF driver. If a UDF driver isn't available in the image, cloud-init uses the metadata that's available in the Azure Instance Metadata Service to provision the VM. This option works only for SSH key and [user data](../user-data.md). To pass in a password or custom data to a VM during provisioning, you must use a UDF driver. ## Deploying a cloud-init enabled Virtual Machine
-Deploying a cloud-init enabled virtual machine is as simple as referencing a cloud-init enabled distribution during deployment. Linux distribution maintainers have to choose to enable and integrate cloud-init into their base Azure published images. Once you've confirmed the image you want to deploy is cloud-init enabled, you can use the Azure CLI to deploy the image.
+Deploying a cloud-init enabled virtual machine is as simple as referencing a cloud-init enabled distribution during deployment. Linux distribution maintainers have to choose to enable and integrate cloud-init into their base Azure published images. Once you confirm the image you want to deploy is cloud-init enabled, you can use the Azure CLI to deploy the image.
The first step in deploying this image is to create a resource group with the [az group create](/cli/azure/group) command. An Azure resource group is a logical container into which Azure resources are deployed and managed.
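For example, a minimal sketch of that step; the group name and location are placeholder values:

```bash
# Create a resource group to hold the VM and its related resources.
az group create --name myResourceGroup --location eastus
```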
Exit the file and save the file according to the editor. Verify file name on exi
The final step is to create a VM with the [az vm create](/cli/azure/vm) command.
-The following example creates a VM named `centos74` and creates SSH keys if they don't already exist in a default key location. To use a specific set of keys, use the `--ssh-key-value` option. Use the `--custom-data` parameter to pass in your cloud-init config file. Provide the full path to the *cloud-init.txt* config if you saved the file outside of your present working directory.
+The following example creates a VM named `ubuntu2204` and creates SSH keys if they don't already exist in a default key location. To use a specific set of keys, use the `--ssh-key-value` option. Use the `--custom-data` parameter to pass in your cloud-init config file. Provide the full path to the *cloud-init.txt* config if you saved the file outside of your present working directory.
```azurecli-interactive az vm create \ --resource-group myResourceGroup \
- --name centos74 \
- --image OpenLogic:CentOS-CI:7-CI:latest \
+ --name ubuntu2204 \
+ --image Canonical:0001-com-ubuntu-server-jammy:22_04-lts:latest \
--custom-data cloud-init.txt \ --generate-ssh-keys ```
-When the VM has been created, the Azure CLI shows information specific to your deployment. Take note of the `publicIpAddress`. This address is used to access the VM. It takes some time for the VM to be created, the packages to install, and the app to start. There are background tasks that continue to run after the Azure CLI returns you to the prompt. You can SSH into the VM and use the steps outlined in the Troubleshooting section to view the cloud-init logs.
+When the VM is created, the Azure CLI shows information specific to your deployment. Take note of the `publicIpAddress`. This address is used to access the VM. It takes some time for the VM to be created, the packages to install, and the app to start. There are background tasks that continue to run after the Azure CLI returns you to the prompt. You can SSH into the VM and use the steps outlined in the Troubleshooting section to view the cloud-init logs.
You can also deploy a cloud-init enabled VM by passing the [parameters in ARM template](../../azure-resource-manager/templates/deploy-cli.md#inline-parameters). ## Troubleshooting cloud-init
-Once the VM has been provisioned, cloud-init will run through all the modules and script defined in `--custom-data` in order to configure the VM. If you need to troubleshoot any errors or omissions from the configuration, you need to search for the module name (`disk_setup` or `runcmd` for example) in the cloud-init log - located in **/var/log/cloud-init.log**.
+Once the VM has been provisioned, cloud-init runs through all the modules and scripts defined in `--custom-data` to configure the VM. If you need to troubleshoot any errors or omissions from the configuration, you need to search for the module name (`disk_setup` or `runcmd`, for example) in the cloud-init log, located in **/var/log/cloud-init.log**.
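For example, a quick way to pull one module's entries out of the log; the module name here is illustrative:

```bash
# Show all log lines that mention the runcmd module, with line numbers.
sudo grep -n 'runcmd' /var/log/cloud-init.log
```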
> [!NOTE] > Not every module failure results in a fatal cloud-init overall configuration failure. For example, using the `runcmd` module, if the script fails, cloud-init will still report provisioning succeeded because the runcmd module executed.
-For more details of cloud-init logging, see the [cloud-init documentation](https://cloudinit.readthedocs.io/en/latest/development/logging.html)
+For more information about cloud-init logging, see the [cloud-init documentation](https://cloudinit.readthedocs.io/en/latest/development/logging.html).
## Telemetry cloud-init collects usage data and sends it to Microsoft to help improve our products and services. Telemetry is only collected during the provisioning process (first boot of the VM). The data collected helps us investigate provisioning failures and monitor performance and reliability. Data collected doesn't include any identifiers (personal identifiers). Read our [privacy statement](https://go.microsoft.com/fwlink/?LinkId=521839) to learn more. Some examples of telemetry being collected are (this isn't an exhaustive list): OS-related information (cloud-init version, distro version, kernel version), performance metrics of essential VM provisioning actions (time to obtain DHCP lease, time to retrieve metadata necessary to configure the VM, etc.), cloud-init log, and dmesg log.
-Telemetry collection is currently enabled for most of our marketplace images that use cloud-init. It is enabled by specifying KVP telemetry reporter for cloud-init. In most Azure Marketplace images, this configuration can be found in the file /etc/cloud/cloud.cfg.d/10-azure-kvp.cfg. Removing this file during image preparation will disable telemetry collection for any VM created from this image.
+Telemetry collection is currently enabled for most of our marketplace images that use cloud-init. It's enabled by specifying the KVP telemetry reporter for cloud-init. In most Azure Marketplace images, this configuration can be found in the file /etc/cloud/cloud.cfg.d/10-azure-kvp.cfg. Removing this file during image preparation disables telemetry collection for any VM created from this image.
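As a sketch of that image-preparation step (run inside the image being prepared, not on a VM whose telemetry you want to keep):

```bash
# Removing the KVP reporter config disables cloud-init telemetry
# for any VM created from this image.
sudo rm /etc/cloud/cloud.cfg.d/10-azure-kvp.cfg
```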
Sample content of 10-azure-kvp.cfg ```
virtual-machines Setup Mpi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/setup-mpi.md
# Set up Message Passing Interface for HPC
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
- **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-The [Message Passing Interface (MPI)](https://en.wikipedia.org/wiki/Message_Passing_Interface) is an open library and de-facto standard for distributed memory parallelization. It is commonly used across many HPC workloads. HPC workloads on the [RDMA capable](sizes-hpc.md#rdma-capable-instances) [HB-series](sizes-hpc.md) and [N-series](sizes-gpu.md) VMs can use MPI to communicate over the low latency and high bandwidth InfiniBand network.
+The [Message Passing Interface (MPI)](https://en.wikipedia.org/wiki/Message_Passing_Interface) is an open library and de facto standard for distributed memory parallelization. It's commonly used across many HPC workloads. HPC workloads on the [RDMA capable](sizes-hpc.md#rdma-capable-instances) [HB-series](sizes-hpc.md) and [N-series](sizes-gpu.md) VMs can use MPI to communicate over the low latency and high bandwidth InfiniBand network.
- The SR-IOV enabled VM sizes on Azure allow almost any flavor of MPI to be used with Mellanox OFED. - On non-SR-IOV enabled VMs, supported MPI implementations use the Microsoft Network Direct (ND) interface to communicate between VMs. Hence, only Microsoft MPI (MS-MPI) 2012 R2 or later and Intel MPI 5.x versions are supported. Later versions (2017, 2018) of the Intel MPI runtime library may or may not be compatible with the Azure RDMA drivers.
-For SR-IOV enabled [RDMA capable VMs](sizes-hpc.md#rdma-capable-instances), [Ubuntu-HPC VM images](configure.md#ubuntu-hpc-vm-images) and [AlmaLinux-HPC VM images](configure.md#almalinux-hpc-vm-images) are suitable. These VM images come optimized and pre-loaded with the OFED drivers for RDMA and various commonly used MPI libraries and scientific computing packages and are the easiest way to get started.
+For SR-IOV enabled [RDMA capable VMs](sizes-hpc.md#rdma-capable-instances), [Ubuntu-HPC VM images](configure.md#ubuntu-hpc-vm-images) and [AlmaLinux-HPC VM images](configure.md#almalinux-hpc-vm-images) are suitable. These VM images come optimized and preloaded with the OFED drivers for RDMA and various commonly used MPI libraries and scientific computing packages and are the easiest way to get started.
-Though the examples here are for RHEL/CentOS, but the steps are general and can be used for any compatible Linux operating system such as Ubuntu (18.04, 20.04, 22.04) and SLES (12 SP4 and 15 SP4). More examples for setting up other MPI implementations on others distros is on the [azhpc-images repo](https://github.com/Azure/azhpc-images/blob/master/ubuntu/ubuntu-18.x/ubuntu-18.04-hpc/install_mpis.sh).
+Though the examples here are for RHEL, the steps are general and can be used for any compatible Linux operating system such as Ubuntu (18.04, 20.04, 22.04) and SLES (12 SP4 and 15 SP4). More examples for setting up other MPI implementations on other distros are on the [azhpc-images repo](https://github.com/Azure/azhpc-images/blob/master/ubuntu/ubuntu-18.x/ubuntu-18.04-hpc/install_mpis.sh).
> [!NOTE] > Running MPI jobs on SR-IOV enabled VMs with certain MPI libraries (such as Platform MPI) may require setting up of partition keys (p-keys) across a tenant for isolation and security. Follow the steps in the [Discover partition keys](#discover-partition-keys) section for details on determining the p-key values and setting them correctly for an MPI job with that MPI library.
${HPCX_PATH}mpirun -np 2 --map-by ppr:2:node -x UCX_TLS=rc ${HPCX_PATH}/ompi/tes
### Optimizing MPI collectives
-MPI Collective communication primitives offer a flexible, portable way to implement group communication operations. They are widely used across various scientific parallel applications and have a significant impact on the overall application performance. Refer to the [TechCommunity article](https://techcommunity.microsoft.com/t5/azure-compute/optimizing-mpi-collective-communication-using-hpc-x-on-azurehpc/ba-p/1356740) for details on configuration parameters to optimize collective communication performance using HPC-X and HCOLL library for collective communication.
+MPI Collective communication primitives offer a flexible, portable way to implement group communication operations. They're widely used across various scientific parallel applications and have a significant impact on the overall application performance. Refer to the [TechCommunity article](https://techcommunity.microsoft.com/t5/azure-compute/optimizing-mpi-collective-communication-using-hpc-x-on-azurehpc/ba-p/1356740) for details on configuration parameters to optimize collective communication performance using HPC-X and HCOLL library for collective communication.
As an example, if you suspect your tightly coupled MPI application is doing an excessive amount of collective communication, you can try enabling hierarchical collectives (HCOLL). To enable those features, use the following parameters. ```bash
As an example, if you suspect your tightly coupled MPI application is doing an e
## OpenMPI
-Install UCX as described above. HCOLL is part of the [HPC-X software toolkit](https://www.mellanox.com/products/hpc-x-toolkit) and does not requires special installation.
+Install UCX as described above. HCOLL is part of the [HPC-X software toolkit](https://www.mellanox.com/products/hpc-x-toolkit) and doesn't require special installation.
OpenMPI can be installed from the packages available in the repo.
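On RHEL-family systems, that typically means something like the following; exact package names can vary by release:

```bash
# Install OpenMPI and its development headers from the distro repos.
sudo yum install -y openmpi openmpi-devel
```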
sudo ./platform_mpi-09.01.04.03r-ce.bin
Follow the installation process.
-The following commands are examples of running MPI pingpong and allreduce using Platform MPI on HBv3 VMs using CentOS-HPC 7.6, 7.8, and 8.1 VM images.
-
-```bash
-/opt/ibm/platform_mpi/bin/mpirun -hostlist 10.0.0.8:1,10.0.0.9:1 -np 2 -e MPI_IB_PKEY=0x800a -ibv /home/jijos/mpi-benchmarks/IMB-MPI1 pingpong
-/opt/ibm/platform_mpi/bin/mpirun -hostlist 10.0.0.8:120,10.0.0.9:120 -np 240 -e MPI_IB_PKEY=0x800a -ibv /home/jijos/mpi-benchmarks/IMB-MPI1 allreduce -npmin 240
-```
-- ## MPICH Install UCX as described above. Build MPICH.
virtual-machines Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/overview.md
To learn more about a specific size family or series, click the tab for that fam
GPU optimized VM sizes are specialized virtual machines available with single, multiple, or fractional GPUs. These sizes are designed for compute-intensive, graphics-intensive, and visualization workloads. #### [Family list](#tab/gpusizelist)
-List of storage optimized VM size families:
+List of GPU optimized VM size families:
| Family | Workloads | Series List |
|-|-|-|
virtual-machines Cli Ps Findimage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/cli-ps-findimage.md
# Find and use Azure Marketplace VM images with Azure PowerShell
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
- **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
$domainNameLabel = "d" + $rgname
$securePassword = <Password> | ConvertTo-SecureString -AsPlainText -Force $username = <Username> $credential = New-Object System.Management.Automation.PSCredential ($username, $securePassword)
-New-AzVM -ResourceGroupName $rgname -Location $location -Name $vmName -image CentOS85Gen285Gen2 -Credential $credential -DomainNameLabel $domainNameLabel
+New-AzVM -ResourceGroupName $rgname -Location $location -Name $vmName -image Ubuntu2204 -Credential $credential -DomainNameLabel $domainNameLabel
```
The Linux image alias names and their details are:
```output
Alias Architecture Offer Publisher Sku Urn Version
-- -- - - -
-CentOS85Gen2 x64 CentOS OpenLogic 8_5-gen2 OpenLogic:CentOS:8_5-gen2:latest latest
Debian11 x64 Debian-11 Debian 11-backports-gen2 Debian:debian-11:11-backports-gen2:latest latest FlatcarLinuxFreeGen2 x64 flatcar-container-linux-free kinvolk stable kinvolk:flatcar-container-linux-free:stable:latest latest OpenSuseLeap154Gen2 x64 opensuse-leap-15-4 SUSE gen2 SUSE:opensuse-leap-15-4:gen2:latest latest
virtual-network Add Dual Stack Ipv6 Vm Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/add-dual-stack-ipv6-vm-portal.md
New-AzPublicIpAddress @ip6
# [Azure portal](#tab/azureportal)
-The virtual machine must be stopped to add the IPv6 configuration to the existing virtual machine. You stop the virtual machine and add the IPv6 configuration to the existing virtual machine's network interface.
+In this section, you configure your virtual machine's network interface to include both a private and a public IPv6 address.
1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. 2. Select **myVM** or your existing virtual machine name.
-3. Stop **myVM**.
+3. Select **Networking** in **Settings**.
-4. Select **Networking** in **Settings**.
+4. Select your network interface name next to **Network Interface:**. In this example, the network interface is named **myvm404**.
-5. Select your network interface name next to **Network Interface:**. In this example, the network interface is named **myvm404**.
+5. Select **IP configurations** in **Settings** of the network interface.
-6. Select **IP configurations** in **Settings** of the network interface.
+6. In **IP configurations**, select **+ Add**.
-7. In **IP configurations**, select **+ Add**.
-
-8. Enter or select the following information in **Add IP configuration**.
+7. Enter or select the following information in **Add IP configuration**.
| Setting | Value |
| - | -- |
The virtual machine must be stopped to add the IPv6 configuration to the existin
8. Select **OK**.
-10. Start **myVM**.
- # [Azure CLI](#tab/azurecli/) Use [az network nic ip-config create](/cli/azure/network/nic/ip-config#az-network-nic-ip-config-create) to create the IPv6 configuration for the network interface. The **`--nic-name`** used in the example is **myvm569**. Replace this value with the name of the network interface in your virtual machine.
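A hedged sketch of that command follows; the virtual network, subnet, and public IP names are placeholders, not values defined in this article:

```bash
# Add an IPv6 configuration (private plus public address) to the existing NIC.
az network nic ip-config create \
    --resource-group myResourceGroup \
    --name Ipv6config \
    --nic-name myvm569 \
    --private-ip-address-version IPv6 \
    --vnet-name myVNet \
    --subnet mySubnet \
    --public-ip-address myPublicIP-IPv6
```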
For more information about IPv6 and IP addresses in Azure, see:
- [Overview of IPv6 for Azure Virtual Network.](ipv6-overview.md) -- [What is Azure Virtual Network IP Services?](ip-services-overview.md)
+- [What is Azure Virtual Network IP Services?](ip-services-overview.md)
virtual-wan Route Maps About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/route-maps-about.md
Before using Route-maps, take into consideration the following limitations:
* During Preview, hubs that are using Route-maps must be deployed in their own virtual WANs. * The Route-maps feature is only available for virtual hubs running on the Virtual Machine Scale Sets infrastructure. For more information, see the [FAQ](virtual-wan-faq.md). * When using Route-maps to summarize a set of routes, the hub router strips the *BGP Community* and *AS-PATH* attributes from those routes. This applies to both inbound and outbound routes.
-* When adding ASNs to the AS-PAT, only use the Private ASN range 64512 - 65535, but don't use ASN's Reseverd by Azure:
+* When adding ASNs to the AS-PATH, only use the private ASN range 64512 - 65535, and don't use the ASNs reserved by Azure:
* Public ASNs: 8074, 8075, 12076 * Private ASNs: 65515, 65517, 65518, 65519, 65520 * When using Route-maps, do not remove the Azure BGP communities:
vpn-gateway Point To Site Entra Register Custom App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-entra-register-custom-app.md
+
+ Title: Create custom app ID for P2S VPN Microsoft Entra ID authentication
+
+description: Learn how to create a custom audience App ID or upgrade an existing custom App ID to the new Microsoft-registered Azure VPN Client app values.
+++ Last updated : 08/05/2024+++
+# Create a custom audience app ID for P2S VPN Microsoft Entra ID authentication
+
+The steps in this article help you create a Microsoft Entra ID custom App ID (custom audience) for the new Microsoft-registered Azure VPN Client for point-to-site (P2S) connections. You can also update your existing tenant to [change to the new Microsoft-registered Azure VPN Client app](#change) from the previous Azure VPN Client app.
+
+If you need to create a custom audience using a value other than the Azure Public value `c632b3df-fb67-4d84-bdcf-b95ad541b5c8`, you can replace this value with the value you require. For more information and to see the available audience ID values for the Azure VPN Client app, see [Microsoft Entra ID authentication for P2S](point-to-site-about.md#microsoft-entra-id-authentication).
+
+This article provides high-level steps. The screenshots might be different than what you experience in the Azure portal, but the settings are the same. For more information, see [Quickstart: Register an application](/entra/identity-platform/quickstart-register-app).
+
+## Prerequisites
+
+This article assumes that you already have a Microsoft Entra tenant and the permissions to create an Enterprise Application, typically the Cloud Application administrator role or higher. For more information, see [Create a new tenant in Microsoft Entra ID](/entra/fundamentals/create-new-tenant) and [Assign user roles with Microsoft Entra ID](/entra/fundamentals/users-assign-role-azure-portal).
++
+After you've completed these steps, continue to [Configure P2S VPN Gateway for Microsoft Entra ID authentication ΓÇô Microsoft-registered app](point-to-site-entra-gateway.md).
+
+## <a name="change"></a>Change to Microsoft-registered VPN client app
++
+## Next steps
+
+* [Configure P2S VPN Gateway for Microsoft Entra ID authentication ΓÇô Microsoft-registered app](point-to-site-entra-gateway.md).
+* To connect to your virtual network, you must configure the Azure VPN client on your client computers. See [Configure a VPN client for P2S VPN connections](point-to-site-entra-vpn-client-windows.md).
+* For frequently asked questions, see the **Point-to-site** section of the [VPN Gateway FAQ](vpn-gateway-vpn-faq.md#P2S).